Wednesday, September 13, 2006

Death of the Mainframe

A common topic of discussion among Cobol programmers is the impending doom of programming in a language that is obsolete and on a platform that has no future. Is the mainframe dying? I don't think so. Of course, I could just be telling myself that so I don't have to find a different career path, but I really don't think so.

One of my last college classes was Software Engineering, taught by an old Cobol dinosaur who had been involved in coding for a major school district back in the 70's. He told us that the software development cycle includes an optimization step, but that optimization isn't nearly as important now as it once was, what with the proliferation of 3+ GHz processors and gigabytes of RAM. I thought that was a load of B.S. when he said it, and everything I've seen since has confirmed it. Apparently he forgot what it's like to code a system that processes millions of records a day. At my last job I had a program that processed anywhere between 20,000 and 400,000 records a day. It had a fairly extensive editing routine to verify certain data fields, plus a few routines that could actually correct certain fields if we had a history with that client. In my first implementation, a test file of 10,000 records completed in about 10 minutes. I soon realized that might be acceptable on the 20,000-record days, but quite unacceptable on the 400,000-record days. After optimizing my code, I got the run time down to about 1 minute for those 10,000 records. If I had followed his advice, that program would have taken up to 7 hours to run on some nights. Now, at worst, it runs in under 45 minutes.
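To give a feel for the kind of change that buys that sort of speedup, here's a minimal sketch (the table, field, and paragraph names are hypothetical, not from the actual system, and IN-CLIENT-ID / OUT-ZIP-CODE stand in for fields of the input and output records): declaring an ASCENDING KEY on a sorted table lets SEARCH ALL do a binary search, where a plain sequential SEARCH would walk the whole table for every one of those 400,000 input records.

```cobol
      * Hypothetical client-history lookup, performed once per
      * input record. With ASCENDING KEY declared on the sorted
      * table, SEARCH ALL does a binary search; a sequential
      * SEARCH over the same 5000 entries scans them one by one.
       WORKING-STORAGE SECTION.
       01  CLIENT-TABLE.
           05  CLIENT-ENTRY OCCURS 5000 TIMES
                   ASCENDING KEY IS CT-CLIENT-ID
                   INDEXED BY CT-IDX.
               10  CT-CLIENT-ID     PIC X(10).
               10  CT-DEFAULT-ZIP   PIC X(5).
       01  CLIENT-FOUND-SW          PIC X VALUE 'N'.

       PROCEDURE DIVISION.
       2100-LOOKUP-CLIENT.
           MOVE 'N' TO CLIENT-FOUND-SW
           SEARCH ALL CLIENT-ENTRY
               AT END
                   CONTINUE
               WHEN CT-CLIENT-ID (CT-IDX) = IN-CLIENT-ID
                   MOVE 'Y' TO CLIENT-FOUND-SW
                   MOVE CT-DEFAULT-ZIP (CT-IDX) TO OUT-ZIP-CODE
           END-SEARCH.
```

The table has to be loaded in key order for SEARCH ALL to be valid, but you pay that sort cost once per run instead of a full scan per record.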

The coders programming for web servers have it easy in comparison. How much data does a typical web form involve? I'd guess around 450 bytes would be a high average. That gives you plenty of room for name, address, city, state, zip, country, username, password, and even a phone number or two. Remember, too, that although the web server can be serving more than one customer at a time, the program itself is processing basically one record per run. The main data file I'm working with right now has a 9500-byte record. Multiply that by about 3 million records a day and you're looking at roughly 28 GB of data in a single nightly run; processing like that would bring a normal server to its knees. That's why I don't see the mainframe dying anytime soon.
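For scale, here's a hypothetical sketch of how a fixed-length record gets to 9500 bytes (the field names and sizes are made up for illustration, not the actual layout): a little bit of header, a big OCCURS clause, and some filler. The COMP-3 field packs its 11 digits plus sign into 6 bytes, so each service line is 47 bytes.

```cobol
      * Hypothetical layout -- 90 bytes of header fields,
      * 200 occurrences of a 47-byte service line (9400 bytes),
      * and 10 bytes of filler: exactly 9500 bytes per record.
       01  DETAIL-RECORD.
           05  DR-CLIENT-ID            PIC X(10).
           05  DR-ACCOUNT-NUMBER       PIC X(20).
           05  DR-MEMBER-NAME          PIC X(60).
           05  DR-SERVICE-LINE OCCURS 200 TIMES.
               10  SL-TRAN-CODE        PIC X(4).
               10  SL-TRAN-DATE        PIC 9(8).
               10  SL-AMOUNT           PIC S9(9)V99 COMP-3.
               10  SL-DESCRIPTION      PIC X(29).
           05  DR-FILLER               PIC X(10).
```

Every record carries all 200 slots whether they're used or not, which is exactly why the daily byte count climbs so fast.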

I know there will be people who argue: "You're talking about batch processing, batch processing is a fading trend, the future is online processing." True, perhaps someday we'll figure out how to switch all of this over to online processing and have no need for batch except for reports and such. I guess I'll get more experience writing report software then.

Friday, September 08, 2006

Documentation

So, you've got an assignment to work on a project. What is the first thing you do? At my old job it would have been to find the programs that are affected by the project and start fixing them. Now, it's different.

We first create a Functional Design document. This basically lays out everything we'll need to do to implement the change: a list of programs, copybooks, sysins, JCL, procs, or anything else that will need to be changed. Then you create a Methodology to go with it. This is where it gets tricky. The Methodology is supposed to be done before you make any code changes, yet it's also supposed to detail the code changes you need to make. I see this as a paradox: I'm supposed to figure out exactly what the code changes are, but not actually make any? So what we usually do is copy the code out to a temp library, make the changes, test them, then write the Methodology. If the Functional Design and Methodology get approved, the coding is usually already done at that point. Maybe a little cleaning up, or fixing a bug or two you discover while creating exhibits, but for the most part, you're done.

It all seems like quite a pain at first, but what we end up with is a very thorough set of documentation for every program we have. There is a brief description written for every new program, and every change is documented within these FDs and Methodology documents. That makes it wonderful when you have to dive back into a program months or even years down the road.

In the Beginning

So I was searching around for a Cobol blog today and couldn't find any, so I figured I'd start one myself. I'm not your average Cobol programmer; for one, I'm only 36 years old. I got started in Cobol about 5 years ago, working for a government agency where everything was written in Cobol except a couple of programs still in Assembly.

What can I say about Cobol? I love it. It's simple, it's fun, and it's a great language when used in the right conditions. For the kind of processing that I do, it simply works. What I plan to do with this blog is post something about Cobol at least once a week, plus I'll probably post some other general programming stuff from time to time, as well as a little bit of personal stuff. Stay tuned, more will be coming shortly...