Me and the mainframe

I recently wrote a sponsored blog for VirtualZ Computing, a startup involved in innovative mainframe software. As I was writing the post, I was thinking about the various points in my professional life where I came face-to-CPU with IBM’s Big Iron, as it once was called. (For what passes for comic relief, check out this series of videos about selling mainframes.)

My last job working for a big IT shop came about in the summer of 1984, when I moved across the country to LA to work for an insurance company. The company was a huge customer of IBM mainframes and was just getting into buying PCs for its employees, including mainframe developers and ordinary employees (no one called them end users back then) who wanted to run their own spreadsheets and create their own documents. There were hundreds of people working on and around the mainframe, which was housed in its own inner sanctum, a raised-floor series of rooms. I wrote about this job here, and it was interesting because it was the last time I worked in IT before switching careers to tech journalism.

Back in 1984, if I wanted to write a program, I first had to create it by typing out a deck of punch cards. This was done at a special station that was the size of a piece of office furniture. Each card could contain the instructions for a single line of code. If you made a mistake, you had to toss the card and start anew. When you had your deck, you would feed it into a specialized card reader that would transfer the program to the mainframe and create a “batch job” – meaning my program would run sometime during the middle of the night. I would get my output the next morning, if I was lucky. If I made any typing errors on my cards, the printout would be a cryptic set of error messages, and I would have to fix the errors and try again the next night. Finding that meager output was akin to getting a college rejection letter in the mail – the acceptances came in thick envelopes. Am I dating myself enough here?

Today’s developers are probably laughing at this situation. They have coding environments that immediately flag syntax errors, tools that dynamically stop embedded malware from running, and all sorts of other fancy tricks. If they have to wait more than 10 milliseconds for this information, they complain about how slow their platform is. Code is put into production in a matter of moments, rather than the months we had to endure back in the day.

Even though I roamed around the three downtown office towers that housed our company’s workers, I don’t remember ever setting foot in our Palais d’mainframe. However, over the years I have been to my share of data centers across the world. One visit involved turning off a mainframe for Edison Electric Institute in Washington DC in 1993, where I wrote about the experience and how Novell Netware-based apps replaced many of its functions. Another involved moving a data center from a basement (which would periodically flood) into a purpose-built building next door, in 2007. That data center housed souped-up microprocessor-based servers, the beginnings of the massive CPU collections that are used in today’s IBM Z mainframes, by the way.

Mainframes had all sorts of IBM gear that required care and feeding, and lots of knowledge that I used to have at my fingertips: I knew my way around IBM’s proprietary Systems Network Architecture protocols and its proprietary Token Ring networking, for example. And let’s not forget that the mainframe ran programs written in COBOL, and used all sorts of other hardware to connect things together with proprietary bus-and-tag cables. When I was making the transition to PC Week in the 1980s, IBM was making the (eventually failed) transition to peer-to-peer mainframe networking with a bunch of proprietary products. Are you seeing a trend here?

Speaking of the IBM PC, it was the first product from IBM built with off-the-shelf parts made by others, rather than its own stuff. That was a good decision, and it succeeded because you could add a graphics card (the first PCs did only text, and monochrome at that), extra memory, or a modem. Or an adapter card that connected to another cabling scheme (coax) and turned the PC into a mainframe terminal. Yes, this was before wireless networks became useful, and you can see why.

Now IBM mainframes — there are some 10,000 of them still in the wild — come with the ability to run Linux and operate across TCP/IP networks, and about a third of them are running Linux as their main OS. This gives them one foot in the world of distributed cloud computing, and one foot back in the dinosaur era. So let’s talk about my client VirtualZ and where they come into this picture.

They created software – mainframe software – that enables distributed applications to access mainframe data sets, using OpenAPI protocols and database connectors. The data stays put on the mainframe but is available to applications that we know and love, such as Salesforce and Tableau. It is a terrific idea, much like the original IBM PC, in that it supports open systems. This makes the mainframe just another cloud-connected computer, and shows that the mainframe is still an exciting and powerful way to go.
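The idea can be sketched in a few lines of Python: the data set stays on the mainframe, and a distributed application pulls records through a REST-style call and reshapes them locally. Everything below (the payload shape, the field names, the data set name) is invented for illustration; it is not VirtualZ’s actual API.

```python
import json

# Hypothetical JSON payload that a read-only, OpenAPI-described endpoint
# might return for a mainframe data set. In practice this would arrive
# over HTTPS; here it is inlined so the sketch is self-contained.
sample_response = json.dumps({
    "dataset": "POLICY.MASTER",
    "records": [
        {"policy_id": "A1001", "holder": "J. Smith", "premium": 1250.00},
        {"policy_id": "A1002", "holder": "R. Jones", "premium": 980.50},
    ],
})

def premiums_by_policy(payload: str) -> dict:
    """Parse the JSON reply and index premiums by policy ID --
    the kind of local reshaping a Tableau or Salesforce
    connector performs, while the records themselves never
    leave the mainframe."""
    data = json.loads(payload)
    return {r["policy_id"]: r["premium"] for r in data["records"]}

print(premiums_by_policy(sample_response))
```

The point of the sketch is the direction of travel: the client asks, the mainframe answers, and no nightly extract job ever copies the data set anywhere.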

Until VirtualZ came along, developers who wanted access to mainframe data had to go through all sorts of contortions to get it — much like what we had to do in the 1980s and 1990s, for that matter. Companies like Snowflake and Fivetran built very successful businesses out of doing these “extract, transform and load” operations into what are now called data warehouses. VirtualZ eliminates these steps, and your data is available in real time, because it never leaves the cozy comfort of the mainframe, with all of its minions and backups and redundant hardware. You don’t have to build a separate warehouse in the cloud, because your mainframe is now cloud-accessible all the time.

I think VirtualZ’s software will usher in a new mainframe era, one that puts us even further from the punch card days. It also shows the power and persistence of the mainframe, and how IBM built the right computer for today’s enterprise data, just not in the right context when it was invented. For Big Iron to succeed in today’s digital world, it needs a lot of help from little iron.

7 thoughts on “Me and the mainframe”

  1. I was a headhunter in Silicon Valley starting in 1979 and experienced the shift from mainframes to peer-to-peer and PCs. In fact, my first PC was an IBM with 2 floppy drives and no communication at all. Oh, what we could do with 64K RAM! I remember a guy in J&J’s data center dealing with a flood beneath the raised floor by ordering staff to go to the warehouse for help. They came back with loads of Pampers to absorb the water. One of my biggest clients was Memorex — which made IBM-compatible disk packs the size of refrigerators. An engineering VP once explained to me that these behemoths could store 5MB! Thanks for the memories. It’s no surprise that there’s still a place for the Big Iron.

  2. David,

    Brings back all the memories for me when you penned the Forbes article about using the “Irma” card for data transfer between PC and mainframes.

    Memories!!!

    Mike

  3. David, I am compelled to comment on the “iron-y” of your article. And whatever happened to the color-coded trousers, white shirts and ties of those IBM guys? I still remember our CPA firm purchasing a new system and the curious little computer they gave us for free to try out. Little did we know it would soon revolutionize our workplace. And the punch card deck, what memories. Best, John

  4. Punched my own cards for programs that ran on various mainframes and smaller computers for many years. Bought my own IBM PC clone for about $5000 from PC Designs, and I had to assemble it myself. Got an NEC multisync monitor, the best at the time. Those were the days, the days of computer operators in white coats in the computer room. I also had access to a university computer, a GE-225, after the work day finished in the ’60s. Wrote a somewhat brain-dead operating system that eliminated the use of loading programs from punched cards. Never did get into the mainstream of IBM systems, but got close with some projects.

  5. Oh yes, the good old days of the mainframes: IBM and Tandem (NonStop) in BFA and Airlines/ADP, Ross Perot, starting in the 1980s. TSO/ISPF: Time Sharing Option / Interactive System Productivity Facility. I still remember in 1999, I was working as tech support for PCs/Macs and was a new hire. A bunch of so-called MCSEs could not solve a printing issue on this lady’s billing computer; they even tried rebuilding the server and spent six months trying to solve the bad format printing issue. I found out that the billing program was an emulation program connecting through the IBM mainframe to a third party, and I fixed it in less than three minutes, after it took them more than six months to figure out. They thought it was directly hooked to the local printer, but I tried to explain to them that it was an emulation program. Whatever! Man, on that day I was highly respected, and kudos and chocolate were given to me. I still have many 512K and SE Macs in my basement, floppy discs and all, and crank them up once in a while.

  6. I appreciated the reference to Bob Hoey and Tim Washer’s Art of the Sale videos. I hadn’t watched them in years and forgot how funny and breakthrough they were for stodgy old IBM at the time.
