The Cloud-Ready Mainframe: Extending Your Data’s Reach and Impact

(This post is sponsored by VirtualZ Computing)

Some of the largest enterprises are finding new uses for their mainframes. Instead of competing with cloud and distributed computing, the mainframe has become a complementary asset that adds productivity and cost-effective scale to existing data and applications.

While the cloud does quite well at elastically scaling up resources as application and data demands increase, the mainframe is purpose-built for the largest-scale digital applications. More importantly, it has kept pace as those demands have mushroomed over its 60-year reign, which is why so many large enterprises continue to use it. Having the mainframe as part of a distributed enterprise application portfolio is a significant and savvy use case, and a reason its future role and importance could grow.

Estimates suggest that there are about 10,000 mainframes in use today, which may not seem like a lot except that they can be found in more than two-thirds of Fortune 500 companies. In the past, they used proprietary protocols such as Systems Network Architecture, ran applications written in now-obsolete coding languages such as COBOL, and relied on custom CPU hardware. Those days are behind us: the latest mainframes run Linux and TCP/IP across hundreds of multi-core microprocessors.

But even speaking cloud-friendly Linux and TCP/IP doesn’t remove two main problems for mainframe-based data. First, many mainframe COBOL apps are islands unto themselves, isolated from the end-user Java experience and from modern coding pipelines and programming tools. Breaking that isolation usually means an expensive effort to convert and audit the code.

A second issue has to do with data lakes and data warehouses. These have become popular ways for businesses to spot trends quickly and adjust IT solutions as their customers’ data needs evolve. But the underlying applications typically require near real-time access to existing mainframe data, such as financial transactions, sales and inventory levels, or airline reservations. At the core of any lake or warehouse is a series of “extract, transform and load” (ETL) operations that move data back and forth between the mainframe and the cloud. These efforts capture data only at a particular moment in time, and they require custom programming to accomplish.
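
As a rough illustration of the pattern, here is a minimal sketch of that kind of point-in-time batch ETL job. The endpoints, record layout, and warehouse target are all hypothetical placeholders, not any particular vendor’s interfaces.

```typescript
// Hypothetical point-in-time batch ETL; endpoints, record layout, and
// warehouse target are illustrative only, not any specific vendor's API.
interface MainframeTxn {
  account: string;
  amount: number;
  postedAt: string; // e.g. "2024-01-15T09:30:00Z"
}

async function runNightlyEtl(): Promise<void> {
  // Extract: pull a batch export that was staged from the mainframe.
  const res = await fetch("https://example-etl.internal/exports/transactions/latest");
  const rows: MainframeTxn[] = await res.json();

  // Transform: reshape each record to match the warehouse schema.
  const warehouseRows = rows.map((r) => ({
    account_id: r.account,
    amount_usd: r.amount,
    posted_date: r.postedAt.slice(0, 10),
  }));

  // Load: bulk-insert into the warehouse; the copy is stale within hours.
  await fetch("https://example-warehouse.internal/bulk-load/transactions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(warehouseRows),
  });
}

runNightlyEtl().catch(console.error);
```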

What was needed was an additional step to make mainframes easier for IT managers to integrate with other cloud and distributed computing resources, and that means a new set of software tools. The first step came from initiatives such as IBM’s z/OS Connect, which enabled distributed applications to access mainframe data. But it continued the mindset of a custom programming effort and didn’t really give distributed applications direct access.

To fully realize the vision of mainframe data as equal cloud nodes required a major makeover, thanks to companies such as VirtualZ Computing. They latched on to the OpenAPI effort, which was previously part of the cloud and distributed world. Using this specification, they created connectors that make it easier for vendors to access real-time data and integrate with a variety of distributed data products, such as MuleSoft, Tableau, TIBCO, Dell Boomi, Microsoft Power BI, Snowflake and Salesforce. Instead of complex, single-use data transformations, VirtualZ enables real-time read and write access to business applications. This means the mainframe can now become a full-fledged and efficient cloud computer.
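
By way of contrast with the batch job sketched above, here is what real-time, API-based access to a mainframe record could look like from a distributed application. This is only a sketch: the host, path, and response shape are hypothetical, and an actual connector would typically use a client generated from its OpenAPI specification rather than a hand-written call.

```typescript
// Hypothetical OpenAPI-described endpoint in front of mainframe data;
// the host, path, and response shape are illustrative, not VirtualZ's API.
const API_TOKEN = "placeholder-token";

interface Reservation {
  recordLocator: string;
  passenger: string;
  status: string;
}

async function getReservation(recordLocator: string): Promise<Reservation> {
  // The record is read in place on the mainframe at request time:
  // no extract/transform/load cycle and no second copy to keep in sync.
  const res = await fetch(
    `https://mainframe-gateway.example.com/v1/reservations/${recordLocator}`,
    { headers: { Authorization: `Bearer ${API_TOKEN}` } }
  );
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  return (await res.json()) as Reservation;
}

getReservation("ABC123").then((r) => console.log(r.status)).catch(console.error);
```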

VirtualZ CEO Jeanne Glass says, “Because data stays securely and safely on the mainframe, it is a single source of truth for the customer and still leverages existing mainframe security protocols.” There is no need to convert COBOL code, and no need for cumbersome data transformations and extractions.

The net effect is an overall cost reduction since an enterprise isn’t paying for expensive high-resource cloud instances. It makes the business operation more agile, since data is still located in one place and is available at the moment it is needed for a particular application. These uses extend the effective life of a mainframe without having to go through any costly data or process conversions, and do so while reducing risk and complexity. These uses also help solve complex data access and report migration challenges efficiently and at scale, which is key for organizations transitioning to hybrid cloud architectures. And the ultimate result is that one of these hybrid architectures includes the mainframe itself.

How EVault’s Hybrid Cloud Backup compares with CommVault

EVault’s Hybrid Cloud Data Protection covers a wider range of operating systems, features and enterprise applications than CommVault’s Simpana, and EVault’s web-based portal is more flexible and useful. We tested these backup products during November 2015, connecting to a SQL Server instance running on Windows Server 2008.

Click here for more info.

A Guided Tour of the SANsymphony-V Software Defined Storage Platform From DataCore Software

DataCore’s storage virtualization software, SANsymphony-V, maximizes the availability, performance and utilization of disks in data centers large and small. Use it to manage on-premises storage or build a cloud storage infrastructure.

We looked at version 9 in June 2012 and version 10 in May 2015.
http://www.datacore.com
Pricing: DataCore-authorized solution providers offer packages starting under $10K for a two-node, high-availability environment.
Requirements: Windows Server 2012 R2

Take a look at another screencast review of DataCore’s software-defined storage solution here.

Hyper-Converged Storage from DataCore Virtual SAN Software

DataCore’s comprehensive storage services stack has long been known for harnessing ultra-fast processors and RAM caches in x86 servers for superior performance and enterprise-class availability. It now comes in a compact, hyper-converged package that is ideal for transactional databases and mixed workloads. DataCore Virtual SAN software is available for a free 30-day trial. It runs on any hypervisor and your choice of standard servers.

We tested DataCore Virtual SAN in May 2015.

Pricing: DataCore-authorized solution providers offer software packages starting under $10,000 for a two-node, high-availability cluster, including annual 24×7 support.
Requirements: Windows Server 2012 R2

For information on DataCore’s SANsymphony-V Software-defined Storage Platform, check out our other video here.

And for a copy of our white paper on hyper-converged storage, download our paper here.

 

Box turns the API world inside-out

You might not have seen the news last week from Box, the online storage service. There are two items. The first is about Box’s new developer edition, announced at its annual conference. What is significant is that this is the first time, to my knowledge, that a software vendor has made it easier to embed its app inside other apps. Let’s see what they did and why it is important.

Many software vendors have spent time developing application programming interfaces or APIs that make it easier for third parties to have access to their apps or data that they collect. These days it is hard to find a vendor that doesn’t offer an API, and Box has done a terrific job with its own APIs to be sure. They have created a developer community of tens of thousands of people who write programs using them.

These programs make it easy to fax a document from within Box via an Internet faxing service, add digital signatures inside a document, make small changes to a document, and so forth. The idea is to manipulate a document that lives inside the Box cloud storage system, so that Box’s cloud becomes more valuable than the dozens or hundreds of other cloud-based storage providers that are available. Without access to its APIs, a third party has to first move the document out of Box, make the changes, and then move it back into the repository. That takes time and uses computing resources.
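
To make that difference concrete, here is a minimal sketch of in-place manipulation using Box’s public content API (v2.0), reading a file’s metadata and renaming it where it sits. The access token and file ID are placeholders and error handling is omitted; the GET and PUT calls on /2.0/files/{id} reflect Box’s documented file endpoints as I understand them.

```typescript
// Minimal sketch: read a file's metadata and rename it in place through
// the Box content API (v2.0), without moving the document out of Box.
// The access token and file ID are placeholders.
const BOX_API = "https://api.box.com/2.0";
const BOX_TOKEN = "placeholder-token";
const headers = { Authorization: `Bearer ${BOX_TOKEN}` };

async function renameBoxFile(fileId: string, newName: string): Promise<void> {
  // Fetch the document's current metadata from Box.
  const info = await fetch(`${BOX_API}/files/${fileId}`, { headers });
  const file = await info.json();
  console.log(`Renaming "${file.name}" to "${newName}"`);

  // Update the document where it lives; Box applies the change in its cloud.
  await fetch(`${BOX_API}/files/${fileId}`, {
    method: "PUT",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ name: newName }),
  });
}

renameBoxFile("123456789", "quarterly-report-final.docx").catch(console.error);
```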

But the developer edition turns this notion on its head, or should I say goes inside the Box. What Box is now doing is letting apps use a set of Box features from inside your own app. Instead of accessing APIs to manipulate particular documents, you can make use of Box’s security routines, storage routines, or other basic functionality, so you don’t need to invent that functionality from scratch for your own app. What are some of the features on offer? According to the announcement, these include: “full text search, content encryption, advanced permissions, secure collaboration, and compliance.” That is a lot of stuff that an independent software developer doesn’t have to mess with, which means that new apps could be written more quickly.

On top of the developer edition, Box also announced T3, its own set of JavaScript libraries that anyone can use to get started coding some of these features. It has posted a few snippets of code on its website showing how to construct a to-do list. While JS frameworks are numerous, this one might be interesting, particularly in light of the developer announcement.

Certainly, online storage is undergoing its own evolutionary moment. Google is now charging a penny a GB per month for near-line storage, promising to retrieve your files in seconds. Of course, they and other cloud providers are (so far) just a repository, and that is the line in the cloud that Box is trying to draw with these announcements.

If it all works out, we’ll see Box become the center of a new universe of apps that can take collaboration to the next level, because the folks at Box have already built a collaboration environment that they use for their own customers. It is gutsy, because a Box-like competitor could make use of these features and out-Box Box (which is one reason that Box will control who has access to its tools for now).

It could backfire: developers are a funny bunch, and many of them like reusing someone else’s code but maybe not to the level that Box requires. It certainly is a different model, and one that will take some getting used to. But the proof is in the pudding, and we’ll see in the coming months if anyone’s code turns out to be noteworthy.

VMware blog: Simplifying Storage Solutions

Storage has seen its share of technology changes in recent years, but the most significant breakthrough isn’t higher-capacity arrays; it’s the shift to software-defined storage. One reason many enterprises are embracing this new paradigm is that for decades, managing storage has been a specialized skill set, which has fostered organizational silos, among other issues.

In this free e-Book that I wrote for VMware, I explore:

  • How virtualization and cloud management impact storage management
  • Implications of the control plane transitioning from hardware-centric to app-centric
  • The role of VMware hypervisor in managing storage

ITworld: Virtual storage roadmap

When you have a lot of virtual machines, managing your storage needs and ensuring that your environment is optimized to deliver sufficient performance and reliability is a challenge. VMs can increase storage requirements by several orders of magnitude, and specialized VM storage repositories (such as this one from Tintri, whose per-VM end-to-end latency console is shown at left) are needed to keep things under control and increase productivity. There are several interesting directions and technology advances in this market, including so-called storage hypervisor software tools, new VM-centric storage appliances, and better storage management features from the traditional ecosystem vendors.

Here is the paper that I wrote on the topic.

Orchestrating your Disaster Recovery with QuorumLabs onQ

A complete disaster recovery appliance that can automate the protection of any Windows Server 2003 or later server OS. We tested v3.2 on a small test network of both physical and virtual machines in December 2011.


Price: Starts at $10k per appliance and includes three protected servers; additional servers, quickstart installation, and upgraded hardware are extra.

QuorumLabs Inc.
http://quorumlabs.com
510 257-5227
info@quorumlabs.com

New features of Symantec Backup Exec 2010

Symantec has a new R2 version of its Backup Exec 2010 backup software that is easier to install, makes backups more quickly, and adds a raft of new features, including better support for virtualization, archiving, and deduplication.

Symantec Corporation
Mountain View, Calif.
http://backupexec.com
Pricing: $1174 for one media server, deduplication and archiving options extra

Better backups, faster restores with SEPATON DeltaStor deduplication technology

SEPATON’s S2100 is a virtual tape library backup appliance that can significantly reduce backup completion and restore times and cut down on storage requirements. It has a flexible capacity from 10 TB to over 1 PB and a wide collection of policies that can be tailored to particular applications and circumstances.

We tested a unit in March 2010 on a live network with actual production data, using Firefox on Windows XP.

Price: starts at $110,500.
Backup products supported include Symantec’s NetBackup, IBM Tivoli Storage Manager, EMC NetWorker and HP Data Protector.

SEPATON S2100-ES2 v5
400 Nickerson Road
Marlborough, MA 01752
Sepaton.com
508 490 7900