ITexpertVoice.com: Server virtualization is the new clustering

Clustering has been around almost since the earliest PC and mainframe days. But a new take on clustering is emerging that leverages virtualization tools, and it is becoming more appealing, particularly as enterprise IT shops gain more experience with virtual servers and as virtualization vendors add more high-availability features to their products.

A combination of services (high availability, virtual storage management and near-term server failover) that was previously the province of very expensive, customized clustered configurations is now available in the virtual world, and it can serve as a good substitute for many enterprises’ disaster recovery (DR) applications, too. This is because virtual machines are easily portable and can be replicated across the Internet, so you can quickly get a secondary site up and running when the primary server fails. “We have seen disaster recovery protection now available to a whole class of customers that couldn’t do it before,” says Bob Williamson, an executive VP with Steeleye Technology Inc., a specialized virtualization vendor. “In the past, you needed to buy another physical server and have it ready if the primary machine went down. But by using virtualization and hosting these servers at a remote location, enterprises can use these machines if their datacenter goes out. That lowers the entry cost for deploying wider-area disaster recovery, and opens up this protection to a whole new set of companies that haven’t been able to consider it before.”
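The wide-area DR pattern Williamson describes can be reduced to two steps: keep a current copy of the virtual machine image at a remote site, and promote that copy when the primary stops responding. The sketch below illustrates that decision logic only; the `Site`, `replicate` and `failover_if_needed` names are illustrative placeholders, not any vendor's actual API.

```python
# Minimal sketch of wide-area DR with replicated virtual machines.
# All names here are illustrative, not a real product's interface.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    healthy: bool = True          # is the site reachable and serving?
    has_current_image: bool = False  # does it hold an up-to-date VM image?

def replicate(primary: Site, remote: Site) -> None:
    """Periodically copy the latest VM image from the primary to the remote site."""
    if primary.healthy:
        remote.has_current_image = True

def failover_if_needed(primary: Site, remote: Site) -> Site:
    """Return whichever site should currently be serving the workload."""
    if primary.healthy:
        return primary
    if remote.has_current_image:
        # Bring up the replicated VM at the remote site.
        return remote
    raise RuntimeError("primary down and no current replica available")
```

The point of the pattern is economic as much as technical: the remote site holds only a portable VM image, not a standby physical server matched to the primary's hardware.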

In the past year, the three major virtualization vendors – Microsoft, VMware and Citrix/Xen – have each strengthened their ability to provide more capable DR and business continuity services in their products. These features hold a lot of appeal for enterprises that previously would have considered either a full DR solution or clustering too expensive.

Using these newer tools, it is possible to replicate and bring up a new instance of Windows Server 2008 in a few milliseconds, for example, for situations where you need to provide additional capacity on an overloaded server, or in the case of planned upgrades. Take, as an example, a server farm with a dozen machines all delivering a Web application. If an enterprise has sized things for peak load, then there will be plenty of other times when many of these machines are doing little or no work. The ideal solution would spin new instances of application servers up or down as these loads change, both to match a particular service-delivery metric and to keep the costs of power and cooling to a minimum.
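The spin-up/spin-down decision described above amounts to comparing a measured service-delivery metric against a target and resizing the farm accordingly. Here is a hedged sketch of that sizing rule; the metric, target value and instance limits are assumptions chosen for illustration, not figures from any vendor's product.

```python
# Illustrative autoscaling rule: size a farm of virtual application
# servers so that each one carries roughly `target` load units.
# Thresholds and names are assumptions, not a real product's API.
import math

def desired_instances(current: int, load_per_server: float,
                      target: float = 100.0,
                      min_instances: int = 1,
                      max_instances: int = 12) -> int:
    """Return how many VM instances should be running for the current load."""
    total_load = current * load_per_server
    # Round up so the farm never runs hotter than the target per server.
    needed = math.ceil(total_load / target) if total_load > 0 else min_instances
    # Clamp to the farm's configured bounds.
    return max(min_instances, min(max_instances, needed))
```

With a dozen servers each seeing only 10 load units against a target of 100, this rule would shrink the farm to two instances; at 150 load units per server on four machines, it would grow it to six.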

These solutions aren’t appropriate for transaction processing applications where immediate failover is required, such as online payments processing or airline reservations. “There are still times when you need clustering, such as when you can’t afford to lose a single transaction and have to restart this transaction on the new machine after a failover,” says Carl Drisko, an executive and data center evangelist at Novell. “If your virtual machine goes down, anything that is being processed in memory is going to be lost.” But virtualized high-availability solutions can work for less demanding applications, such as enterprise email servers.

One of the issues with earlier custom clustering solutions is that they required identical hardware and operating system versions for every physical machine in the cluster; virtualized servers are more forgiving and flexible, not to mention less expensive. Microsoft’s Hyper-V, for example, now supports migrating a running virtual server to a new physical host with a different processor family, such as moving from an Intel-based server to one running on an AMD processor.

Another issue is that many older-style clusters required very high-speed links to tie the members of the cluster together; virtualized solutions are less demanding of connectivity and can make do with higher-latency connections, even across typical Internet lines.

As these ‘almost-clustering’ solutions become more popular, look for third-party monitoring vendors to grow more sophisticated and help provide a complete solution. For example, Lyonesse Software’s Double-Take, Steeleye’s LifeKeeper, Symantec’s Veritas Application Director and Cassatt’s Active Response can monitor applications running on both physical and virtual servers, and notify IT staff when a host or an application running on a virtual server fails, so that a new virtual instance can be quickly brought online.
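At its core, the monitoring pattern these products share is a sweep: poll each monitored host, and when one stops answering, request a replacement virtual instance for the application it was running. The sketch below shows only that generic loop; the health-check and provisioning callables stand in for whatever interfaces a real monitoring product integrates with.

```python
# Generic sketch of the monitor-and-replace loop described above.
# `is_healthy` and `start_replacement` are placeholder callables,
# not the API of any of the products named in the article.
from typing import Callable, Dict, List, Tuple

def sweep(hosts: Dict[str, str],
          is_healthy: Callable[[str], bool],
          start_replacement: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Check each host -> application mapping; replace failed hosts.

    Returns a list of (failed_host, replacement_instance) pairs so
    IT staff can be notified about what was restarted and where.
    """
    actions = []
    for host, app in hosts.items():
        if not is_healthy(host):
            # Bring a fresh virtual instance of the application online.
            replacement = start_replacement(app)
            actions.append((host, replacement))
    return actions
```

A real product would run this continuously and layer in notification, but the virtual-instance restart is what distinguishes these tools from plain up/down monitoring.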

All this means that virtualization and clustering will become more interrelated and complementary solutions for IT managers. While the two technologies have come from different heritages and infrastructures, they are now merging and providing a powerful tool for managing more complex workloads in the data center.
