Baseline: Managing your hypervisors

Virtual machine (VM) technology is becoming more popular and proliferating across enterprise data centers. Server consolidation, energy savings, and better resource utilization are all good reasons to consider running a series of guest VMs on one physical server. But as you dive deeper into VM technology, especially for virtualizing servers, you need a better understanding of the issues involved, particularly once you run your VMs on what are called bare-metal hypervisors. These are the small-footprint bootable versions designed to run dozens of VM guests without first installing any other operating system. Popular products include Microsoft's Hyper-V Server, VMware's ESX, and Citrix's XenServer.

Why use this technology? Several reasons. First, the hypervisors can take advantage of more RAM, the lifeblood of virtualization. Unless you are running a 64-bit operating system to host your guest VMs, you are limited to 4 GB of memory. Second, with the higher memory capacity comes the promise of improved performance and greater machine consolidation, along with the ability to better manage the guest VMs running on each physical server. Chris Wolf, an analyst with the Burton Group, says: "It makes sense to use hypervisors for very dense server environments, say 50 or more VMs per physical server. But it can work for smaller environments too."

But bare-metal hypervisors come with their own complexities. The first is whether they can be managed in typical dense enterprise environments, which means adding new VM guests to a physical server, converting existing physical machines into VM guests, and doing routine OS upgrades and maintenance on the guest machines. Wolf recently assembled a long list of requirements to help enterprises select the most appropriate hypervisor, and notes that "only VMware's ESX satisfies all of our criteria. All of the others are missing some elements, such as live migration in Microsoft's Hyper-V, enterprise-class support in Virtual Iron, and role-based access controls in Citrix's XenServer." He recommends looking carefully at your own needs before picking a particular vendor, focusing especially on these management and more mundane tasks.

The second is that there are subtle differences between hypervisor and non-hypervisor environments, even from the same vendor. "You need to keep in mind that there are different sets of features between the various versions of VMware Workstation and ESX. There have been occasions where a new feature in Workstation was not yet available on ESX," says James Sokol, senior VP and CTO of The Segal Group, Inc., a benefits consultancy based in New York City. Segal uses 12 physical servers running ESX, with an average of 15 guests on each. To complicate matters further, VMware makes two different versions of ESX: the free ESXi, which has a smaller memory footprint, comes pre-installed on some servers, and lacks the service console management and Web access that come with the full-blown, fee-based ESX.

Microsoft's Hyper-V comes in two versions as well: one that is part of its 64-bit Windows Server 2008 OS, and an independent free version called Hyper-V Server that is labeled a bare-metal hypervisor but is basically a command-line version of Windows on which to run guest VMs. Many people find that Hyper-V Server isn't much of an operating system, particularly for denser virtualized setups that require multiple network cards and storage adapters. "We found Hyper-V Server to be next to impossible to configure, since we use 10 adapters in three-node clusters on each of our servers," said Frank Smith, an IT manager at Lionbridge Technologies Inc., based in Waltham, Mass. Lionbridge ended up using the built-in Hyper-V technology in Windows Server 2008, which is much easier to configure and "requires very little understanding of the underlying physical architecture"; the company now runs 20 physical servers with up to 25 guest VMs per server.

A third issue is making sure that you have the right hardware to run your hypervisor, because hypervisors can be very picky about the processor and other internals. Part of this decision is choosing the right CPU family, because there are differences in how Intel and AMD chip sets deal with hypervisors and "there is no compatibility between the two platforms," says Wolf. "If you start using Intel for your virtual servers, you should stick with that processor family as you add new physical servers," he says. In addition to the CPU, any hypervisor needs hardware virtualization support enabled in the BIOS; most modern servers from the major vendors include this. "A lot of our older servers couldn't run Hyper-V and we ended up having to buy some new hardware," says Smith.
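
One quick way to see whether a candidate box has the needed CPU support is to look at its processor feature flags: Intel's VT-x extensions show up as "vmx" and AMD-V as "svm" (on Linux, these appear in the "flags" line of /proc/cpuinfo). Below is a minimal sketch of such a check; the function and file-parsing helper are illustrative, not tools from the article:

```python
def virtualization_support(cpu_flags):
    """Classify a CPU flags string: Intel VT-x ("vmx"), AMD-V ("svm"), or None.

    Note this only tells you the CPU has the extensions; they must also
    be enabled in the BIOS, which this check cannot see.
    """
    flags = set(cpu_flags.split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

def read_linux_cpu_flags(path="/proc/cpuinfo"):
    """Pull the first 'flags' line from /proc/cpuinfo (Linux x86 only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.split(":", 1)[1]
    return ""
```

On a Linux host you would feed read_linux_cpu_flags() into virtualization_support(); on other platforms the vendor's own compatibility-check utilities are the safer route.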

One way to go is to buy servers with the hypervisor pre-installed. This option is attractive because you are guaranteed that the server has been optimized for the hypervisor. Both ESX and XenServer come pre-installed from a number of vendors, including Dell's PowerEdge R and M series, HP's ProLiant DL and BL series, and IBM's BladeCenter HS21 XM. Wolf says, "the uptake on pre-installed hypervisors hasn't been as much as the vendors predicted, but it makes a lot of sense." While Sokol hasn't done this yet, he is considering it for his future server purchases at Segal.

You also want to make sure that your server has plenty of room for additional RAM DIMMs and enough PCI slots for additional network and storage adapters. "We wanted a very high guest VM density, so we ended up having to buy new servers that could support 32 DIMM slots and lots of PCI cards," says Smith.

Sokol points out that good hypervisor planning means balancing the number of guest VMs against the amount of RAM required to properly provision each one. "You want to run as many guests per host as you can to control the number of host licenses you need to purchase and maintain. We utilize servers with dual quad-core CPUs and 32GB of RAM to meet our hosted server requirements." Smith says a good rule of thumb for Windows guest VMs is to allow a gigabyte of RAM for every guest VM he runs on his servers.
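
Those two figures can be combined into a back-of-the-envelope density check. The sketch below uses Smith's 1 GB-per-Windows-guest rule of thumb; the 2 GB held back for the hypervisor and its management overhead is an assumed figure for illustration, not one from the article:

```python
def max_guests(host_ram_gb, ram_per_guest_gb=1.0, hypervisor_reserve_gb=2.0):
    """Estimate how many guest VMs fit in a host's RAM.

    ram_per_guest_gb defaults to Smith's 1 GB-per-Windows-guest rule;
    hypervisor_reserve_gb (RAM kept for the hypervisor itself) is an
    assumed illustrative value.
    """
    usable = host_ram_gb - hypervisor_reserve_gb
    return max(0, int(usable // ram_per_guest_gb))

# A 32 GB host like Segal's leaves roughly 30 one-gigabyte Windows guests
# on this estimate; doubling the per-guest allotment halves the density.
```

Real sizing also has to account for CPU cores, storage I/O, and memory overcommit features, so treat this as a floor-level sanity check, not a plan.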

Another issue is cost, and the tricky part is balancing the more expensive hardware that a beefier server entails against the additional licensing fees required to run your guest VMs. VMware has one of the most complex product pricing sheets of the virtualization vendors, and "a single physical server running ESX is going to run $5,000 in up-front licensing fees by the time you pay for all of the associated VMware management and high-availability software. Then you have to add the cost of the server hardware and guest VM operating systems on top of that," says John Pozadzides, chief marketing officer of Plano, Texas-based managed services provider Layered Technologies. He and others recommend looking at Microsoft's Windows Server Datacenter license, because it allows unlimited Windows VMs to run as guests, which makes it very attractive for virtual environments.
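
The Datacenter recommendation is really a break-even calculation: once a host runs enough Windows guests, one unlimited-guest license beats buying a license per guest. A minimal sketch of that comparison, with hypothetical prices (the article gives no per-guest or Datacenter figures, only the $5,000 ESX number):

```python
def cheaper_license(num_guests, per_guest_cost, datacenter_cost):
    """Compare licensing each Windows guest individually against one
    per-host Datacenter license covering unlimited guests.

    All prices are caller-supplied; the examples in the tests use
    made-up figures, not real Microsoft pricing.
    Returns (option, total_cost) for the cheaper choice.
    """
    per_guest_total = num_guests * per_guest_cost
    if per_guest_total <= datacenter_cost:
        return ("per-guest", per_guest_total)
    return ("datacenter", datacenter_cost)
```

The denser the host (and Sokol's shops run 15-25 guests per server), the faster the unlimited-guest option wins, which is why it keeps coming up in virtualization planning.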

The next issue with hypervisors is storage: the kind of attached storage devices you choose is critical to delivering the best performance from your VM guests. Segal's Sokol uses Fibre Attached Technology Adapted (FATA) disks as second-tier storage and found that "FATA performance wasn't good for running our transactional-based disaster recovery servers."

Karen Rhodes, a senior sales engineer at Layered Technologies, says: "You need to spend more time thinking about your storage needs and get the best-performing disk solution possible, because it will be spinning constantly as a result of the increased traffic from all the VM guests."

Finally, there is a series of specialized tools that are useful for converting physical machines to virtual ones and for managing the overall virtual server environment. Segal uses NSI's Double-Take for VM Infrastructure "to replicate our VMs over our wide area network and into our offsite disaster recovery center," says Sokol. Tim Suttle, the network and technical operations director for non-profit services firm TechSoup Global, based in San Francisco, uses Vizioncore's vRanger and vFoglight tools to manage and monitor the firm's ESX servers. "It gives us deeper insight into potential performance bottlenecks and shows us a single pane to better monitor our entire environment."

Another product is Novell's PlateSpin. Rhodes says, "PlateSpin can be used to migrate any physical server to a variety of virtual environments, including ESX, XenServer, Sun, and Hyper-V. You don't have to tie yourself to any one particular vendor, and it is a very robust and mature technology." PlateSpin can also be used to convert virtual machines into physical ones, which is useful for debugging operating system issues.
