Category Archives: virtualization
Modern Infrastructure: Hyperscale data center means different hardware needs
Remember when data centers had separate racks, staffs and management tools for servers, storage, routers and other networking infrastructure? Those days seem like a fond memory in today’s hyperscale data center. That setup worked well when applications were relatively self-contained, made use of local server resources such as RAM and disk, and had few reasons to connect to the Internet.
I describe the new needs of the modern hyperscale data center in an article for Modern Infrastructure Magazine here.
Boundary.com: Some words of wisdom on networking technologies
The folks at Boundary.com, a networking vendor, have posted an interview with me covering my thoughts on the future of networking technologies, the cloud, and Web-based software.
ArsTechnica: What lies ahead in the world of networking
Tomorrow’s data center is going to look very different from today’s. Processors, systems, and storage are getting better integrated, more virtualized, and more capable of making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. In my story for ArsTechnica, I examine six specific trends driving the evolution of the next-generation data center and discuss what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.
My fears about my own cloud migration
I try to eat my own dog food, as the saying goes. Nonetheless, I found myself going through something like the Kübler-Ross stages of grief when my mailing list hosting provider sent me an email last week telling me that they were moving my server to the cloud.
Funny, isn’t it, when it happens to you? Exactly my thoughts. For the past several years I have been using the North Carolina-based ISP EMWD.com and their very reasonably priced Mailman list services to distribute this newsletter. I am very happy with EMWD: they are very service-oriented, the fee is low, and as I am very familiar with Mailman, there is nothing for me to learn. And over the last several years, I have given them referrals from people who have wanted to start their own mailing lists, and these friends are happy as well with their service.
Mailman isn’t as pretty as Constant Contact or Mailchimp or other Web-based emailers: it is just for sending out text-based emails to a bunch of people. If you want HTML hotlinks or embedded graphics, those services are probably a better choice for running your list.
So anyway, last week I got an email saying my provider is going to the cloud. My first thought was unprintable. My next thought was what was I going to do? Was it going to be secure? Would I have to spend a lot of time debugging things? What did this really mean for me?
Then it hit me: I was acting like a customer who had never used the cloud before. Stop it! After all, what difference did it really make to me whether my server was sitting in EMWD’s data center or somewhere else? All that mattered was an IP address, that the server was running, and that it worked the same. Calm down, Strom.
But that is exactly the issue for many of your own customers, who may not have as much knowledge or understanding of what is involved. And these days it is getting harder to tell what is in the cloud and what isn’t, as new products blur the line even further.
My hosting company was moving to the cloud for all the usual reasons: quicker provisioning, lower costs, more flexibility and scalability. Now, I am not a very demanding customer of theirs: all I use is their Mailman hosting, and that wasn’t changing.
So the migration day is today. I put a new IP address in my DNS, and a few hours later, all is well. At least I hope so. Everything looks the same from my end. And so much for my cloud migration story. But perhaps you can learn from this too, and understand that sometimes change isn’t all that big of a deal.
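For what it’s worth, the sanity check I cared about boils down to a single DNS lookup, which is easy to script. Here is a minimal sketch; the hostname is hypothetical, the address is a placeholder from the documentation range, and it only confirms that your local resolver has picked up the new record.

    import socket

    # Hypothetical hostname and placeholder address; substitute your own values.
    HOSTNAME = "lists.example.com"
    EXPECTED_IP = "203.0.113.10"

    resolved = socket.gethostbyname(HOSTNAME)
    if resolved == EXPECTED_IP:
        print("%s now resolves to the new server (%s)" % (HOSTNAME, resolved))
    else:
        print("%s still resolves to %s; the change may not have propagated yet"
              % (HOSTNAME, resolved))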
CloudSigma and its Flexible, Elastic and Transparent Public Cloud IaaS
Choosing among more than a dozen different Infrastructure-as-a-Service (IaaS) cloud providers can be tiresome. Pricing comparisons are difficult, figuring out features isn’t always obvious, and understanding their limitations can be vexing and require a great deal of time and research. But if you are looking for a capable cloud provider that gives you a lot of flexibility, is transparent when it comes to cost calculations, and comes with the ability to support many different virtual machine (VM) configurations, then you should consider CloudSigma’s solution.
I take a closer look at what CloudSigma offers in this white paper that is published here.
Modern Infrastructure: The promise of SDN
Software-defined networks are seemingly everywhere these days, offering the promise of a virtual network infrastructure that can be provisioned as easily as spinning up a new virtual server or storage network. But SDNs are also hard to find outside of a few marquee customers who have dedicated lots of operational resources to setting them up and managing them.
In my story for Techtarget’s Modern Infrastructure ezine, I look at the history of SDN, where things stand today, some of the bigger obstacles and how you can begin to plan for them in your own data center.
GigaOm: A progress report on OpenStack
Despite being vilified by Gartner analyst Lydia Leong, OpenStack has consolidated its lead in the past few months and seems to be heading in the right direction. It is coming off a high from a summit conference in mid-October in San Diego, along with a series of big customer wins and deployments. The fall season has been more of a coming-out party for the open source cloud management solution, with more maturing services announced from partners.
Anyone contemplating a move to OpenStack should carefully consider some of the alternatives, too. But let’s see where things are looking up for OpenStack.
First, Amazon isn’t standing still, but OpenStack is gaining in terms of ease of deployment, particularly for private and hybrid cloud deployments. There are some key deployment partners that have seen lots of interest, including PistonCloud.com, CloudScaling.com and Mirantis. In August, PistonCloud announced a free version that, while lacking a few enterprise features, can be set up in a matter of minutes and can run from a USB stick. This makes it akin to another operating system, albeit one that is even easier to install than some desktop versions of Windows or Linux.
And Rackspace has its own free version of OpenStack for a private cloud, called Alamo, which also became available this summer.
One of AWS’s early leads was in developing Elastic Block Storage, which makes it easier to move large amounts of data around, such as during massive scale-ups of virtual machines or when apps have to write directly to physical storage. OpenStack now has this feature as part of its Cinder block storage project, as you can see from the chart below comparing services from the two, along with Rackspace’s own offerings. At the summit in San Diego, SolidFire and Canonical joined forces to deliver a production-ready deployment of OpenStack that supports block storage, a feature still missing from the Rackspace Cloud offering.
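To give a sense of what the block storage piece looks like from the OpenStack side, here is a minimal sketch using the python-cinderclient library. The credentials, identity endpoint and volume size are placeholders, and the exact client version you run may differ.

    import time
    from cinderclient.v1 import client

    # Placeholder credentials and identity endpoint; substitute your own.
    cinder = client.Client("username", "api_key", "project_id",
                           "https://identity.example.com/v2.0/")

    # Create a 10 GB block volume that a VM can later attach and write to directly.
    volume = cinder.volumes.create(size=10, display_name="demo-volume")

    # Wait until the volume is ready before attaching it to a server (simplified polling).
    while cinder.volumes.get(volume.id).status != "available":
        time.sleep(2)
    print(volume.id, "is available")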
Rackspace, being one of OpenStack’s founders, has been the primary place to make use of public cloud offerings supporting OpenStack. At the San Diego summit, it made a series of announcements, including a software development kit for PHP and Java along with sample code to get started more easily. (OpenStack has been largely Python-based to date.) It also described an internal business intelligence application, built on Cassandra and OpenStack, that cut a traditional SQL query which used to take more than five days to run down to about three hours. The app is now being used by several thousand Rackspace support personnel.
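The new SDKs target PHP and Java developers; since OpenStack itself has been largely Python-based, getting started from Python has been possible all along with the existing client bindings. A minimal sketch, assuming the python-novaclient library and placeholder credentials, image and flavor names:

    from novaclient.v1_1 import client

    # All credentials and names below are placeholders.
    nova = client.Client("username", "api_key", "project_id",
                         "https://identity.example.com/v2.0/")

    # Boot a small server from an existing image; the names depend on your cloud.
    image = nova.images.find(name="ubuntu-12.04")
    flavor = nova.flavors.find(name="m1.small")
    server = nova.servers.create(name="demo-server", image=image, flavor=flavor)
    print(server.id, server.status)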
Also announced at the summit, DreamHost is launching several public cloud offerings, under the label of DreamCompute. And HP has had its own cloud service based on OpenStack for most of this year. (More on that in a moment.) Having stronger alternatives is key to OpenStack’s future adoption, but it’s also a challenge just to keep track of the various projects and who’s got what pieces included.
Of course, trying to make pricing comparisons between AWS and Rackspace’s cloud services isn’t easy. Rackspace includes round-the-clock telephone support, which is an extra charge with AWS, but even taking that into consideration, Rackspace Cloud Servers still end up costing at least half again as much as AWS. One alternative might be HP’s cloud, which seems to be the best bargain to date.
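One way to keep such comparisons honest is to normalize everything to a monthly figure before weighing the bundled support. A back-of-the-envelope sketch with purely hypothetical rates (real AWS and Rackspace prices vary by instance size, region and date):

    # All dollar figures below are made up for illustration only.
    HOURS_PER_MONTH = 730

    aws_instance_rate = 0.32        # $/hour for a hypothetical instance size
    aws_support_fee = 100.00        # $/month for a hypothetical support plan
    rackspace_instance_rate = 0.48  # $/hour, with telephone support included

    aws_monthly = aws_instance_rate * HOURS_PER_MONTH + aws_support_fee
    rackspace_monthly = rackspace_instance_rate * HOURS_PER_MONTH

    print("AWS:       $%.2f/month" % aws_monthly)
    print("Rackspace: $%.2f/month" % rackspace_monthly)

Plug in the current published rates for the instance sizes you actually use; the ranking can flip depending on how much support you buy from Amazon.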
Second, while VMware joined the OpenStack Foundation largely because of its acquisition of Nicira, its support is critical to making a more pluralist and interoperable cloud solution possible. Nicira’s developers are active in the OpenStack community in building the networking pieces and will continue to be, which is a major plus for the foundation and for end users who want to build and manage hybrid clouds that use VMware tools, hypervisors, and third-party products. The virtual networking pieces still need work, and they are essential if you are going to go big with OpenStack.
Third, HP’s cloud, which as we mentioned is built on top of OpenStack, is moving forward more slowly than it could or should, with many of its services still in beta or feature-incomplete. According to the employment site Dice, HP has 125 openings in its cloud services department. How fast it fills these openings will be instructive in seeing where it takes its own cloud, and whether HP’s cloud offering remains competitive.
Fourth, the Foundation isn’t some small collection of nerds. It has a total of 5,600 members from over 200 companies, and more than 500 active engineering contributors to its code base of more than half a million lines of code. And it is a worldwide effort with members in more than 80 countries and more than 40 active user groups from every corner of the globe, including Antarctica.
As we mentioned in our piece in September, OpenStack is tracking along a Linux-type growth path. Moving towards turnkey versions such as PistonCloud’s is just one aspect; another is having a well-defined series of different bundles or packages to choose from. Rackspace claims that a quarter of the Fortune 100 companies downloaded its free Alamo version in the first few weeks after it became available in August. That is an impressive rate of interest, even if few of these downloads have resulted in any production deployments.
Finally, OpenStack isn’t the only alternative to AWS, although it certainly is the most organized and has the most vendors and users to date. Citrix’s CloudStack and Eucalyptus are two projects that are moving along and have their own supporters too.
Here are some key takeaways for those of you considering OpenStack:
- Look at the free versions from PistonCloud and Rackspace’s Alamo first and see if they can deliver value to the kind of cloud you are trying to build. Neither is 100% of what you are going to need, but either can get you up and running within a few hours on a couple of spare servers to experiment with it.
- Watch what Nicira is doing on the various OpenStack projects it contributes to, and watch for new announcements from VMware in this space. Because virtual networking features have lagged behind the storage and compute elements, progress on this front holds the key to whether OpenStack will continue to scale up and capture larger installations.
- Look for third-party vendors who are using Amazon’s APIs along with VMware’s and others’ to manage multi-vendor clouds. enStratus and RightScale, for example, both have management tools in this space, and there will be others over time. CloudStack has its CloudBridge, which provides an AWS EC2-compatible API to make it easier to use AWS tools, and Eucalyptus has tried to support as much of the AWS API set as it can (see the sketch after this list).
- Look at some of the larger cloud deployments from Rackspace as leading examples of how scalable OpenStack can be. Webex and eBay’s X.Commerce are both examples, with hundreds of VMs deployed in their respective private clouds. But look at the lower end too. One of Rackspace’s customers is Canarie, a Canadian service provider that built its cloud for small high-tech entrepreneurial companies that didn’t want to build their own clouds from scratch. Canarie now has a multi-region OpenStack cloud deployment that runs on top of a high-speed university network spanning one region in Quebec and another in Alberta. Each region currently contains about 20 hosts, with about 1.5 terabytes of storage per node, and the cloud is currently capable of handling up to 400 virtual machines. Having both large and small customers covered is a good sign of the platform’s maturity.
- How locked in are you, really? While it is great to dive into an open source project, one thing to look for is how easily you can import or export your OpenStack-based VMs into or out of your current cloud provider. Rackspace doesn’t currently have any mechanism for doing this easily. AWS, of course, has had import and export features for some time now, and can even convert VMware, Hyper-V and Xen images too.
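As a concrete illustration of the EC2-compatible API point above, here is a minimal sketch that points the boto library at a hypothetical CloudBridge or Eucalyptus front end instead of AWS. The endpoint, port, path and credentials are all placeholders; check your own cloud’s documentation for the real values.

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Placeholder endpoint and credentials for a private, EC2-compatible cloud.
    region = RegionInfo(name="private-cloud", endpoint="ec2.example.internal")
    conn = boto.connect_ec2(
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
        region=region,
        is_secure=False,
        port=8773,
        path="/services/Eucalyptus",
    )

    # The same boto calls you would use against AWS now run against the private cloud.
    print(conn.get_all_instances())

The appeal is that tooling written against Amazon’s API can be reused largely unchanged, which is exactly what CloudBridge and Eucalyptus are aiming for.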
Real World PaaS Testing by Infoworld
Infoworld published the results of real-world tests on seven different platform-as-a-service (PaaS) vendors. Of the seven, CloudBees and Cloud Foundry came out on top.
A PaaS is supposed to manage your app deployment, set up your virtual infrastructure and app servers, and allow you to provision the particular instances of VMs that you need to run your apps. Infoworld looked at Amazon, Google’s App Engine, Microsoft’s Azure, VMware’s Cloud Foundry, Salesforce’s Heroku, CloudBees, and Red Hat OpenShift. I found Infoworld’s review of particular interest because there are so few comparative reviews of PaaS products.
How Facebook Visualizes Cache Health
One of Facebook’s better-known development initiatives is its use of Memcache to handle the large influx of data and keep performance up. And this week we are treated to a post in which Facebook engineer Sean Lynch explains how he and his team came up with a monitoring tool called Claspin, which uses heat maps to present the status of their systems in an easy-to-interpret format. The post can be found here.
Actually, Memcache is just one of two caching systems in use at Facebook. The other is TAO, a caching graph database that does its own queries to MySQL. Claspin is used to monitor both.
Lynch started thinking about ways to visualize all the relevant metrics so that trends would be easy to spot, given the thousands of charts involved and the racks upon racks of servers running to support Facebook’s applications and Web services.
The post describes the process Lynch went through to arrive at the tool now used by the social network’s operations team. The journey is instructive in understanding how you might develop your own monitoring tools and how to incorporate what Lynch calls the “tribal knowledge” of his team into choosing which metrics best identify a poorly performing host.
He settled on using two-dimensional heat maps that can pack a lot of visual information in a small space. “On a 30-inch screen we could easily fit 10,000 hosts at the same time, with 30 or more stats contributing to their color, updated in real time—usually in a matter of seconds or minutes,” he writes. Here is an example of the visualization, with each square representing a particular server, red being an issue and green meaning all systems are good.
As you can see in the screen capture, “mousing over a host draws an outline around its rack and pops up a tooltip with the hostname, rack number, and all the stats Claspin is looking at for that host, with the values colored based on Claspin’s thresholds for that stat,” he posted.
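To make the thresholding idea concrete, here is a toy sketch, not Facebook’s actual code, of how per-host stats might be reduced to a single color; the stat names and threshold values are made up.

    # Hypothetical stat names and thresholds: (warning level, critical level).
    THRESHOLDS = {
        "get_errors_per_sec": (5, 50),
        "latency_ms":         (10, 100),
        "evictions_per_sec":  (100, 1000),
    }

    def host_color(stats):
        """Return 'green', 'yellow' or 'red' for a dict of per-host stat values."""
        color = "green"
        for name, value in stats.items():
            warn, crit = THRESHOLDS[name]
            if value >= crit:
                return "red"
            if value >= warn:
                color = "yellow"
        return color

    print(host_color({"get_errors_per_sec": 2, "latency_ms": 8, "evictions_per_sec": 40}))  # green
    print(host_color({"get_errors_per_sec": 7, "latency_ms": 8, "evictions_per_sec": 40}))  # yellow

A real dashboard would presumably vary the shade with how far past a threshold each stat has gone; this sketch just picks the worst case per host.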
Perhaps the ultimate testimony to the power of his tool is how widely it has spread. The tool’s name comes from a protein that monitors for DNA damage in a cell, and like its namesake, Claspin has found uses well beyond its original purpose at Facebook. “It’s quite gratifying to walk around the campus and see Claspin up on the wall-mounted screens of teams I didn’t even know were using it,” he writes.