Why Microsoft’s Hyper-V isn’t really gonna cut it

[Screenshot: the Hyper-V Server console]

This is the console screen you get when you run the bare-metal version, called Hyper-V Server: the stripped-down version of Windows that allows you to run virtual instances on it, similar to VMware’s ESX. Notice the return to the days of DOS? And that’s after hunting down a NIC that has 64-bit drivers and can be recognized by the OS.

I have a dream

It seems fitting to resurrect an old column that I wrote seven years ago in honor of Dr. King’s birthday and the coronation activities tomorrow. I made a few changes, but it still works today. 

Twenty-some years ago, the PC was invented and our desktops would never be the same. And now we must face the tragic fact that our desktops are still not free. Twenty years later, our lives are still sadly crippled by the manacles of frequent crashes and by numerous security problems. Twenty years later, we still live on a lonely island of poverty in the midst of a vast ocean of material prosperity. We are still languishing in the corners of American society, exiles in our own technological land.

So I have come here today to dramatize an appalling condition. Windows has to go from our desktops. It is time for the ’nixes (Unix, Linux and Apple’s OS X) to play a larger role, and for Microsoft to get with the program and fix this broken buggy whip.

I say to you today, my readers, that in spite of the difficulties and frustrations of the moment, I still have a dream. It is a dream deeply rooted in the American dream.

I have a dream that one day this nation will rise up and live out the true meaning of productivity. I have a dream that all PCs will live up to their original marketing potential, and free their owners from the evils of Vista and frequent application crashes. I have a dream that one day our desktop PCs, sweltering with the heat of their overclocked CPUs, will be transformed into an oasis of freedom and reliable operations.

I have a dream that one day all of my applications will be able to sit down together at a table of brotherhood and play nicely on my PC, no matter what version of drivers and odd video adapter is inside my computer.

I have a dream that your children and mine will one day live in a nation where they will not be judged by the version of the operating system running on their desktop computers, but by the content of the work on their hard disks.

I have a dream today.

This is my hope. With this faith we will be able to work together, to pray together, to struggle together, to stand up for freedom together, knowing that we will be free one day from having to reboot our computers every day, from crashed applications and inexplicable blue screens and error messages.

How I wish most of us could free ourselves from the tyranny of Windows and have a desktop operating system that didn’t crash frequently, supported our legacy applications, was easy to install, and wasn’t a security sinkhole. Dream on. When I wrote this back in 2002, we didn’t have XP, we didn’t have Vista, and we didn’t have Mac OS X. Even with these new operating systems, we still have a very unstable OS, driver issues (still) with Vista, and more security issues by the week.

But a guy can dream, can’t he?

Back to the future with Windows NT

To start off the new year right, I decided to go back in time and see what I could learn from running an ancient (by computing standards, anyway) operating system and software combination. The goal: to appreciate how far we have come (or not), and to see whether I could actually get real work done. The idea came out of some conversations that Jason Perlow and I had. Jason is a fellow blogger and IT consultant who now works for IBM. He and I at one point in our lives (although not at the same time) lived in Port Washington, N.Y., and spent a lot of time with OS/2, but don’t let that influence you.

I picked NT v4 as my starting place. This operating system is more than ten years old, and was probably the last OS Microsoft created that had some real simplicity to it. As an indication of its staying power, there are still many corporate servers running it, even though Microsoft has tried to stamp it out, turn off support, and push people to upgrade to something more recent. To get around the driver issues and other challenges, I decided to set up a virtual machine running NT, using VMware’s Fusion on my Mac (just to make it interesting).

Jason and I share the hypothesis that the OS doesn’t really matter anymore, and that if you can get beyond some of the issues of running older software and applications, you may find that an older OS is perfect for your needs. We also thought that running an ancient OS was a good way to see how far we have come with newer computers, and perhaps a way to extract some additional performance, because older OSs are smaller and theoretically could run faster on newer PCs.

To get NT working properly, you need to find versions of software, either online or in someone’s attic, that are not so old as to be useless. First off, I had to install Service Pack 6, and I needed the right version of the SP to match the encryption level of the OS. You then install the VMware Tools software, which supplies the drivers to get the most out of your machine. Then you install Microsoft Office 2000 – the most recent version of Office that will run on NT. I messed up by installing the Tools package after Office, and VMware didn’t like that. Office 2000 has the unfortunate side effect of updating your NT installation with an almost-working version of Internet Explorer v5. The reason I say almost-working is that you need another piece of software, called the Windows Installer, to get anything else installed on the machine. I couldn’t get past this point, however.

I also put the Firefox v2.0.0.20 browser on the machine, which is a fairly recent version, but apparently not recent enough, as I had problems with certain Web sites. I had to update my Adobe Flash plug-in too. Finally, I added AIM v5.9, an older version of the Instant Messenger software. Skype doesn’t have any version that will run on NT, which is too bad.

So what I found was that the VM version of NT was pretty snappy. It would boot from scratch in under 30 seconds, and faster still from the suspended VM state. I liked the old-fashioned windows, the lack of glitz, and the raw simplicity of the controls. No Aero Glass junk for this OS! Another plus of using VMs is that you don’t have to worry about personal firewalls and anti-virus as much – you can set up a protected environment and keep it isolated from your host machine, which is good because most AV programs stopped supporting NT a long time ago.

All of my Office documents – some created on Macs, some on Windows – came up just fine in Office 2000, because I am not using the 2007 version that introduced a new file format incompatible with the older versions. Shame on you, Microsoft – and I know from hearing from some of you how vexing that version can be.

The other thing I noticed is how important the browser is to today’s computing world: if you aren’t willing to stay current with your browser, you quickly get into trouble with many Web sites. The coming of IE v7 is a good case in point, and I know there will be a lot of grief on both ends – the people who adopt the new browser and find sites that don’t work in it, and the sites that want to use its new features and piss off the people who haven’t upgraded yet.

I will have more to report on this experiment as I spend more time back in NT land. For those of you who want to try this on your own, email me privately and I will give you more specific tips.

Making Sense of Microsoft’s Azure (Infoworld)

Last week Microsoft announced its cloud computing effort, called Azure. Fitting in between the current offerings from Google and Amazon, it represents a very big step toward moving applications off the desktop and out of a corporation’s own datacenters. Whether it will have any traction with corporate IT developers remains to be seen. Think of Microsoft as bringing more of a Wild West feel to the whole arena of cloud computing.

How to distinguish the players? If we think back to the late 1880s, Amazon provides the land grants and the raw Linux and Windows acreage to build your applications upon. Google’s General Goods Store will stock and give away all the APIs a programmer could ever use. And some of the scrappy prospectors who come to build the new towns are from Microsoft. Ballmer as Billy the Kid, anyone?

Enough of the metaphors. Mary Jo Foley’s excellent explanation of the different bits and pieces of Azure here is worth reading. But the first instance of Azure is long on vision and short on actual implementation: Microsoft calls it a “community technology preview”; the rest of us would call it an “alpha” version, given how long it takes them to actually get things nailed down and working properly (version 3 is usually where most of us start to think of their code as solid). Granted, Google calls many of its applications beta when they are in much better shape – I mean, Gmail has been in beta about 17 years now.

What are some of the issues to consider before jumping on the Microsoft train? First, consider what your .Net skill set is and whether your programmers are going to be using JScript or something else for the majority of their coding work. The good news is that Azure will work with .Net apps right from the start. (Support for SOAP, REST, and AJAX is coming, they promise.)

Microsoft spoke about testing and debugging these apps on your local desktop, just as you do now, and then deploying them in its cloud. The bad news is that you will probably have to rewrite a good portion of these apps so that the user interface and data extraction logic can work across low-bandwidth connections.

CTO Ray Ozzie, in a CNET interview, says that “fundamentally the application pattern does have to change. Most applications will not run that way out of the box [on Azure].” While good programming practice today is to separate Web page content from formatting instructions, most programmers assume they are running everything on the same box. Remember how miserable LAN apps were back in the days of Token Ring? We have 10 Gigabit Ethernet now, and people have gotten sloppy.

This is no small issue, and my prediction is that most apps will need some major surgery before they can be cloud-worthy. One wag has already placed his bets: Stephen Arnold writes in his blog Beyond Search, “I remain baffled about SharePoint search running from Azure. In my experience, performance is a challenge when SharePoint runs locally, has resources, and has been tuned to the content. A generic SharePoint running from the cloud seems like an invitation to speeds similar to my Hayes 9600 baud dial up modem.” For those of you too young to remember, that means verrrrry slow.

While you are boning up on .Net, you might also want to get more familiar with Windows Server and SQL Server 2008, because many of the same technologies will be used for Azure. One thing that won’t be in Azure, apparently, is Hyper-V: a different hypervisor will run the Azure VMs. Too bad – I was just getting comfortable with Hyper-V myself. Nobody said this was gonna be easy.

Speaking of servers, Microsoft is in the midst of a major land grab of its own, building out data centers in multiple cities and beefing up the ones it already has.

More good news is that Microsoft plans on using Azure to run its own hosted applications, and is in the middle of moving them over to the platform (so far, only Live Mesh is there today; here is an explanation of what it does, for those who are interested). Right now, all Azure apps run in its Quincy, Wash. data center, 150 miles east of Redmond, but you can bet this will change as more people use the service. At least Microsoft tells you this much; Amazon treats the number and location of its S3 and EC2 data centers as a state secret.

Of course, the big attraction of cloud computing is scalability, and Ozzie, in the same CNET interview, had this to say about it: “Every company right now runs their own Web site and they’re always afraid of what might happen if it becomes too popular. This gives kind of an overdraft protection for the people who run their Web sites.” I like that. But given the number of outages experienced by Amazon and Google over the past year, what happens when we have bank failures in Dodge?

Second, consider where you want to homestead your apps: just because you want to make use of the Microsoft services doesn’t mean your apps have to reside on Microsoft’s servers. If you are happy with all the Google Goods, stay with that. If you prefer someone else’s scripting language and tool set, likewise.

What Microsoft is trying to do is manage the entire cloud development lifecycle, similar to how it already manages local app development with Visual Studio and the .Net tools. Yes, Amazon will let you build whatever virtual machine you wish in your little piece of cloud real estate, but Microsoft will try to work out the provisioning and set up the various services ahead of time.

Next, the real trouble with all of this cloud computing is how any cloud-based app is going to play with your existing enterprise data structures, which aren’t in nice SQL databases and may even be scattered around the landscape in a bunch of spreadsheets or text files. Snaplogic (and Microsoft’s own Popfly) have tools to mix and mash up the data, but figuring out the provenance of your data isn’t taught in traditional CompSci classes and is largely unheard of around most IT shops, too. Do you need a DBA for your cloud assets? It is 10 pm; do you know what your Widgets are doing with your data?

Next, pricing isn’t set, although for the time being Azure is free for all. If we look at what Amazon charges for the kind of real estate that Microsoft is offering (2,000 VM machine hours, 50 GB of storage, and 600 GB/month of bandwidth), that works out to about $400 a month. Think about that for a moment: $400 a month can buy you a pretty high-end dedicated Linux or Windows server from any number of hosting providers, and then you don’t have to worry about bandwidth and other charges. And there are many others, like Slicehost here in St. Louis, who can sell you a VM in their data center for a lot less, too.
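
If you want to check my math, here is a quick back-of-the-envelope calculation in Python. The rates are my own assumptions, roughly in line with 2008-era pay-as-you-go list prices – not quotes from either vendor:

    # Microsoft's quoted Azure allotment priced at Amazon-style
    # pay-as-you-go rates. The rates below are assumptions, roughly
    # 2008-era list prices, not either vendor's actual quote.
    VM_HOURS = 2000            # VM machine hours per month
    STORAGE_GB = 50            # GB of storage
    BANDWIDTH_GB = 600         # GB of outbound transfer per month

    RATE_PER_VM_HOUR = 0.15    # $/hour (assumed)
    RATE_PER_GB_STORED = 0.15  # $/GB-month (assumed)
    RATE_PER_GB_XFER = 0.17    # $/GB (assumed)

    total = (VM_HOURS * RATE_PER_VM_HOUR
             + STORAGE_GB * RATE_PER_GB_STORED
             + BANDWIDTH_GB * RATE_PER_GB_XFER)
    print("$%.2f per month" % total)  # about $410 with these rates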

However, Amazon’s S3 storage repository is amazingly cheap, and it got cheaper as of this week: Amazon now charges less per GB the more you store. Microsoft should set tiered, fixed monthly pricing and make the storage component free. I am thinking a basic account would be free, and $100 a month would be just about right for the next level. Look at how Office Live Small Business sells its hosting services for an indication of what to expect.
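
To see how that kind of tiered pricing works, here is a minimal sketch. The tier boundaries and rates are hypothetical stand-ins, because the point is the marginal structure: each additional GB is billed at the rate of the tier it falls into.

    # Marginal tiered pricing: each GB is billed at the rate of the
    # tier it lands in. Boundaries and rates are hypothetical
    # stand-ins, not Amazon's actual price list.
    TIERS = [
        (50000, 0.15),         # first 50 TB at $0.15/GB-month
        (100000, 0.14),        # next 50 TB at $0.14/GB-month
        (float("inf"), 0.13),  # everything beyond at $0.13/GB-month
    ]

    def monthly_storage_cost(gb):
        cost, floor = 0.0, 0
        for ceiling, rate in TIERS:
            billable = min(gb, ceiling) - floor
            if billable <= 0:
                break
            cost += billable * rate
            floor = ceiling
        return cost

    print(monthly_storage_cost(75000))  # 11000.0, vs. 11250.0 at a flat $0.15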

Finally, take a moment to do a browser inventory as best you can. You’ll find that you are supporting way too many different versions from different vendors, and getting people to switch just because the IT sheriff says so is probably impossible. If you are going to enter the brave new world of cloud computing, this is yet another indication of where the Wild West begins for you.

The new real-time researcher

For those of you who live in America, I hope you got some sleep last night after you voted. But apart from the obvious result, our election also marked some radical changes in how we consume information, and I wanted to share some thoughts. It was a historic night for these reasons, too.

The Internets have transformed those of us who are information junkies into our own real-time researchers, trend spotters, and fact checkers. A combination of better search analytics, new technologies such as Twitter and live feeds, and even the relatively innocuous Facebook “I’ve voted” counter has put some very powerful tools in the hands of ordinary citizens.

It also helps that we had three relative rookies as our election-night network news anchors: while their talents (and take-home pay) are considerable, many of us haven’t had the relationship with Katie, Brian, and Charlie that we once had with Tom, Peter, and Dan – or even Walter, Chet, and David. Part of this is the waning influence of network TV; part of it is the movement away from newspapers and newsmagazines. (US News and World Report going monthly? Who would have thought?)

But the real reason is that the technologies enabling our own exploration of the world have become easier to use, more powerful, and within our grasp, even if we don’t have exceptional search skills. Let’s examine each of the technologies that have contributed to this state of affairs.

First is the ability to keep on top of what people are Googling. In a post last month on ReadWriteWeb.com, Marshall Kirkpatrick wrote about how listeners did their own fact checking by Googling certain terms during the VP debate. Looking at the aggregated searches, we can see that many people learned exactly which article of the Constitution covers the powers of the Vice President – and that Biden got it wrong.

From the Google stats, we also see that Tina Fey has become a political personality in her own right – even more popular among searchers than Mr. Biden himself. Having watched many of the SNL skits, I found myself getting confused over who was the real Palin, and indeed didn’t do well on the Chicago Tribune’s photo quiz to distinguish the two women.

Speaking of those SNL skits, how many of you first watched them online rather than on your living room TV? What was curious for me was that I first went to YouTube to find the videos, only to realize that NBC.com was posting them for the next-day audience on its own site. About time they figured that out. How long did it take you to realize it as well? Maybe the networks finally understand the word-of-mouth, day-after effect and can capture some of those page views for themselves. Do we really need an HDTV picture to see these videos when a postage-stamp 320×240 portion of a Web browser can be just as satisfying?

In past elections, I was online most of the night, looking at the major network news Web sites and tracking the exit polls and ballots. Last night I still did this, but there were other sites, such as Mahalo.com, that aggregated historical information so I could put the results into context and see how voting patterns from previous elections compared with this year’s. However deep I wanted to dive, I could find it with a few mouse clicks. It made the broadcast blather from the major networks even more irrelevant to me.

Having waited in line for about 70 minutes to vote yesterday morning, I was curious to see how many people were voting early in the day, so I watched Facebook’s real-time vote counter, which passed a million votes early on and topped out somewhere around 4 million by the time the polls closed in the West. Granted, this counter was more of a stunt than an analytical tool, but it gave me a very real indication that yes, we as Americans (or at least those Americans who are active, registered Facebook users) were voting – at the rate of several hundred every second, all day long.

What about Twitter and other live feeds? We could actually follow “reports” from various self-styled correspondents and see what they found during the day. The local St. Louis paper hired a few students to do just that, and you could read their Tweets here.

One of the students, Ian Darnell, summed it up this way: “This is it. This is our time. This is how history has unfolded before us.” He could have said “historicity,” which was one of yesterday’s hot search terms.

More Gmail Contacts annoyances

So I have either 6556 or 9788 contacts in my Gmail address book. Why the difference? It has to do with the way the contacts are displayed, I kid you not.

I have tried to get an answer from Google. They sent me to this blog post, which unfortunately doesn’t really explain what I am observing. Yes, the new UI no longer populates your “suggested contacts” list with people you reply to.

Google updated its Gmail interface substantially last year, and since then the new UI just doesn’t work for me: it takes forever to load, crashes my browser frequently, and has a big bug that, for the life of me, I can’t understand why it hasn’t been fixed. If you have more than 20 or so groups of contacts (I have about 50; I use this feature a lot), the new UI displays only the first 20. Google could easily fix this by adding a scroll bar to the groups listing when you want to add someone to a particular group. Right now, you can’t scroll down past the first 20 group names.

Here are screenshots showing you before (below) and after the new UI is turned on (at left), and the relevant numbers of contacts displayed.

I asked Google to explain this. Still waiting for a response.

Seinfeld/Gates commercials: whassup with Microsoft

You can see them here. Did anyone else fail to get the joke? As of this writing, there are two in the apparently continuing and very painful series. My friend James Gaskin says that Microsoft has successfully transferred the painful Vista user experience to the TV commercial medium. Given the high-priced talent that was no doubt tapped for these episodes, we can truly say that Microsoft is now a master of this domain – not that there is anything wrong with that.

Giving thanks to Bill Gates

So His Billness is set to retire this summer, stepping down from that small software company outside of Seattle that he founded at about the same time I was starting my own humble career in IT. We are about the same age (well, he is a bit younger) and both have three kids (his are a lot younger than mine). While I am not ready to retire (my own funds are, shall we say, a bit more modest), it is interesting to see how my own career has tracked Gates’. And I just wanted to say: thanks, Bill. Thanks for making my career so interesting and exciting. If Microsoft (and others – I don’t want to blame just them) had made better products, I probably would have had less to write about as a tech journalist and fewer support issues when I was on the front lines, toiling in the Information Centers of yore.

Lately, I say thanks, Bill, for Vista: if you had stuck with XP, we would be bored writing about it by now, and using it wouldn’t be as challenging. Vista has given IT people full employment for years to come as we track down those drivers, buy more RAM, and mess with Aero. And thanks for all the fun with Yahoo over the past couple of months, too. That has been very entertaining, even if it is mostly watching Ballmer doing another one of his famous hyper-kinetic dances. He learned from the master, to be sure.

I especially want to thank Bill for publicly cursing me out over some of the op/ed pieces that I wrote for Network Computing. There was this scene in one of those posh Palm Springs hotels where I met him randomly in the lobby and innocently asked what he thought of my articles. (I guess this was around 1991.) For what seemed like an eternity but was just a few minutes, he proceeded to use most of George Carlin’s famous seven words and told me exactly how little he valued my ideas, writing style, publication, and I think my ancestry and family background too (memory is a bit faint on those last couple of points). Why thank him? Well, it gave me my requisite story to tell people about my own Gates Encounter. There were other times when I interviewed him, back in those early days when he only had a couple of WaggEd hall monitors nearby, and they were interesting, but they don’t make as good stories.

I also want to thank Bill for killing off a bunch of products that we are all better off not having around anymore: things like Microsoft Bob, OS/2, Netware, DOS, Windows ME, Lotus 1-2-3, WordPerfect, and WebTV. But not NT: they can’t seem to kill that sucker no matter how hard they try. And speaking of NT, thanks, Bill, for producing an OS so insecure that it helped generate one of my favorite PC Week cover stories: we wrote about how anyone could take over a server with a simple boot floppy and physical access to the machine. Ah, those were the days! Remember floppies? Thanks for making software so big it now only fits on DVDs! Forget about floppies. Too bad we can’t forget about Hotmail, ActiveX, and MSN; they have generated lots of extra support hours for me over the years, and all deserve to be retired now.

And how can you not appreciate all the work Microsoft has done to introduce such great phrases into the IT lexicon: things like “fear, uncertainty and doubt,” which is what they say before they actually write one line of code; or “we are on a product death march,” which is what they say when they are close to releasing their first beta; or “our software is now code complete,” which is what they say when they are on their second beta; or “our software is now released to manufacturing,” which is what they say when they first take money from paying customers. Who could forget such phrases as “cut off Netscape’s air supply” from the monopoly trials of the 1990s? Now Netscape is just a quivering mass of open-source jello somewhere inside the Googleplex, and Microsoft is still a monopolist, but the world is supposedly better off.

Speaking of lawsuits and monopolies: if you are a lawyer, you probably have your own special series of thank-yous for Bill. Microsoft has been great at feeding you over the years, to the tune of some $9 billion. At one time, the company had 130 different active suits underway, involving companies such as AT&T, IBM, and Sun, plus the state of Montana. Indeed, Sun owes its own special thanks: it got a bunch of cash from Microsoft for its troubles, plus all those times Scott McNealy called Windows a hairball of an operating system and used Microsoft’s foibles to amuse his audiences.

So let’s all thank Bill for all his years of service and congratulate him on his upcoming retirement. He has served us well and made our industry entertaining, fun, and even profitable for some. This column is drawn from a series of (hopefully humorous) keynote speeches that I will be giving this month as my own personal tribute. If you want to hire me to continue the celebration and come speak at your organization, let me know.

4 GB: The next RAM barrier

Back in the early days of DOS, we had a 640 kB memory barrier. I know, it seems quaint now – something you would find on the chipset of an audio greeting card rather than a real computer – but we spent a lot of time juggling applications to fit in that space. We had special hardware cards that could address more memory, and memory managers (remember Quarterdeck?) that allowed us to run bigger apps. (And those of us who are really old even remember the 64 kB barrier of the earliest Apple // computers!)

Now we are approaching another memory barrier, only this time it is 4 GB. That is the most memory a 32-bit processor can directly address. It is a problem particularly for servers, and it gives me an eerie sense of déjà vu all over again.
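
The arithmetic behind the barrier is simple: a 32-bit pointer can name only 2^32 distinct byte addresses. Here is the whole story in a few lines of Python:

    # Why 32 bits tops out at 4 GB: a 32-bit pointer can name only
    # 2**32 distinct byte addresses.
    print(2 ** 32)             # 4294967296 bytes
    print(2 ** 32 // 2 ** 30)  # 4 GB

    # The old DOS ceiling was the same tune in a lower key: 20 address
    # lines gave 2**20 bytes (1 MB), and reserving the upper chunk for
    # hardware left the famous 640 kB for programs.
    print(2 ** 20 // 2 ** 10)  # 1024 kB of total address space

And in practice a 32-bit machine sees even less than 4 GB, because part of that address space is carved out for the video card and other devices.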

Four gigs seemed like a lot of memory just a few years ago. We didn’t really need to worry, and our desktop operating systems seemed comfortable inside it. Then Microsoft got greedy with Vista, RAM got much cheaper, and apps got bigger. Before you knew it, we were once again running out of headroom.

What is driving these bigger applications is the popularity of both virtualization and database servers. Virtualization is especially memory-intensive: if you want to take advantage of this technology, you have to bulk up your machine with lots of memory and disk. And the more RAM you throw at database servers, the happier they are.
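
To make that concrete, here is a rough capacity sketch for a modest virtualization host. The overhead figures are my own assumptions for illustration, not any vendor’s numbers:

    # Rough RAM budget for a virtualization host: the guests'
    # allocations plus per-VM bookkeeping and a host reserve. The
    # overhead numbers are illustrative assumptions, not vendor specs.
    GUEST_RAM_GB = [4, 4, 2, 2]  # planned memory for each VM
    PER_VM_OVERHEAD_GB = 0.25    # assumed hypervisor bookkeeping per VM
    HOST_RESERVE_GB = 2          # assumed for the host OS / hypervisor

    needed = (sum(GUEST_RAM_GB)
              + len(GUEST_RAM_GB) * PER_VM_OVERHEAD_GB
              + HOST_RESERVE_GB)
    print(needed, "GB")          # 15.0 GB

Even four modest guests blow well past what a 32-bit host can address.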

Another big consumer of RAM is the video card and the way it interacts with system memory. Some cards share their memory space with the PC, which means that when you run graphics-intensive operations, you take some of that RAM away from all your applications. Again, we’ve heard this tune before. And most of us haven’t paid much attention to the video cards in our servers, because we didn’t think they needed much horsepower there. After all, we weren’t planning on running GTA4 on our servers, right?

There are solutions: run the 64-bit versions of Windows, or Linux, or even the Mac OS, all of which can address memory beyond 4 gigs quite nicely. This is nothing new on the Mac or Linux side, which have had 64-bit OSs for many years. Indeed, if you go back to the early 1990s, we had DEC Alphas and Silicon Graphics’ Irix and all sorts of workstations with 64-bit processors and 64-bit OSs. Some apps are now available only in 64-bit versions, such as Microsoft Exchange 2007. Others, like Oracle 11g, are still available in both 32- and 64-bit versions.
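
Before you plan a migration, it helps to confirm what you are actually running today. Here is a minimal sketch using nothing but Python’s standard library; note that it reports on the interpreter’s own build, which on Windows can be a 32-bit process even on a 64-bit OS:

    import platform
    import sys

    # A 64-bit build has pointers wider than 32 bits, so its maximum
    # container index is bigger than 2**32.
    print("64-bit process:", sys.maxsize > 2 ** 32)
    print("machine:", platform.machine())    # e.g. 'AMD64' or 'x86'
    print("platform:", platform.platform())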

The problem is with Windows, and particularly with finding the right 64-bit drivers for these machines. Rewriting drivers isn’t sexy stuff; it is generally the province of some very talented coders who are dedicated enough to stick with the project. One engineering manager I spoke to told me it took his team six months to rewrite his driver set, and it wasn’t a fun six months at that. “Microsoft’s driver signing requirements are intense,” he told me. “And at the time we were engaged with them, they were adding and changing tests during the process without informing us, which increased the dev cycles and cost.”

This driver issue is tricky, because you don’t usually think about all the drivers you will need to upgrade when you are looking at your server portfolio, and generally you don’t know what you need until you install a test machine and see what isn’t supported. Then the fun begins.

So take some time to plan out your strategy if you are running out of RAM. Take a closer look at the 64-bit version of Windows Server 2008 and whether it will run on your existing hardware. And while you are at it, look at Apple’s Xserve too: it might be a lower-cost way to run all those virtual machines on a true 64-bit platform.

(This appeared in Baseline Magazine this week.)