The AWS/VMware partnership

Disclosures: My employer is an AWS partner. This post is solely my personal opinion and does not represent the opinion of my employer or AWS. I have no knowledge of this partnership beyond what has been publicly announced. I also own a small number of shares of Amazon stock.

Last week, Amazon Web Services (AWS) and VMware announced a partnership that would make AWS the preferred cloud solution for VMware. AWS will provide a separate pool of hardware running VMware’s software, managed by VMware staff. Customers can then provision a VMware environment from that pool that looks the same as an internal data center.

As others have pointed out, this is essentially a colocation service that just happens to be run by Amazon. I share that assessment, but I don’t think AWS blinked. It’s true that AWS has eschewed hybrid cloud in favor of pure cloud offerings, and they’ve done quite well with that strategy.

I don’t think the market particularly cares about purity, nor do I think the message will get muddled. Here’s how I see this deal: VMware sees people moving workloads to the cloud, and it knows that the more that trend continues, the smaller its market becomes. Meanwhile, AWS is printing money but is aware of the opportunity to print more. Microsoft Azure, despite having an easy answer for hybrid, doesn’t seem to be a real threat to AWS at the moment.

But I don’t think AWS leadership is stupid or complacent, and this deal represents a low-risk, high-reward opportunity for them. With this partnership, AWS now has a way into organizations that have previously been cloud-averse. Organizations can dip their toes into “cloud” without having to re-tool (although this is not the best long-term strategy, as @cloud_opinion points out). As an organization becomes comfortable with the version of the cloud it’s using, it becomes easier for AWS sales reps to talk it into moving various parts to AWS proper.

Now I don’t mean to imply that AWS is a wolf in sheep’s clothing here. This deal seems mutually beneficial. VMware is going to face a shrinking market over time; with this deal, they at least get to buy themselves some time. For AWS, it’s more of a long game, and they can put as much or as little into this partnership as they want. For both companies, it’s a good argument to keep customers from switching to Microsoft’s offerings.

What will be most interesting is to see whether Google Cloud, the other major infrastructure-as-a-service (IaaS) provider, responds. Google’s strategy, up until about a year ago, seemed to be “we’re Google, of course people will use us”. That has worked fairly well with startups, but it has gained very little traction in the enterprise. Google can continue to be more technically focused, but that will hinder its ability to get into major corporations (especially those outside the tech industry).

I don’t see a natural fit at this point (though I also wouldn’t have expected AWS and VMware to pair up, so what do I know?). One interesting option would be for Google to buy Red Hat (disclosure: I also own a few shares of Red Hat) and make OpenShift its hybrid solution. I don’t see that happening, though, as it doesn’t seem like the right move for either company.

The VMware-on-AWS offering will not be generally available until sometime next year, so we have a little time before we can see how it plays out.

The strangest bug

Okay, this is probably not the strangest bug that ever existed, but it’s certainly one of the weirdest I’ve ever personally come across. A few weeks ago, a vulnerability in OS X was announced that affected all versions but was only fixed in Yosemite. That was enough to finally get me to upgrade from Mavericks on my work laptop. I discovered post-upgrade that the version of VMware Fusion I had been running did not work on Yosemite. Since VMware didn’t offer a free upgrade path, I decided not to spend the company’s money and switched to VirtualBox instead (see sidebar 1).

Fast forward to the beginning of last week when I started working on the next version of my company’s Risk Analysis Pipeline product. One of the executables is a small script that polls CycleServer to count the number of jobs left in a particular submission and blocks the workflow until the count reaches 0. It’s been pretty reliable since I first wrote it a year ago, and hasn’t seen any substantial changes.
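
For the curious, the heart of that script is just a poll-and-sleep loop. Here’s a minimal sketch of the idea in Python 2 (the era of urllib2 and py2exe); the endpoint path, response format, and function names below are invented placeholders for illustration, not CycleServer’s actual REST API:

    # Minimal sketch of a blocking poll loop (Python 2 / urllib2).
    # The endpoint path and plain-integer response body are hypothetical
    # placeholders -- CycleServer's real API differs.
    import time
    import urllib2

    def wait_for_submission(base_url, submission_id, username, password,
                            poll_interval=60):
        """Block until the job count for a submission reaches zero."""
        # Register credentials so urllib2 can answer the server's
        # 401 challenge (see sidebar 2).
        mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, base_url, username, password)
        opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr))

        url = '%s/submissions/%s/job_count' % (base_url, submission_id)
        while True:
            count = int(opener.open(url).read().strip())
            if count == 0:
                return
            time.sleep(poll_interval)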

Indeed, it saw no changes at all when I picked up development again last week, but I started seeing some unusual behavior. The script would poll successfully six times and then fail every time afterward. After adding some better logging, I saw that it was failing with HTTP 401, which didn’t make sense because it sent the credentials every time (see sidebar 2). I checked the git log to confirm that the file hadn’t changed. I spent some time fruitlessly searching for the error. I threw various strands of spaghetti at the wall. All to no avail.

I knew it had to work in general, because this is exactly the sort of thing our customers would notice, particularly since this failure would mean the workflow never completed. I wondered if something had changed when I switched from VMware Fusion to VirtualBox. After all, I did change the networking setup a bit at that point, but I would expect the failure to be consistent in that case. (Well, to always fail, not to work six times before failing.)

So I tried the patch release I had published a few days before. It worked fine, which ruled out my local test server being broken. Then I checked out the git tag of that patch release and recompiled. The rebuild failed in the same way. This was very perplexing, since I had released the patch version after the OS X upgrade and resulting VM infrastructure changes.

Out of ideas, I took a colleague’s suggestion and reinstalled Python. I re-ran the Python installer and built again. Suddenly, it worked. I’m at a loss to explain why. Maybe there was something different enough about the virtualized network devices that confused py2exe when it built. Maybe there’s some sort of counter in urllib2 that implements the plannedObsolescence() method. Whatever it was, I decided I don’t really care. I’m just glad it works again.

Sidebar 1

The conversion process was pretty simple. For reasons that I no longer remember, I had my VMware disk images in 2 GB slices, so I had to combine them first. VirtualBox supports VMDK images, though, so it was quick to get the new VMs up and running. My CentOS VM worked with no effort. My Windows 7 VM was less happy: I ended up having to reinstall the OS to get it to boot in anything other than recovery mode. It’s possible that I botched something during that reinstall, but the timeline doesn’t support that. In any case, I’m always impressed by the way my virtual and physical Linux machines handle arbitrary hardware changes with no problem.

Sidebar 2

I also learned something about how the HTTP interactions work. I’d never had much reason to pay attention before, but it turns out that the client’s first call to the REST API is met with a 401, and only then does the client resend the request with authentication attached and get a 200. This probably comes as no surprise to anyone who has dealt with HTTP authentication, but it was a lesson for me. Never stop learning.
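
To make the exchange concrete, here’s a rough sketch of it in Python 2 using httplib (the host, port, and path are made-up placeholders):

    # Sketch of the HTTP Basic auth challenge/response (Python 2 / httplib).
    # The host, port, and path are hypothetical placeholders.
    import base64
    import httplib

    conn = httplib.HTTPConnection('cycleserver.example.com', 8080)

    # The first request carries no credentials, so the server challenges us.
    conn.request('GET', '/api/jobs')
    resp = conn.getresponse()
    print resp.status                         # 401
    print resp.getheader('WWW-Authenticate')  # e.g. Basic realm="..."
    resp.read()  # drain the body so the connection can be reused

    # Repeat the request with an Authorization header; now we get a 200.
    token = base64.b64encode('user:password')
    conn.request('GET', '/api/jobs',
                 headers={'Authorization': 'Basic ' + token})
    resp = conn.getresponse()
    print resp.status                         # 200

urllib2’s HTTPBasicAuthHandler performs that retry automatically, which is why the polling script never has to handle the 401 itself (when everything is behaving, anyway).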

Sidebar 3

I didn’t mention this in the text above, so if you made it this far, I applaud your dedication to reading the whole post. I spent the first half of my time on this problem ruling out a self-inflicted wound: I had already lost a fair amount of time tracking down a bug I introduced while trying to de-lint one of the modules. More on that in a later (and hopefully shorter) post.

A quick summary of green-er computing

Last week a Twitter buddy posted a blog entry called “E-Waste not, Want not”. In it, she raises some very good points about how the technology we consider “green” isn’t always. She’s right, but fortunately things may not be as dire as they seem. As computers and other electronic devices become more and more important to our economy, communication, and recreation, efforts are being made to reduce the impact of these devices. For the devices themselves, the familiar rules apply: reduce, reuse, recycle.

Reduce

The first way reduction is being accomplished is through the improved efficiency of the components. As processors become more powerful, they’re also becoming more efficient. In some cases the total electrical consumption still rises, but much more slowly than it would otherwise. In addition, improvements in manufacturing technology are getting more out of the same space. Whereas each compute core once required its own chip, nowadays it’s not unusual to have several cores on a single processor the same size as the old single-core models. Memory and hard drives have increased their density dramatically, too. In the space of about ten years, we’ve gone from “I’ll never be able to fill a 20 GB hard drive” to 20 GB drives being so small that few companies even sell them anymore.

As the demand for computing increases, it might seem unreasonable to expect any reduction in the number of computers. However, some organizations are doing just that. Earlier this year, I replaced two eight-year-old computers with a single new computer that had more power than the two old ones combined. That might not be very impressive, but consider the case of Solvay Pharmaceuticals: by using VMware’s virtualization software, they were able to consolidate their servers at a 10:1 ratio, saving $67,000 annually in power and cooling costs. Virtualization means running one or more independent (virtual) computers on the same physical hardware. It lets me, for example, test software builds on several Linux variants and two versions of Windows without needing separate physical hardware for each variation.

Thin clients are a related form of reduction. In the old days of computing, most of the work was done on large central machines, and users connected via dumb terminals: basically a keyboard and a monitor. In the late ’80s and ’90s, the paradigm shifted toward more powerful, independent desktops. Now the shift is reversing itself in some cases. Many organizations are beginning to deploy thin clients backed by a powerful central server. The thin client contains just enough computing power to boot up and connect to the server. While this isn’t suitable for every workload, it often works quite well for general office tasks. For example, my doctor has a thin client in each exam room instead of a full desktop computer. Thin clients provide reduction by extending the replacement cycle: while a desktop might need to be replaced every 3-4 years to maintain acceptable performance, thin clients can last 5-10 years or more because they don’t rely on local compute power.

Another way the impact of computing is being reduced is through software that increases the utilization of existing resources. This particular subject is near and dear to me, since I spend so much of my work life on this very issue. One under-utilized resource that can be scavenged is disk space. Apache’s Hadoop includes a distributed file system (HDFS) that can pool the disk space on a collection of machines into a single high-throughput file system. For some applications, this removes the need to purchase a dedicated file server.

In addition to disk space, compute power can be scavenged as well. Perhaps the most widely known example is BOINC, which was created to drive the SETI@home project, a very popular screen saver around the turn of the millennium. BOINC allows members of the general public to contribute their “extra” cycles to actual scientific research. Internally, both academic and financial institutions make heavy use of software like Condor to scavenge cycles. At Purdue University, over 22 million hours of compute time were harvested from unused time on the research clusters in 2009 alone. By making use of these otherwise wasted compute hours, people are getting more work done without having to purchase extra equipment.

Reuse

There’s a huge range of things computers can be used for, and that’s a great thing when it comes to reuse. Computers that have become too low-powered to use as desktops can find new life as file or web servers, networking gear, or teaching machines. Cell phones, of course, seem to be replaced all the time (my younger cousins burn out the keyboards really quickly). Fortunately, there’s a good market for used cell phones, and there are always domestic violence shelters and the like that will take donations of old ones.

Recycle

Of course, at some point all electronics reach the end of their useful lives. At that point, it’s time to recycle them. Fortunately, recycling pickup is commonly offered by sanitation departments these days, and some programs accept electronics, as do many electronics stores. Recycling electronics (including batteries!) is especially important because the materials are often toxic and often in short supply. The U.S. Environmental Protection Agency has a website devoted to the recycling of electronic waste.

It’s not just the devices themselves that are a problem. As I mentioned above, consolidating servers yields large savings in power and cooling costs. Keeping servers cool enough to continue operating is very energy-intensive. In cooler climates, outside air is sometimes brought in to reduce the need for large air conditioners. Computerworld recently had an article about using methane from cow manure to power a data center. This is old hat to the people of central Vermont.

It’s clear that the electronic world is not zero-impact.  However, it has some positive social impacts, and there’s a lot of work being done to reduce the environmental impact.  So while it may not be the height of nobility to include a note about not printing in your e-mail signature, it’s still better than having a stack of papers on your desk.

Which free virtual machine program to use?

For a while I’ve been debating whether to buy a copy of VMware Fusion for my Mac or stick with the free VirtualBox. For my needs, they compare nearly identically. The deciding factor ended up being the KVM switch I use with my Linux and Windows machines. Crazy, right?

For all platforms except Mac OS X, VMware provides VMware Server for free. Server is a pretty solid VM platform for lightweight purposes. Version 2 switched to a web-based interface, which has both advantages and disadvantages. The main advantage is that it’s very easy to connect to a VMware Server instance running on a different machine just by pointing a web browser at its address. The big problem I had with Server is that every time my mouse left the VM window, it would trigger my KVM switch (a TRENDnet TK-407K, if you’re interested) to switch to the next computer.

Now, the main reason I bought this particular switch was that it was very cheap. It doesn’t have a whole lot of fancy features; it just lets me share a single set of peripherals across four machines, which is all I really need it to do. The problem is, there doesn’t seem to be any way to turn off this automatic switching. Since I want to use my VM for actual work, having my keyboard, mouse, and monitor jump to a different computer every time I leave the VM is quite a hassle. I found a few suggestions via Google, but none of them helped.

After installing VirtualBox, I tried to reproduce the problem. I couldn’t. Since VirtualBox is free and available on Windows, Mac, and Linux, it became an easy decision. All thanks to a $60 KVM.

Hooray for vendors!

I’m too low on the food chain to get wooed by vendors very often, so when I saw that VMware was coming to campus and that lunch would be provided, I jumped all over it. Not only did I get lunch, but I also got breakfast, a USB mouse (with retractable cord!), and a notepad complete with a biodegradable pen! That’s great and all, but what’s even better is the knowledge they gave me.

I’ve used VMware’s desktop-y products: Player, Workstation, and Server. They’re useful products, especially Server, because it’s free. I only have a few servers in my department, so I have no need for the enterprise products. A few test VMs to play with are all I really need. Or so I thought. After seeing the power of ESX, I’m starting to reconsider. With VMware ESX, I could consolidate pretty much everything onto two beefy servers and stop worrying about hardware problems: if a host fails, I’ll just let ESX’s High Availability service take care of it.

Of course, it would be hard to justify the hardware expense; I can’t even convince my boss to pony up the $2,000 for my Solaris training. If I could start from scratch, though, that would be the way to go. Also, I like getting free stuff.