Upgrading Windows in VirtualBox

For work, I have the occasional need to use Windows. I started out with a Windows XP virtual machine, and when that went end-of-life, I upgraded it to Windows 7. The “upgrade” process was really more of a reinstall that preserved your old data. I had to reinstall many of the applications (including Python, though I didn’t realize it at the time).

I don’t pay much attention to the Windows ecosystem, but I had heard that a lot of unseen improvements took place in 7 and beyond which should make the upgrade process easier, so when it was time to upgrade because Windows 7 was (past, oops!) end-of-life, I wasn’t too worried.

Windows has a nice nag every time you log in saying “upgrade to Windows 10!” so I did. But it didn’t like the VirtualBox video driver. I tried reinstalling the Guest Additions, but that didn’t work. I tried uninstalling the driver, but Windows helpfully reinstalled it after a reboot. A forum post suggested I should enable 3D acceleration. Still no luck.

What eventually worked was to update to VirtualBox 5.0.12 (I had been on .10) and then install the latest Guest Additions. Instead of using the “upgrade me!” pestering, I had to download the Windows 10 media creation tool and use that to perform the upgrade.

Once I got those steps figured out, the upgrade process was pretty painless (if a little slow). Everything seemed to work fairly well, though I haven’t given it an in-depth trial yet. I do like the return to a more traditional UI: the tiles of Windows 8 work great on a phone, but I don’t like them on a desktop (and they’re even worse on a server).

The strangest bug

Okay, this is probably not the strangest bug that ever existed, but it’s certainly one of the weirdest I’ve ever personally come across. A few weeks ago, a vulnerability in OS X was announced that affected all versions but was only fixed in Yosemite. That was enough to finally get me to upgrade from Mavericks on my work laptop. I discovered post-upgrade that the version of VMWare Fusion I had been running does not work on Yosemite. Since VMWare didn’t offer a free upgrade path, I decided not to spend the company’s money and switched to VirtualBox instead (see sidebar 1).

Fast forward to the beginning of last week when I started working on the next version of my company’s Risk Analysis Pipeline product. One of the executables is a small script that polls CycleServer to count the number of jobs left in a particular submission and blocks the workflow until the count reaches 0. It’s been pretty reliable since I first wrote it a year ago, and hasn’t seen any substantial changes.
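In outline, the script is just a poll-and-sleep loop. Here’s a minimal sketch in Python 2 (like the rest of the tooling mentioned here); the endpoint path, response format, and function names are my own placeholders, not the actual product code or the real CycleServer API:

import time
import urllib2

def count_remaining_jobs(base_url, submission_id):
    # Hypothetical endpoint; the real CycleServer API path is different.
    url = "%s/api/submissions/%s/job_count" % (base_url, submission_id)
    return int(urllib2.urlopen(url).read().strip())

def block_until_done(base_url, submission_id, poll_interval=60):
    # Block the calling workflow until no jobs remain in the submission.
    while count_remaining_jobs(base_url, submission_id) > 0:
        time.sleep(poll_interval)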

Indeed, it saw no changes at all when I picked up development again last week, but I started seeing some unusual behavior. The script would poll successfully six times and then fail every time afterward. After adding some better logging, I saw that it was failing with HTTP 401, which didn’t make sense because it sent the credentials every time (see sidebar 2). I checked the git log to confirm that the file hadn’t changed. I spent some time fruitlessly searching for the error. I threw various strands of spaghetti at the wall. All to no avail.

I knew it had to work generally, because it’s the sort of thing that would be very noticeable to our customers, particularly since this sort of failure would mean the workflow never completed. I wondered if something had changed when I switched from VMWare Fusion to VirtualBox. After all, I did change the networking setup a bit at that point, but in that case I would expect the failure to be consistent: to always fail, not to work six times before failing.

So I tried the patch release I had published a few days before. It worked fine, which ruled out my local test server being broken. Then I checked out the git tag of that patch release and recompiled. The rebuild failed in the same way. This was very perplexing, since I had released the patch version after the OS X upgrade and resulting VM infrastructure changes.

When I ran out of ideas, a colleague suggested reinstalling Python. I re-ran the Python installer and built again. Suddenly, it worked. I’m at a loss to explain why. Maybe there was something different enough about the virtualized network devices to confuse py2exe when it built. Maybe there’s some sort of counter in urllib2 that implements the plannedObsolescence() method. Whatever it was, I decided I don’t really care. I’m just glad it works again.

Sidebar 1

The conversion process was pretty simple. For reasons that I no longer remember, I had my VMWare disk images in 2 GB slices, so I had to combine them first. VirtualBox supports vmdk images, though, so it was quick to get the new VMs up and running. My CentOS VM worked with no effort. My Windows 7 VM was less happy. I ended up having to reinstall the OS in order for it to boot in anything other than recovery mode. It’s possible that I failed to correctly install something at that time, but the timeline doesn’t support that. In any case, I’m always impressed by the way my virtual and physical Linux machines seem to handle arbitrary hardware changes with no problem.
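For the curious, the slice-combining step can be done with the vmware-vdiskmanager command-line tool that ships with VMWare. Something like the following should work (the file names are placeholders; -r converts the source disk and -t 0 writes a single growable image):

vmware-vdiskmanager -r sliced.vmdk -t 0 combined.vmdk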

Sidebar 2

I also learned something about how the HTTP interactions work. I’d never had much reason to pay attention before, but it turns out the first call to the REST API is met with a 401 challenge; the client then resends the request with authentication and gets a 200. This probably comes as no surprise to anyone who has dealt with HTTP authentication, but it was a lesson for me. Never stop learning.
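You can see this handshake in urllib2, which is presumably what my script was doing under the hood. A minimal sketch, with placeholder URL and credentials:

import urllib2

# urllib2 doesn't send credentials preemptively: it waits for the server's
# 401 challenge, then retries the request with an Authorization header.
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "http://cycleserver.example.com",
                          "someuser", "somepassword")  # placeholders
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
response = opener.open("http://cycleserver.example.com/some/endpoint")
print response.getcode()  # 200 once the 401/retry handshake succeeds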

Sidebar 3

I didn’t mention this in the text above, so if you made it this far, I applaud your dedication to reading the whole post. I spent the first half of my time on this problem ruling out a self-inflicted wound: I had already lost a fair amount of time tracking down a bug I introduced while trying to de-lint one of the modules. More on that in a later (and hopefully shorter) post.

Upgrading to Fedora 14 with yum

Fedora 14 was released two weeks ago.  I normally wait a day or two to install to let the mirrors cool down, but that put the target date right before I left for the LISA conference.  Like any good sysadmin, I’m sufficiently paranoid to not upgrade systems right before I leave, even if said system is only my own desktop.  So now that I’m back, I decided today was a good day to upgrade my home desktop.

As in the past, the recommended method was not for me — I opted to go with the Yum-based upgrade.  For Fedora 14, there’s a new feature that significantly reduces the amount of effort involved in live upgrades.  Using the --releasever argument and the distro-sync command, it’s now possible to upgrade without having to manually install the updated release RPMs.  As Chris Siebenmann wrote, it can also be used to downgrade components.
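For reference, after importing the Fedora 14 GPG key, the core of the process is roughly the following (check the current instructions before copying this):

yum clean all
yum --releasever=14 distro-sync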

I started the process by doing what the instructions said, but as always, it didn’t go quite perfectly.  After a little while, I noticed that yum was stuck in a seemingly infinite loop of dependency resolution.

--> Running transaction check
--> Processing Dependency: texlive = 2007-51.fc13 for package: texlive-utils-2007-51.fc13.x86_64
--> Processing Dependency: texlive-dvips = 2007-51.fc13 for package: texlive-utils-2007-51.fc13.x86_64
---> Package xorg-x11-drv-nvidia.x86_64 1:260.19.12-1.fc13 set to be erased
---> Package xorg-x11-drv-nvidia-libs.x86_64 1:260.19.12-1.fc13 set to be erased
--> Processing Dependency: /usr/bin/dvips for package: openoffice.org-ooolatex-4.0.0-0.7.beta2.fc12.1.noarch
--> Processing Dependency: /usr/bin/texconfig-sys for package: linuxdoc-tools-0.9.66-5.fc13.x86_64
--> Finished Dependency Resolution

I noticed that it seemed to be related to either the nvidia drivers or the LaTeX package, so I opted to remove the drivers first:

yum remove xorg-x11-drv-nvidia

That made the loop a bit shorter, but it didn’t quite fix it, so I removed the LaTeX package:

yum remove `rpm -q --whatprovides /usr/bin/latex`

Then yum reported there were a few broken packages it couldn’t fix, so I removed them, too:

yum remove VirtualBox-3.1 perl-MythTV system-config-display python-MythTV

Finally, the upgrade was on its way.  When it finished and GRUB was installed, I rebooted into a nice, shiny Fedora 14 install.  (Note: to re-install VirtualBox, you’ll need to install the VirtualBox-3.2 package.)

A quick summary of green-er computing

Last week a Twitter buddy posted a blog entry called “E-Waste not, Want not”.  In it, she raises some very good points about how the technology we consider “green” isn’t always.  She’s right, but fortunately things may not be as dire as they seem.  As computers and other electronic devices become more and more important to our economy, communication, and recreation, efforts are being made to reduce the impact of these devices.  For the devices themselves, the familiar rules apply: reduce, reuse, recycle.

Reduce

The first way that reduction is being accomplished is the improved efficiency of the components.  As processors become more powerful, they’re also becoming more efficient.  In some cases, the total electrical consumption still rises, but much more slowly than it would otherwise.  In addition, research and improvements in manufacturing technology are getting more out of the same space.  Whereas each compute core used to be on a separate chip, nowadays it’s not unusual to have several cores on a single processor the same size as the old single-core models.  Memory and hard drives have increased their density dramatically, too.  In the space of about 10 years, we’ve gone from “I’ll never be able to fill a 20 GB hard drive” to 20 GB drives being so small that few companies sell them anymore.

As the demand for computing increases, it might seem unreasonable to expect any reduction in the number of computers.  However, some organizations are doing just that.  Earlier this year, I replaced two eight-year-old computers I had been using with a single new computer that had more power than the two old ones combined.  That might not be very impressive, but consider the case of Solvay Pharmaceuticals: by using VMWare’s virtualization software, they were able to consolidate their servers at a 10:1 ratio, resulting in a $67,000 annual savings in power and cooling costs.  Virtualization involves running one or more independent computers on the same physical hardware.  This means, for example, that I can test software builds on several Linux variants and two versions of Windows without having to use separate physical hardware for each variation.

Thin clients are a related reduction.  In the old days of computing, most of the work was done on large central machines and users would connect via dumb terminals: basically a keyboard and monitor.  In the late ’80s and ’90s, the paradigm shifted toward more powerful, independent desktops.  Now the shift is reversing itself in some cases.  Many organizations are beginning to use thin clients backed by a powerful central server.  The thin client contains just enough power to boot up and connect to the server.  While this isn’t useful in all cases, for general office work it is often quite suitable.  For example, my doctor has a thin client in each exam room instead of a full desktop computer.  Thin clients provide reduction by extending the replacement cycle.  While a desktop might need to be replaced every 3-4 years to keep an acceptable level of performance, thin clients can go 5-10 years or more because they don’t require local compute power.

Another way that the impact of computing is being reduced is by the use of software to increase the utilization of existing resources.  This particular subject is near and dear to me, since I spend so much of my work life on this very issue.  One under-utilized resource that can be scavenged is disk space.  Apache’s Hadoop software includes the ability to pool disk space on a collection of machines into a high-throughput file system.  For some applications, this can remove the need to purchase a dedicated file server.

In addition to disk space, compute power can be scavenged as well.  Perhaps the most widely known example is BOINC, which was created to drive the SETI@Home project, a very popular screen saver around the turn of the millennium.  BOINC allows members of the general public to contribute their “extra” cycles to actual scientific research.  Internally, both academic and financial institutions make heavy use of software products like Condor to scavenge cycles.  At Purdue University, over 22 million hours of compute time were harvested from the unused time on the research clusters in 2009 alone.  By making use of these otherwise wasted compute hours, people are getting more work done without having to purchase extra equipment.

Reuse

There’s such a wide range of what computers can be used for, and that’s a great thing when it comes to reusing.  Computers that have become too low-powered to use as desktops can find new life as file or web servers, networking gear, or teaching computers.  Cell phones, of course, seem to be replaced all the time (my younger cousins burn out the keyboards really quickly).  Fortunately, there’s a good market for used cell phones, and there are always domestic violence shelters and the like that will take donations of old phones.

Recycle

Of course, at some point all electronics reach the end of their useful lives.  At that point, it’s time to recycle them.  Fortunately, recycling in general is commonly offered by sanitation services these days.  Some of those accept electronics, as do many electronics stores.  Recycling of electronics (including batteries!) is especially important because the materials are often toxic and often in short supply.  The U.S. Environmental Protection Agency has a website devoted to the recycling of electronic waste.

It’s not just the devices themselves that are a problem.  As I mentioned above, consolidating servers results in a large savings in power and cooling costs.  Keeping servers cool enough to continue operating is very energy-intensive.  In cooler climates, outside air is sometimes brought in to reduce the need for large air conditioners.  ComputerWorld recently had an article about using methane from cow manure to power a datacenter.  This is old hat to the people of central Vermont.

It’s clear that the electronic world is not zero-impact.  However, it has some positive social impacts, and there’s a lot of work being done to reduce the environmental impact.  So while it may not be the height of nobility to include a note about not printing in your e-mail signature, it’s still better than having a stack of papers on your desk.

Setting up a new Mac

As part of my new job, I got a shiny new 13″ MacBook Pro.  Even though I’m quite a Linux fanboy, I really enjoy the quality of the hardware and OS X. However, it isn’t perfect.  There are a lot of applications that I like to have available.  Since I have nothing better to talk about, I figured I’d list them here:

  • Adium — one of the best instant messenger clients I’ve ever used.  It has support for just about every major IM protocol except…
  • Skype — I don’t really use it for IM, but it’s great for audio and video calls.
  • Firefox — I prefer it to the Safari browser that ships with OS X.  It happens.  And with that comes…
  • Xmarks — a browser plug-in that syncs bookmarks.  It comes in very handy when you use multiple computers.  So does…
  • Dropbox — allows you to synchronize arbitrary files between multiple computers.  I mostly use it for configuration files (e.g. .bashrc, .screenrc)
  • VirtualBox — sometimes you actually need to use another OS to do some important task (like play Sim City)
  • DOSBox — is good for playing some of the older games that I like
  • Chicken of the VNC — I’ve played with several VNC clients for Mac, and this one is the best.
  • iTerm — hands-down better than the default Terminal.app
  • ZTerm — a program to make serial connections.  I used it a fair bit at my old job, but I don’t anticipate needing it much in my new one.
  • Colloquy — an Internet Relay Chat client
  • VLC — a media player that will play just about anything
  • Grand Perspective — a program that shows a graphical representation of disk usage, allowing you to find the files that are chewing up all the space on your hard drive.

Which free virtual machine program to use?

For a while I’ve been debating whether I should buy a copy of VMWare Fusion for my Mac or stick with the free version of VirtualBox.  For my needs, they compare nearly identically.  The deciding factor ended up being the KVM switch I use with my Linux and Windows machines.  Crazy, right?

For all platforms except Mac OS X, VMWare provides VMWare Server for free.  Server is a pretty solid VM platform for lightweight purposes.  Version 2 switched to a web-based interface, which has advantages and disadvantages.  The main advantage is that it is very easy to connect to a VMWare Server instance running on a different machine just by pointing a web browser at its address.  The big problem I had with Server is that every time my mouse left the VM window, it would trigger my KVM switch (a TRENDnet TK-407K, if you’re interested) to switch to the next computer.

Now, the main reason I bought this particular switch was that it was very cheap.  It doesn’t have a whole lot of fancy features; it just lets me share a single set of interfaces across 4 machines, which is all I really need it to do.  The problem is, there doesn’t seem to be any way to turn off this automatic switching.  Since I want to use my VM for actual work, having my keyboard, mouse, and monitor switch to a different computer every time I leave the VM is quite a hassle.  I found a few suggestions via Google, but none of them helped.

After installing VirtualBox, I tried to reproduce the problem with it.  I could not.  Since VirtualBox is free and available on Windows, Mac, and Linux, it really became an easy decision.  All thanks to a $60 KVM.