LISA ’11: the first half of the week

If you’ve been following me on Twitter, you know I’ve been in Boston for the USENIX Large Installation System Administration (LISA) Conference. Once again, I have the honor of serving on the conference blog team, which means I spend all day sitting in sessions and all evening writing about them. We’re halfway through now, and everything I’ve written so far is posted to the conference blog.

You can follow along with the rest of the blog team at http://blogs.usenix.org

How I scheduled a meeting

My responsibilities at work include wrangling our platoon of students. With most of them graduating at the end of this semester, I’ve preemptively hired many more to begin absorbing the knowledge necessary to keep a high-performance computing shop running. The problem with students, though, is that they have classes to attend, which can make scheduling a bit of a bear. It gets worse as the number of students goes up. Right now, I’ve got 14 separate schedules to balance.

I initially had them all register their availability using the free site whenisgood.net, but there were no times that worked for the whole group. Manually finding a pair of times that would get everyone to at least one meeting was challenging, but then I realized I could script it pretty easily. The hard part was turning each block on the calendar into either a 1 (available) or a 0 (not available). After that, it was simply a matter of trying every combination and rejecting the ones that don’t get everyone to at least one meeting.

The code below was saved as student_meeting.pl and invoked with a set of nested for loops like so:

for x in `seq 0 79`; do for y in `seq 0 79`; do perl student_meeting.pl $x $y 2>/dev/null; done; done

You may notice that each pair of workable meeting times gets printed twice: 27 and 71 work, so 71 and 27 work as well, and both get printed. The values 0–79 represent the 80 half-hour time blocks from 9 AM to 5 PM, Monday through Friday. Each person’s availability should be encoded the same way inside the script; I include just mine in the example code so that you can see what it looks like. As it currently stands, the code is horrendous and not very robust. If there’s interest, I can clean it up some and put it on GitHub. I’m not really sure if anyone else would care about it, but it might be a useful little project for someone else.
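Incidentally, the duplicates could be avoided by starting the inner loop at the outer loop’s current value, something like this (an untested tweak to the invocation above):

for x in `seq 0 79`; do for y in `seq $x 79`; do perl student_meeting.pl $x $y 2>/dev/null; done; done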

#!/usr/bin/perl
use strict;
use warnings;

# Each person's availability is an array of 80 half-hour blocks
# (9 AM to 5 PM, Monday through Friday): 1 = available, 0 = not.
my %availabilities = (
    'bcotton' => [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                  1,1,1,1,1,1,0,0,0,1,1,1,1,1,1,0,
                  1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                  1,1,1,1,1,1,0,0,0,1,0,0,0,0,1,0,
                  1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1],
    # More people would also be here
);

my @accounts = keys(%availabilities);

# The two candidate meeting blocks come in on the command line.
my $meeting1 = $ARGV[0];
my $meeting2 = $ARGV[1];
my $meeting1attendees = '';
my $meeting2attendees = '';

foreach my $person (@accounts) {
    my $goodtimes = 0;
    if ( $availabilities{$person}[$meeting1] == 1 ) {
        $goodtimes++;
        $meeting1attendees .= " $person";
    }
    if ( $availabilities{$person}[$meeting2] == 1 ) {
        $goodtimes++;
        $meeting2attendees .= " $person";
    }
    # If this person can't make either meeting, the pair is no good.
    # The die output is what gets discarded by 2>/dev/null above.
    unless ( $goodtimes > 0 ) { die; }
}

print "###\nMeeting times $meeting1 $meeting2\n";
print "Meeting 1 $meeting1attendees\nMeeting 2 $meeting2attendees\n";

Building dial information for the radar page

I thought building the dial (adjacent radar sites) data for my mobile radar site would be a tedious and entirely painful process.  As it turns out, it really wasn’t that difficult.  I knew the data was out there in some form: if you visit any of the radar sites on the NWS website, you get a nice dial in the upper-left part of the screen, but I couldn’t find a good text file with that information.  Just when I was about to start copying it by hand, I thought “maybe this is parseable”.

It turns out that the page is parseable, but it gets ugly at times.  To get a list of all the sites, I grabbed a file from Unisys.  I could extract the site, city, and state from there, so all I needed was to grab the 8 (or fewer) surrounding sites and dump them all into a Perl hash.  So I wrote a bit of code to do just that.  It’s ugly, but it’s an example of what you can do when you really, really don’t want to do something by hand.
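The original script isn’t reproduced here, but the approach looked roughly like the sketch below.  The URL and the regular expression are assumptions about the NWS page layout for the sake of illustration, not the actual code:

#!/usr/bin/perl
# Rough sketch of the scraping approach described above.  The radar page
# URL and the link pattern are stand-ins for whatever the markup really uses.
use strict;
use warnings;
use LWP::Simple;

my %dial;    # site ID => list of adjacent site IDs

foreach my $site (qw(IND LOT IWX)) {    # a hypothetical subset of sites
    my $page = get("http://radar.weather.gov/radar.php?rid=$site");
    next unless defined $page;

    # Pull site IDs out of the dial links on the page.
    my @adjacent = $page =~ /radar\.php\?rid=(\w{3,4})/g;

    # Keep each neighbor once, and don't list a site as its own neighbor.
    my %seen = ( $site => 1 );
    $dial{$site} = [ grep { !$seen{$_}++ } @adjacent ];
}

# Dump the hash in a form that can be pasted into the radar script.
foreach my $site ( sort keys %dial ) {
    print "'$site' => [qw(@{ $dial{$site} })],\n";
}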

Mobile radar page updated

After some effort, I’ve completed my to-do list for the mobile radar page and decided to call it version 1.0.  It is now available from the Mobile Weather page (http://weather.funnelfiasco.com/mobile/).  A mostly complete list of changes is below:

  • Bugfix: Fixed problem with selecting alternate products. An update in the previous version caused the site variable to not be set correctly when an alternate product was selected at the bottom of the page.
  • Added adjacent site dial. Toward the bottom of the display page, there is now a dial to select the same product from an adjacent site (if one exists).  This is really handy for times when the area of interest is right on the edge of a site’s coverage.  (See the sketch after this list for roughly how the links work.)
  • Images now have a file extension. Previously, images were saved without an extension.  This wasn’t really a problem unless you wanted to right-click on the image.  All images are now displayed with a .gif extension, even though some of the static images are actually PNG files.  This does not appear to have any adverse effects.
  • Site name now in the headline. The name of the site, as well as the ID, is now given in the headline along with the product type.
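For the curious, generating the dial links from the adjacent-sites data is conceptually simple.  This is only an illustrative sketch, assuming the %dial hash from the dial-building post and CGI.pm’s HTML shortcuts; the parameter names are made up:

#!/usr/bin/perl
# Illustrative sketch only -- not the actual radar script.
use strict;
use warnings;
use CGI qw(:standard);

# A sample entry from the adjacent-sites hash (hypothetical values).
my %dial = ( IND => [qw(LOT IWX ILN)] );

my ($site, $product) = ('IND', 'N0R');

# Emit one link per adjacent site, carrying the current product along.
foreach my $adjacent ( @{ $dial{$site} } ) {
    print a({ -href => "?rid=$adjacent;product=$product" }, $adjacent), "\n";
}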

NWS mobile radar page updated

I put a little bit of effort into the NWS mobile radar page this afternoon and am proud to announce that it has been bumped to version 0.2b.  It is now available from the Mobile Weather page (http://weather.funnelfiasco.com/mobile/).  A mostly complete list of changes is below:

  • Added site selection menu.  I didn’t think it was necessary initially, but several users have suggested it, and my own experience while on vacation proves that there probably aren’t too many people who know the ID for all 154 sites.  The added bonus is that I’ve begun support for a requested feature, which is to select adjacent radar sites.  The difficult part will be filling that information in for each site, so it will likely be a gradual roll-out.  Sites can still be selected manually, which is probably quicker if you already know the ID.
  • Added license information.  In line with the rest of the website, the code for this script is licensed under the CC-BY-NC-SA 3.0. That information is now contained in the comments as well in the code output.
  • Put site selection and product selection on separate lines.  This is a small tweak to (hopefully) improve usability.  While horizontal real estate is constrained, there’s more room to separate things vertically on most devices, so let’s go with that.  If nothing else, users are used to vertical scrolling.
  • Added spacing of other products under radar image. Another usability tweak, which should make the clicking process a little bit simpler, especially on touch-only devices.
  • Changed radar image label from <p> to <h3>.  This makes the site and product a little more visible and adds some vertical spacing to keep things from looking too jammed together.

My TTYtter configuration

It’s been many months since I found out about TTYtter, a command-line Twitter client written in Perl.  Though some users might bemoan the lack of a snazzy graphical interface, it is that very lack which appeals to me.  TTYtter places only a tiny load on system resources, which means my Twitter addiction won’t get in the way of running VMs to test various configurations and procedures.  Because it’s command-line based, I can run it in a screen session, which means I can resume my Twittering from wherever I happen to be without having to re-configure my client.

I don’t claim to be a TTYtter expert, but I thought I’d share my own configuration for other newbs.  TTYtter looks in $HOME/.ttytterrc by default, and here’s my default configuration:

#Check to see if I'm running the current version
vcheck=1
# What hash tags do I care about?
track='#Purdue #OSMacTalk #MarioMarathon'
# Colors, etc are good!
ansi=1
# I'm dumb. Prompt me before a tweet posts
verify=1
# Use some readline magic
readline=1
# Check for mentions from people I don't follow
mentions=1

Of course, there are certain times that the default configuration isn’t what I want.  When I was reading tweets in rapid-fire succession during the Mario Marathon, I didn’t want non-Mario tweets to get in the way, so I used a separate configuration file:

# Don't log in and burn up my rate limit
anonymous=1
# Find tweets related to the marathon
track=#MarioMarathon "Mario Marathon"
# Don't show my normal timeline
notimeline=1
# Colors, etc are awesome!
ansi=1
# Only update when I say so. This keeps the tweet I'm in the middle of reading
#      from being scrolled right off my screen
synch=1

There are a lot of other ways that TTYtter can be used, and I’m sure @doctorlinguist will tell me all of the ways I’m doing things wrong, but if you’re in the market for a new, multi-platform Twitter client, you should give this one a try.

The joys of doing it right

A while back, I wrote a post about why it’s not always possible to DoItRight™, and that sometimes you just have to accept it.  Today I’m here to talk about a time that I did something right and how good it felt.  Now, that’s not to say that I’m eternally screwing up (although a good quarter of my Subversion commits are fixes of a commit I previously made), but there’s a difference between making something work and making it work well.

I decided that since we have a Nagios server, I might as well have it check on the health of our Condor services.  From what I could tell, no such checks currently exist, so I decided to write my own.  Nagios checks can be very simple: run a command or two, and then return a number that means something to Nagios.  Many checks are written in bash or another shell script because they are so simple.  For my checks, I wanted to do some parsing of the command outputs to determine the state of job queues, etc.  Since that kind of work is a little heavy for a shell script, I opted to write it in Perl.  Yay Perl!
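As an illustration of how simple a check can be, here’s a minimal sketch along those lines.  It is not the actual check from this post; the condor_q parsing and the threshold are assumptions made for the example.  Nagios interprets the exit code: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.

#!/usr/bin/perl
# Minimal Nagios-style check sketch (not the real check described above).
use strict;
use warnings;

my $warn_threshold = 500;    # arbitrary example threshold

my @output = `condor_q 2>&1`;
if ( $? != 0 ) {
    print "UNKNOWN: condor_q failed to run\n";
    exit 3;
}

# Look for a summary line of the form "123 jobs; ..." -- the exact
# format of condor_q output is an assumption here.
my ($jobs) = map { /^(\d+)\s+jobs;/ ? $1 : () } @output;
unless ( defined $jobs ) {
    print "UNKNOWN: could not parse condor_q output\n";
    exit 3;
}

if ( $jobs > $warn_threshold ) {
    print "WARNING: $jobs jobs in the queue\n";
    exit 1;
}
print "OK: $jobs jobs in the queue\n";
exit 0;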

Since there aren’t any checks available, I thought my work might be useful to others in the community.  As a result, I wanted to make sure my code was respectable.  This meant I spent some time designing, coding, and testing options that we don’t want but others might find useful.  It meant putting extra documentation into the code (and eventually writing some pod before I share the code publicly).  It meant mostly following the coding style of the Linux kernel (I chose that because “why not?”).

Some readers will (correctly) note that the Linux kernel coding style does not guarantee good code.  I don’t mean to suggest that it does, but I’ve found that it forces me to think about my code more deeply than I otherwise would.  I’m not a programmer, so most of the code I write fills a small need of mine, and the quality bar is “does it do what I want it to?”  Writing something with the intent of sharing it publicly, and forcing yourself not to cut corners, can make the work more difficult, but the end result is a beauty to behold.

Perl’s CGI.pm popup_menu cares how you give it data

Last weekend when I was working on the script that mirrors and presents radar data for mobile use, I decided the less work I had to do, the better.  To that end, I tried to make heavy use of the CGI.pm Perl module.  In addition to handling the CGI input, CGI.pm also prints regular HTML tags, so you can avoid having to throw a bunch of HTML markup into your print statements.  This makes for much cleaner code and reduces the chances you’ll make a silly formatting mistake.

Everything was going well until I added the popup menu to select the radar product.  Initially I followed the example in the documentation and it worked.  As I went on, I decided that instead of having two hashes for the product information, it made sense to have a single hash include not only the product description but also the URL pattern I’d be using when it came time to mirror the image.  Unfortunately, when I tried to make that change, my popup menu no longer had the labels I wanted.

I kept poking at it for a while and finally got frustrated to the point where I decided I’d just write a foreach loop and print the HTML markup myself instead of using the CGI.pm functions.  Fortunately, I first talked to my friend Mike about it.  I sent him the code and, after a little bit of work, he realized what my problem was.  CGI.pm’s popup_menu function expects a reference to a hash for the labels, not an array (I’m not really sure why; maybe someone can explain it?).  Once that was settled, the script worked as expected and the remainder was finished in short order.
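Here’s a small self-contained example of the fix.  The product data is made up, but the -labels argument really does want a hash reference:

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# Hypothetical product data: value => display label.
my %products = (
    N0R => 'Base reflectivity',
    N0V => 'Base velocity',
);

# -values takes an array reference, but -labels must be a HASH reference
# mapping each value to its label.  Passing an array here is what broke
# my menu.
print popup_menu(
    -name   => 'product',
    -values => [ sort keys %products ],
    -labels => \%products,
);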

Sometimes, it really helps to pay attention to the data type that a function expects.

Tropical Storm Ida results

Well, the results are in for the TS Ida forecast contest.  I’m glad to say that yours truly finally won. Of course, there will be plenty of argument about the faults of the scoring equation.  You’ll get over it.  I don’t know who Dr. Free Beer is, but next time, try to get your forecast in the right hemisphere at least.  Which brings up a good point… I think I’ll edit the game code to have a field for e-mail address (it will be hidden from the public, but available to me so that I can contact players/verify edited forecasts).

Fortunately for interests along the Gulf of Mexico, Ida has been mostly a nuisance.  This is not a bad way to end what has been another rather tepid hurricane season.  Ida went extratropical very shortly after making landfall (much to the chagrin of my friend Kevin).  I wonder if it set a record for the quickest tropical-to-extratropical transition.  Not that Ida was all that tropical at landfall.

In other news, thanks to Perl’s Math::Trig module, I can now trivially calculate great-circle distances, which had long been the sticking point.  At this point, all that remains to automate the scoring is some parsing and simple arithmetic.  That’ll make it easier to get results out quickly.  I haven’t yet decided whether I should stop producing static results pages and let the CGI generate the results on the fly, or continue having separate, static pages.  I might go with the former to conserve disk space.  I have no limit on cycles, so long as I don’t take down my provider’s server.  We shall see.  The first step is to actually write the code like I said I would two years ago.
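For anyone curious, the Math::Trig part really is trivial.  This is a quick sketch with arbitrary example coordinates, not actual contest data:

#!/usr/bin/perl
use strict;
use warnings;
use Math::Trig qw(great_circle_distance deg2rad);

# great_circle_distance() wants spherical coordinates: theta is the
# longitude in radians, phi is pi/2 minus the latitude.
sub to_spherical {
    my ($lat, $lon) = @_;
    return ( deg2rad($lon), deg2rad(90 - $lat) );
}

my @forecast = to_spherical(29.5, -88.0);    # hypothetical forecast point
my @landfall = to_spherical(30.2, -88.1);    # hypothetical landfall point

# Passing the Earth's mean radius (km) as rho gives a distance in km.
my $km = great_circle_distance(@forecast, @landfall, 6371);
printf "Distance: %.1f km\n", $km;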