The changing role of IT

A year and a half ago, my director and I were having a discussion about my career plans and the IT landscape in the coming years.  “IT is the next blue-collar industry,” he said.  I agreed, but didn’t give the matter much consideration.  Recent discussions on /. and Standalone Sysadmin have brought the subject back to the front of my mind.

There was a time when computer knowledge was a relatively rare, and thus valuable, asset.  When I was in middle school, I was one of the few people I knew who had AOL. Many had computers, but few were connected to any network.  In shop class, the teacher, a fellow student and I would share our experiences with the performance of the different access numbers.  I don’t claim to be anywhere near the cutting edge, but at the time, I was one in a relatively small club.

According to the Census Bureau, 62% of US households had Internet access in the home in 2007.  This number continues to rise, and it seems most middle-class families are online.  People my age and younger have grown up using computers.  Older people, including my parents, have acquired computer skills at home and/or at work.  As a result, computer knowledge, at least from the desktop perspective, has become a commodity.  There’s a large population that can e-mail, browse the web, manage photos, add printers, etc. — at least to some degree.

Because more users have the ability to manage the routine tasks, the people paid to do these tasks lose stature.  Of course, not all IT staff do these tasks.  What we’ll see is the separation of IT from a monolithic entity into levels of expertise/responsibility.  Just like not everyone in the automotive industry is a mechanic, not everyone in IT is a technician either.

Help desk and other technician-type positions will continue to become more blue collar, and I think that’s appropriate.  Systems and network admins, architects, and other higher-level positions are still, in my opinion, professional positions.  In environments where that’s not the case, it is up to those employees and their managers to make the case.

Filename extensions can cause problems

Most people don’t give much thought to filename extensions, even though nearly every modern computer user has them in the back of their mind.  Users have come to understand that .pdf means a file is in the Portable Document Format, or that .ppt is a Microsoft PowerPoint file.  DOS users recall that files ending in .exe, .com, or .bat are executable.  For unknown extensions, there’s the very helpful filext.com website.  There’s no doubt that filename extensions can provide helpful information, but here’s the issue: not all platforms care about them.  That’s not a problem in all cases, but there are times when it makes life miserable.

Filename extensions can be just another part of the filename, or they can be an entirely separate namespace.  DOS introduced the idea of extensions to the general public.  In those days, a file had a name of up to eight characters and an extension of up to three.  This “8.3” convention persisted into Windows, and is still commonly seen on Windows system files, even though it is no longer necessary.  Unix-based systems, such as Mac OS X and Linux, have no feelings about extensions — they’re certainly not required, but some applications make use of them.  The dominance of Windows on the desktop has encouraged application writers to care deeply about extensions, and extensions do help when you’re trying to find files of the right type.

Here’s where it becomes problematic.  Because some systems don’t care about extensions, it’s easy to end up with filenames that have no extension at all.  Then, when you move to a system that does care, things don’t work as you expect.  Here’s a fine example: my wife needed to have a few pictures printed, so she loaded them onto an SD card and took them to the store.  When she got there, the photo system couldn’t find any of the pictures.  As it turns out, she had saved them without the .jpg extension, so even though they were valid JPEG files, the system didn’t try to load them.

Now, most photo software, cameras, etc. will add the extension out of tradition (and because that’s what people expect).  However, manually renaming files after the fact can leave the extension off.  So what’s the solution?  Well, we’ll never get all platforms to agree on what filename extensions are and how they should be treated.  The only answer, then, is for applications to focus not on the extension but on the contents of the file.  If applications used methods similar to the Unix file command to determine file type, such problems could be avoided.
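To illustrate, here’s a rough sketch of how a shell loop and the file command could have repaired pictures like my wife’s by looking at the contents instead of the name.  The directory path is just a placeholder, and it assumes a version of file that understands the --mime-type option:

# Add a .jpg extension to any extension-less file that is actually a JPEG,
# judging by its contents (magic bytes) rather than its name.
cd /media/sdcard/photos    # placeholder path to the pictures
for f in *; do
    [[ "$f" == *.* ]] && continue    # skip files that already have an extension
    if [[ "$(file -b --mime-type "$f")" == "image/jpeg" ]]; then
        mv "$f" "$f.jpg"
    fi
done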

Happy holidays, now go away!

Seriously.  It’s Christmas Day. Why are you here? Even if you are adamantly opposed to Christmas, you can still spend this time with your family.  Even if you are adamantly opposed to your family, you can still spend this time giving charitable service to your fellow man.  Even if you are adamantly opposed to your fellow man, there’s gotta be an IHOP open somewhere.  Don’t try to pretend that you’re adamantly opposed to pancakes, it’s not possible.

So from all of me here at FunnelFiasco.com, have a happy Christmas, Hanukkah, Boxing Day, Kwanzaa, Atheist Children Get Presents Day, Festivus, or whatever solstice-related holiday you celebrate (if any).  Check back again on Monday and I’ll have actual content for you.

Secure file transfer from (potentially) anonymous users

There are many cases in which you want to share files with someone “on the outside.”  E-mail attachments can be useful, but once file sizes start to get large, they become more and more problematic.  Of course, if you only want to share files out, there’s the simple method of setting up a web server.  But what if you want to receive files from others?  That’s when it gets more complicated.  One option is to find code for a file uploader that runs on a website, but that’s a bit heavy.  What if you just want to receive a few large (or many small) files from a friend?  Perhaps you’re providing an off-site backup, or you really want to see the 15,000 pictures they took on their recent trip to Conway, Arkansas.  Either way, there’s a simple way to set up a place for friends to drop off files, without having to set up a separate account for each of them, and without having to use the insecure File Transfer Protocol (FTP).

The first step is to create a generic user account that the outsiders will use.  In this case, we assume that you don’t want this account to have an interactive shell, so we’ll use /sbin/nologin as the shell.  We’ll call the user ‘rsyncft’ (for “rsync file transfer”, although you could use whatever you want) and put the home directory in /var (again, you can put it wherever it makes sense for you).  When it’s all put together, the command to add the user will look something like this:

useradd -m -U -c "rsync user" -d /var/rsyncft -s /sbin/nologin rsyncft

After you’ve added the user, the next step is to add yourself to the ‘rsyncft’ group that useradd created.  This step is optional, but it will make it easier for you to get files in and out of the directory.  (See the sidebar on group membership for more discussion.)  Next, you need to make sure you’ve got the lim_rsync script.  I got it from some colleagues and shared it in a previous post, but here it is again:

#!/bin/bash

PATH="/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin"
TERM='vt100'

# Make sure a command is specified (e.g. we're not trying to actually log in)
if [[ $SSH_ORIGINAL_COMMAND == "" ]]; then
 exit 1
fi

# Extract the first two words of the command
TEST_COMMAND=`echo $SSH_ORIGINAL_COMMAND | awk -F" " '{ print $1 " " $2 }'`

# If the command starts with "rsync --server", the client is making an rsync
# transfer (in either direction).  Allow that.  Deny everything else.
if [[ $TEST_COMMAND == "rsync --server" ]] || [[ $TEST_COMMAND == "/usr/local/bin/rsync --server" ]]; then
 exec $SSH_ORIGINAL_COMMAND
else
 exit 2
fi

The script should work on both Linux and Solaris.  I put it in /usr/local/bin, but you can put it anywhere you want, provided the rsyncft account can run it (but not write to it!).  The point of the script is to verify that the incoming connection is an rsync transfer and nothing else.  So how do we hook it up to SSH?  The last step is to add your friend’s SSH public key (if you’re not sure what I’m talking about, see this post from the Standalone Sysadmin blog).  To do this, you’ll need to create the file ~rsyncft/.ssh/authorized_keys (you may need to create the .ssh directory, too) and include

command="/usr/local/bin/lim_rsync",no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding

followed, on the same line and separated by a space, by the public key contents.  If you know where the connections will be coming from, you can further secure the key by including a from= option (e.g. from="*.myemployer.com" or from="192.168.1.10").
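For reference, the server-side setup described so far might look something like this when typed out (paths as above; the key shown in the comment is obviously a placeholder):

# Install the script where the rsyncft account can run it but not modify it
install -o root -g root -m 0755 lim_rsync /usr/local/bin/lim_rsync

# Create the .ssh directory with the ownership and permissions sshd expects
mkdir -p ~rsyncft/.ssh
chown rsyncft:rsyncft ~rsyncft/.ssh
chmod 700 ~rsyncft/.ssh

# Each authorized_keys entry is one long line: the command= options, a space,
# then the public key, e.g.
#   command="/usr/local/bin/lim_rsync",no-pty,... ssh-rsa AAAA... friend@example
touch ~rsyncft/.ssh/authorized_keys
chown rsyncft:rsyncft ~rsyncft/.ssh/authorized_keys
chmod 600 ~rsyncft/.ssh/authorized_keys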

That’s it for the setup; your friends can now copy files to your computer.  If you need to have multiple people connect, just add more keys to the authorized_keys file.  By playing with the directory permissions, you can also control whether files can be read back out or only written, depending on what it is you’re trying to accomplish.  The people connecting to you will need to use SSH to make the rsync connection, like this:

rsync -e "ssh" file_to_copy rsyncft@your.computer.net:/var/rsyncft

Of course, ‘your.computer.net’ should be replaced with the DNS name or IP address of your computer, and you’re free to use other rsync options to modify the behavior.  Now what if you want potentially anonymous users to be able to give you files?  Perhaps you have some kind of scanner that runs on unmanaged machines in your department, and you need to collect the output.  In that case, you can create an SSH key that has no passphrase, include its public key in rsyncft’s authorized_keys file, and point ssh at the private key:

rsync -e "ssh -i /path/to/private_key" file_to_copy rsyncft@your.computer.net:/var/rsyncft

Sidebar: Being smart about group membership

When I wrote the first draft of this post, I suggested adding rsyncft to a group that you are already a member of, for example “users”.  Fortunately, I realized before I published that this is a Very Bad Idea™.  If you were to do it this way, any group-writable files owned by that group could be modified by the outside users.  This is a really terrible security risk.  A better alternative is to let useradd create a new group, and then add yourself to it.  Then only the files you deliberately open up to that group are available.  A much heavier option would be to run this in a VM so that the outside user has no way to get at your files.  That is more secure, but it’s overkill for most users and a bit more difficult to set up (not to mention more system overhead).
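If you go the new-group route, adding yourself is a one-liner (replace ‘myuser’ with your own login; you’ll need to log out and back in before the new membership takes effect):

# Append the rsyncft group to myuser's supplementary groups
usermod -a -G rsyncft myuser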

Twitter made me feel important

I know it’s hard to imagine Twitter serving a useful purpose, but it did on Monday.  Not only did it make me feel useful, but I was able to pass along information.  Allow me to set the scene.  Purdue University announced last month the need for a $30 million cut in the budget to address a “structural deficit.”  Recently, the governor announced a cut of $150 million in higher education funding for the remainder of the biennium.  Because of cuts and RIFs earlier this year, there are a lot of questions among the faculty and staff about what these cuts might bring.  In order to address some of the concerns, President France Cordova held an open forum on Monday, joined by Provost Randy Woodson and Executive Vice President and Treasurer Al Diaz.

The South Ballroom in the Purdue Memorial Union was standing-room-only, and there were still many people who could not attend.  In order to keep my colleagues (and other followers interested in the state of Purdue’s finances) informed, I live-tweeted during the 45-minute discussion.  As I expected, it was rather difficult to summarize important points in 140 characters or less while listening for the next useful piece of information.  What I didn’t expect was how much fun it would be.  I enjoyed trying to meet the challenge, and I was encouraged by the fact that I got a few follow-up questions from followers (and, amazingly, no one complained).  I’m not about to quit my job to become a full-time Twitter reporter, but I do hope I get a chance to do this again.

Book review: The Green Revolution

It came as a bit of a surprise that there’s an entire series of mystery novels set at the University of Notre Dame.  It came as a greater surprise that these novels were written by a long-serving member of the Notre Dame faculty.  The Green Revolution is the 12th Notre Dame mystery novel written by Ralph McInerny, and one of more than forty mystery novels he has published.  As a loyal Boilermaker, I found the premise of this novel most pleasing.  The Green Revolution takes place during the 2007 football season, one in which Notre Dame did not have net positive rushing yards until the third game of the season.  As the season progresses, more and more Notre Dame fans begin calling for the ouster of the football coach, and some faculty move to end the football program entirely.

The apparent murder of one of the coach’s harshest critics is ostensibly the focus of the book, but McInerny seems to spend a good portion of the novel discussing Notre Dame for Notre Dame’s sake.  Certainly there are some references that would only be understood by people more familiar with the institution than I am.  As a mystery novel, though, it works quite well.  The identity of the killer remained unknown to me until the very end, but looking back, it all made sense.  The writing style was enjoyable, even when the references were beyond me.  No doubt I will pick up another McInerny book the next time I’m in the mood for a mystery.

The first few weeks with the N900, part 2

This is part 2 of my review of the N900.  Part 1 includes “Unboxing”, “The screen”, “Connectivity”, “Web browsing”, and “The camera and other multimedia goodness.”  Part 2 includes “E-mail, calendar, contacts, and instant messaging”, “Other applications”, and “The phone.”

The first few weeks with the N900, part 1

Three months to the day after I first wrote about the N900, Nokia’s newest smartphone ended up on my desk.  Since I’ve talked so much about it on Twitter (and since I had to lobby my wife aggressively to let me buy it), I think I owe the world my review.  I get the feeling that this review will end up focusing on a lot of the negatives, but don’t misunderstand me: I really like this phone.  The N900 is a great phone with a lot of potential, but it is currently an early adopter’s phone.  I’m generally not one to play the early adopter game, but this time around I couldn’t help myself.

Google DNS: A rare miss?

I’ve been a big fan of Google’s services for many years.  Gmail, Google Calendar, Google Talk, Google Voice, and Google Docs are all a regular part of my day.  (Admittedly, I haven’t quite figured out how I’ll use Google Wave, but I’m sure there’s a use for it somewhere.)  So when I heard about Google offering a DNS service, I was very interested.  DNS (the Domain Name System) is a vital part of the Internet.  It is what allows people to visit Funnel Fiasco without having to remember that the IP address is 72.52.153.36, or to visit www.facebook.com without typing in 69.63.181.11.

A while ago, I switched from using my ISP’s DNS service to OpenDNS.  OpenDNS gives users the option to filter domains by content, which is a somewhat useful tool for parents and businesses.  Unfortunately, OpenDNS, like most ISP DNS services, returns a search page when a domain isn’t found.  Sure, that might be handy for web browsing, but other services expect to be told a domain doesn’t exist when it doesn’t exist.  Google says its service returns proper “no such domain” (NXDOMAIN) responses for non-existent domains.
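If you want to see the difference for yourself, dig makes it easy to check what a resolver says about a name that shouldn’t exist (the hostname below is just a made-up example):

# A well-behaved resolver reports "status: NXDOMAIN" for a bogus name; a
# redirecting resolver reports NOERROR and hands back an A record instead.
dig @8.8.8.8 this-name-should-not-exist.example.com +noall +comments
dig @208.67.222.222 this-name-should-not-exist.example.com +noall +comments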

Before I made the switch, I decided to investigate which DNS service gave me the fastest responses.  I tested 8 DNS servers from 4 different services (Google, Comcast, OpenDNS, and Level3) at different times over the past few days.  The final result surprised me: Google’s service was slower than both Level3 and OpenDNS, and slower than one of the two Comcast servers I tested.  Box plots are below, although it seems some of the calculations are off (for example, a DNS resolve time of less than 0 ms is not reasonable).
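I won’t pretend this is the exact harness I used, but a quick-and-dirty way to sample resolve times for one hostname against one server with dig looks something like this:

# Take 20 samples of the resolve time (in milliseconds), one number per line.
SERVER=8.8.8.8
NAME=google.com
for i in $(seq 1 20); do
    dig @$SERVER $NAME +noall +stats | awk '/Query time:/ { print $4 }'
    sleep 1
done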

[Box plot: DNS resolve times for google.com]

[Box plot: DNS resolve times for funnelfiasco.com]

[Box plot: DNS resolve times for facebook.com]

Average hostname resolve times in milliseconds

  Server                         google.com   funnelfiasco.com   facebook.com
  Google #1 (8.8.8.8)                    51                 41             39
  Google #2 (8.8.4.4)                    42                 40             37
  Level3 #1 (4.2.2.1)                    25                 46             26
  Level3 #2 (4.2.2.2)                    25                 26             25
  OpenDNS #1 (208.67.222.222)            25                 37             25
  OpenDNS #2 (208.67.222.220)            26                 26             29
  Comcast #1 (68.87.72.130)              24                 41             25
  Comcast #2 (68.87.77.130)              62                 87             63

So what’s the conclusion?  Well, it looks like the Level3 servers (4.2.2.1 and 4.2.2.2) are the fastest.  Tests by intMain.net support my own conclusions. Google’s DNS service might be faster for some people, but not for everyone.  If Google adds more servers, that might change.  In the meantime, it looks like I have some resolv.conf edits to make.
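For the curious, that edit is just a matter of listing the preferred servers in order in /etc/resolv.conf, something like:

# /etc/resolv.conf -- Level3 servers first, based on the tests above
nameserver 4.2.2.1
nameserver 4.2.2.2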

(P.S. Box plots created thanks to software from Vertex42.com)