Linux needs fat binaries

Last month, a /. story was posted discussing the idea of a “fat binary” for Linux. If you know what I’m talking about, skip to the next paragraph. You’re still here? Okay, here’s my quick, un-researched explanation. A program starts out written in a programming language (for example, C) that is human-readable. It then gets compiled into a binary, which the computer understands. Different types of processors speak different languages, so a binary for one type of processor might not work on another. When Apple switched from PowerPC (PPC) processors to Intel, they introduced the “Universal binary”, which contained code compiled for both the PPC and Intel CPUs; the correct code was automatically loaded when the application was run.
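To make this concrete: every Linux binary is an ELF file, and its header records exactly which processor it was compiled for. Here’s a small Python sketch (my own illustration, not part of FatELF) that reads that field; the machine-code table is a hand-picked subset of the values defined in the ELF specification:

```python
import struct

# A few ELF e_machine codes mapped to names (subset; the ELF spec defines many more).
MACHINES = {3: "x86", 20: "PowerPC", 40: "ARM", 62: "x86-64", 183: "AArch64"}

def elf_machine(path):
    """Return the target architecture recorded in an ELF binary's header."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError(f"{path} is not an ELF binary")
    # Byte 5 of the identification block gives endianness (1 = little-endian);
    # e_machine is a 16-bit field at byte offset 18.
    endian = "<" if header[5] == 1 else ">"
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return MACHINES.get(machine, f"unknown ({machine})")

print(elf_machine("/bin/sh"))
```

Run that against a binary built for a different CPU and you’ll see a different name come back, which is the whole problem: a plain ELF file carries exactly one architecture. A fat binary like FatELF’s is essentially a wrapper holding several of these, one per architecture, with the loader picking the right one.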

There are several arguments against the idea of fat binaries. Among these are that fat binaries are a waste of disk space and bandwidth, that they aid closed-source software, and that they are just plain unnecessary. Like anything else, there are situations where fat binaries are not appropriate, and times when they are. It is the latter that the FatELF project seems to be focusing on.

To me, the disk and bandwidth argument is the most compelling. On most modern systems hard drive space is cheap and plentiful, but there are exceptions: mobile phones, netbooks, “classic” hardware. The bigger disk concern, more than hard drive space, is the space on CDs and DVDs. Because FatELF combines the binaries for each supported platform, the files grow in size rather quickly. While this means users no longer have to know what kind of processor they have in order to download the install media, the download itself becomes a much bigger task. For example, the Fedora Project supports three architectures, so in the worst case you’d need to download 18 CD images or two DVD images in order to have the full install media. For people with a bandwidth cap, or no desire to wait that long and use that many discs, this isn’t a workable solution. A better approach might be to keep the architecture-specific options, but also offer a fat version for people who don’t know which they need and don’t mind the larger download. Of course, that requires extra storage on the part of the project.

The more philosophical argument is that fat binaries help closed-source projects. The foundation of the open source philosophy is that the end user should have freedom and control over their computer. Some open source advocates consider closed-source software to be unacceptable and will have nothing to do with it. I consider such fundamentalism to be counter-productive. Like it or not, closed-source software is a fact of life in the desktop world, especially browser plugins and video drivers. If these don’t work, the user will blame Linux, not the closed-source vendor, so it is important that things work for the user. A working desktop means a larger user base, which in turn gives vendors more reason to work closely with open source projects.

Besides, it’s not as if open source projects don’t get the same benefit. Not every open source application gets distributed through a distribution’s official channel. Some projects are so niche that they don’t have a wide audience, but their users would benefit from a simplified install. Or perhaps an organization has a central application server and wants to make access the same for users regardless of what machine they’re on. Fat binaries aren’t very useful to distributions outside of install time, but they’re a great way to simplify the experience of the average user, and that’s a win.
