Thoughts on the Functional Source License

Earlier this month, Sentry announced a new software license called the “Functional Source License.” The announcement carries the tagline “Freedom without Free-riding” and purports to address some of the issues with the Business Source License.

An improvement?

It does, in fact, address some issues with the Business Source License. The Business Source License is less of a license and more of a homework problem for a combinatorics class. It has many options that materially change the terms, so being told that software is under the Business Source License doesn’t tell you much. The Functional Source License has one choice: does the license change to the Apache Software License or the MIT license?

While the Functional Source License does reduce complexity, it still suffers from the same problem: it’s not open source. That’s not a problem in itself; it’s a problem because it tries to co-opt the well-known “open source” label without actually being open source. Ultimately, these are attempts to use a license to address a business model problem. As we all should know by now, open source is not a business model.

The “right” approach

I’ve seen some people ask “then what’s the right approach?” The answer depends on what it is you want to do.

If you’re trying to achieve ubiquity, make the source fully closed and give it away to people who aren’t going to pay you money anyway. Or go with a “freemium” or “open core” model.

If you want your users to see the code so that they can trust it, then just make it source-available. This doesn’t make a lot of sense in a software-as-a-service context (which is what the Functional Source License is geared toward) because the user has no way of knowing that the code they’re inspecting is the code you’re running.

If you want to get community contributions, use an open source license. Otherwise, you’re the free rider. A copyleft license won’t prevent competitors from building a product on your software, but it does prevent them from keeping their changes to themselves.

On “non-code contributors”

I was dismayed when I read Justin Dorfman’s “Why I’m proud to be a non-code open source contributor and you should be too” this morning. It’s mostly a great article. Dorfman makes valid points about the value of contributing to open source projects beyond code. Open source needs — and should encourage — these kinds of contributions.

Which brings me to my issue with the article. Dorfman anonymously quotes a well-respected open source leader:

If you find yourself about to use the phrase “non-code contributors” you should stop and use entirely different language.

He calls that a “horrible idea” and suggests that it discourages the kinds of contributions we need. But this take is disingenuous at best. I happen to know the post he’s referring to, and it continues:

Defining people by what they are not is not a valid pathway to inclusion. Want to attract designers? Say so. Want to attract technical writers or community managers? Say so.

Far from suggesting that people should be quiet about non-code contributions, the post is calling on project leaders to stop othering those contributors and explicitly value them. The author is saying that lumping everything that isn’t code together as “not code” diminishes it. It’s a shame that the article misrepresents the post when it could simply have agreed with what was actually written.

Does open source matter?

Matt Asay’s article “The Open Source Licensing War is Over” has been making the rounds this week, as text and subtext. While his position is certainly spicy, I don’t think it’s entirely wrong. “It’s not that open source doesn’t matter, but rather it has never mattered in the way some hoped or believed,” Asay writes. I think that’s true, and it’s our fault.

To the average person, and even to many developers, the freeness or openness of the software doesn’t matter. They want to be able to solve their problem in the easiest (and cheapest) way. Often that’s open source software. Sometimes it isn’t. But they’re not sitting there thinking about the societal impact of their software choices. They’re trying to get a job done.

Free and open source software (FOSS) advocates often tout the ethical benefits of FOSS. We talk about the “four essential freedoms”. And while those should matter to people, they often don’t. I’ve said before — and I still believe it — FOSS is not the end goal. Any time we end with “and thus: FOSS!”, we’re doing it wrong.

FOSS advocacy — and I suspect this is true of other advocacy efforts as well — tends to try to meet people where we want them to be. The problem, of course, is that people are not where we want them to be. They’re where they are. We have to meet them there, with language that resonates with them, addressing the problems they currently face instead of hypothetical future problems. This is all easier said than done, of course.

Open source licenses don’t matter — they’ve never mattered — except as an implementation detail for the goal we’re trying to achieve.

Ended, the clone wars have?

I have done my damnedest to avoid posting publicly about Red Hat’s decision to stop publishing RHEL srpms. For one, the Discourse around it has been largely stupid. I didn’t want any part of the mess. For another, I didn’t have anything particularly novel to add. I’m breaking my silence now because the dust seems to have settled in a very beneficial way that I haven’t seen widely discussed. (To be fair, since I’ve been trying to avoid the discussion, I probably just missed it.)

Full disclosure: as you may know, my role at Red Hat was eliminated earlier this year. This does not make me particularly inclined to give Red Hat as a company the benefit of the doubt, but I try to be fair. Also: during my time at Red Hat, I was the program manager for the creation of CentOS Stream. However, I did not make business decisions about it, nor did I have any say in the termination of CentOS Linux or the recent srpm change.

My take on the situation

I won’t get into the entire history of Red Hat Enterprise Linux, clones, or competitors here. Joe Brockmeier’s ongoing “Clone Wars” series covers the long-term history in detail. I do think it’s worth giving my take on the last few years, though, so you understand where my view of the future comes from.

First of all, I don’t think Red Hat (or IBM, if you’d rather) acted with evil intent. That doesn’t mean I think the decision was correct, but I do think it was a legitimate business choice. I disagree with the decision, but as much as they didn’t ask me before, they sure as hell don’t ask me now.

If RHEL development had started out with a CentOS Stream model, I’m not sure CentOS Linux (and the other RHEL clones) would have existed in the first place. But we don’t live in that timeline, so RHEL clones exist.

There are plenty of valid reasons for wanting RHEL but not wanting to pay for the subscription. It’s not just that people are being cheap. Until 2018, users of Spot instances on Amazon Web Services couldn’t use RHEL. In a former role, we had RHEL customers who used CentOS Linux in AWS precisely because they wanted to use Spot instances. Others used CentOS Linux in AWS because they didn’t want to deal with subscription management for environments that might come and go. (I understand that subscription-manager is much easier to work with now.)

So while Red Hat may be right to say that RHEL clones don’t add value to Red Hat (and I disagree there, too), RHEL clones clearly add value for their users, which include Red Hat customers. It’s fair to say that, for some people, the perceived value of a RHEL subscription does not match what Red Hat charges for it. How to solve that mismatch is not a problem I’m concerned with.

So what now?

Two community-driven clones popped up in the immediate aftermath of the death of CentOS Linux: Rocky Linux and AlmaLinux. Both of these aimed to fill the role formerly held by CentOS Linux: a bug-for-bug clone of Red Hat Enterprise Linux. I never quite understood what differentiated them in practice.

But now duplicated effort becomes differentiated effort. Rocky Linux will continue to provide a bug-for-bug clone. AlmaLinux, meanwhile, will shift to making an ABI-compatible distribution — one where “software that runs on RHEL will run the same on AlmaLinux.” This differentiated effort allows those communities to serve different use cases. They now have their own niche to succeed or fail in.

Time will tell, but I think Alma’s approach is a better fit for most clone users. I suspect that most people don’t need bug-for-bug compatibility (except in the XKCD #1172 scenario). For many use cases, CentOS Stream is suitable. Of course, people make decisions based on what they think they need, not what they actually need. Third-party software vendors may end up being the deciding factor.

Given the different approaches Rocky and Alma are taking, I think Red Hat’s decision ended up being beneficial to the broader ecosystem. I don’t think it was done with that intent, and I am not arguing that the ends justify the means, but the practical result seems positive on the whole.

#inaction bcotton

On 25 June 2018, I published a post called “It’s hattening”. After years of rejected applications, I was finally starting a job at Red Hat. On 24 April 2023, Red Hat announced a 4% reduction in global staff. As a member of that 4%, today is my last day at Red Hat.

What does this mean for Ben?

This is the first time I’ve been laid off from a job. I hope it will be the last, but who can say? I’d be lying if I said I haven’t felt a big range of emotions in the past three weeks: confusion, anger, sadness, amusement.

But I’ve also felt loved. I’ve received so much support from people since the news started spreading. It’s like that end scene of “It’s a Wonderful Life” and I’m George Bailey. I’m proud of the contributions I’ve made to the Fedora community over the last five years, and it feels good to have others recognize that.

While I won’t be contributing as the Fedora Program Manager anymore, I was a Fedora contributor long before I joined Red Hat, and I’m not letting them take that away from me. I’ll still be around Fedora in ways that spark joy, although perhaps not much at first as I let my wounds heal.

I’ve had the great fortune to build an incredible professional and personal network over the years. I’m already pursuing a few opportunities and if those don’t pan out, I’ll be asking for your help finding more. In the meantime, I have (at least) a few weeks to relax for a bit. There’s a ton of work to do around the house, many trails to hike, Program Management for Open Source Projects to promote, and an embarrassingly-large backlog for Duck Alignment Academy articles.

What does this mean for Fedora?

I’ve told folks that if Fedora falls off the rails, then I have failed. I’m working with Matthew, Justin, and others to ensure coverage of the core job duties one way or another. I’ve worked hard over the years to automate tasks that can be automated. The documentation is far more comprehensive than what I inherited.

No doubt there are gaps in what I’ve left for my successors. However, my goal is that in a few months, nobody will notice that I’m gone. That’s my measure of success. The only reason I’ve been successful in my role is because of the work done by my predecessors: John, Robyn, Jaroslav, and Jan.

As to what the broader implication behind the loss of my position might be, I don’t know. There’s no indication that my role was targeted specifically. There are definitely people in Red Hat who continue to view Fedora as strategically important. I wish I had a clearer understanding of how they chose people/roles to cut, but I’ll probably never know the process. What I do know is that I fully intend to still be participating in the Fedora community when my account hits the 20-year mark in May 2029.

In defense of Fedora’s release cycle

Earlier this week, Thorsten Leemhuis published a thoughtful post about what he’d change if he magically became the supreme leader of Fedora. In that post and subsequent commentary on Mastodon and Fedora Discussion, he talked about changing Fedora’s release cycle. Since the Fedora Linux release process is my job, I figured I should explain why I disagree.

Integration projects are different

If you haven’t read the post, you should. But here’s the short version: Fedora Linux uses a release model rooted in the 1990s and should move to a “modern” model. Thorsten suggests a one-month cadence for those who want the latest versions and a one-year “steady” release. Such a model has worked well for Firefox, he argues, and so it should work for Fedora.

The key reason I think this is wrong is that Firefox is a development project whereas Fedora is an integration project. Integration projects don’t write a lot of code; they take the work of others and turn it into a coherent whole. This is a fundamentally different kind of work, and it takes longer by necessity.

You can’t reliably integrate disparate pieces when they’re in constant motion. That’s why we have freezes leading up to the beta and final releases — they give the QA team time to test against a stationary target. It takes time to run through all of the test cases that make Fedora Linux a reliable operating system. So the choice becomes either reducing pre-release testing or spending a significant portion of the cycle in a freeze, which limits the usefulness of the one-month cycle.

You can solve some of this with automated testing. And the QA team does do a lot of automated testing. But those tests still take time, and there are a lot of interrelated parts in a Linux distribution.

Six months isn’t magic

There’s nothing objectively correct about a six-month release cycle. It’s six months mostly because that’s how you get two releases a year. If the calendar had 10 months, the release cycle would be five. But there is a lower bound where you’ve become a de facto rolling release, even if you still have discrete releases. I don’t know exactly where that boundary is, but I suspect that one month is at or just beyond it.

Similarly, there’s an upper limit where you’re now a slow, plodding project. Again, I can’t say where the line is. Six months may be uncomfortably close to it, but I suspect it’s closer to a year. And, of course, it depends on the nature of the specific project.

So there’s no particular reason Fedora Linux couldn’t move to a shorter release cycle. Five months is totally doable. Four is possible. Three would require a tremendous amount of work before it could be considered. But what’s the benefit of going to a shorter cycle? Does five months instead of six make a meaningful difference? At least with six months, you know there’s a release targeted for April and October. Predictability is nice.

Solving the actual problem

The bigger issue, though, is that I don’t think people actually want this. Yes, you might want your web browser and other applications to update frequently. But that doesn’t mean you want your compiler or Python interpreter or C libraries to update frequently. Most people will avoid this in favor of the “steady” stream. This eliminates the intended benefit to upstream projects.

The people who do want everything to update quickly use a rolling release distribution, something that Thorsten explicitly says his proposal is not.

Fundamentally, the proposal looks at the problem the wrong way. The problem isn’t that a six month cycle is too long. The problem is that application delivery is coupled to operating system delivery. Most people want the latest versions of the applications they care about and for everything else to remain unchanged. The challenge, of course, is that not everyone draws that distinction in the same way.

We unsuccessfully tried to solve this with Modularity. Flatpak, at least for graphical applications, offers another attempt to solve this problem.

Historically, the system and application layers have been distributed together. Figuring out how to decouple these (including how to draw the line between them) is the interesting work. And it provides real value to the end users.

Open source is selfish: that’s good and bad

Back in May, Devin Prater wrote an excellent piece on Medium titled “Linux Accessibility: an unmaintained Mess”. Devin talks about the poor state of accessibility on mainstream Linux distributions. While blind people have certainly used Linux, it’s generally not an easy task. There’s a simple explanation for this: most open source contributors aren’t blind.

There’s no rule that you can’t make accessible software if you don’t need that particular accessibility feature. But for many open source contributors, their contributions are based on “scratching their own itch.” People work on the things that are personally interesting to them or impact them in some way.

That’s a good thing! It means they’re invested in how well the software works. I’m sure you’ve used some applications where you thought “there’s no way the people who made this actually used it.”

The problem comes when we’re excluding potential users and contributors. People with vision problems can’t contribute because they can’t easily use the software. And when they can use it, the tools for contributing add another barrier. I can’t imagine trying to understand a patch or an XML file read aloud, but there are people who have to do that.

In Program Management for Open Source Projects, I wrote “software is only useful to the degree that people can use it”. I don’t have a great solution. As a community, we need to figure out how to keep the good part of the selfishness while being more inclusive.

SaaS makes the Linux desktop work

That take got fewer bites than I expected, so either it’s not very spicy or I need to repeat myself. But I want to give myself some room to expand on this idea.

To the average user, operating systems are boring. In fact, they’re mostly irrelevant. With the exception of some specialized applications (either professional or gaming), the vast majority of users could sit down at any computer and do what they need to do. Give them a web browser, and they can get to everything else.

For the purposes that matter, the Linux desktop has won. Except it’s not traditional distributions like Fedora or Debian. It’s Android and ChromeOS. And it’s not on desktop PCs. It’s on phones, tablets, and some laptops. If we meant something else when we spoke of “The Year of the Linux Desktop”, we should have been more specific.

That said, Linux desktops as Linux enthusiasts envisioned them are suitable for mainstream users. But it’s not because of native, locally-running apps; it’s because software-as-a-service (SaaS) makes the OS irrelevant.

This is not a cause for alarm. It’s actually an opportunity. It’s never been easier to move someone from Windows or macOS to Linux. You don’t have to give them a mapping of all of their old apps, you just say “here’s your browser. Have fun!” That’s not to say that the ecosystem lacks first-rate applications. Great FOSS applications exist for all OSes. But with SaaS, the barrier to changing the OS is dramatically reduced.

Of course, SaaS has problems, both technical and philosophical. We shouldn’t ignore those. The concerns have just moved up to another layer. But we have the opportunity to move more people to Linux while we — as both FOSS communities and society in general — address the concerns of SaaS. Or, perhaps more likely, move them up another layer.

Balancing advancement and legacy

Later today, I’ll submit a contentious Change proposal to the Fedora Engineering Steering Committee. Several contributors proposed deprecating support for legacy BIOS starting in Fedora Linux 37. The feedback on the mailing list thread and in social media is…let’s call it “mixed”.

The bulk of the objections distill down to: I have old hardware and it should still work. Indeed, when proprietary operating system vendors (in both the PC and mobile spaces) embrace varying forms of planned obsolescence, open source operating systems can allow users to continue using the hardware they own. Why shouldn’t it continue to be supported?

Nothing comes for free. Maintaining legacy support requires work. Bugs need fixes. Existing code can hamper the addition of new features. Even in a community-driven project, time is not unlimited. It’s hard to ask people to keep supporting software that they’re no longer interested in.

I think some distros should strive to provide indefinite support for older hardware. I don’t think all distros need to. In particular, Fedora does not need to. That’s not what Fedora is. “First” is one of our Four Foundations for a reason. Other distros focus on long-term support and less on integrating the latest from upstreams. That’s good. We want different distros to focus on different benefits.

That’s not to say that we should abandon old hardware willy-nilly. It’s a balance between legacy support and advancing innovation. The balance isn’t always easy to find, but it’s there somewhere. There are always tradeoffs.

I don’t have a strong opinion on this specific case because I don’t know enough about it. We have to make this decision at some point. Is that now? Maybe, or maybe not.

Sidebar: it’s hard to know

One of the benefits of (most) open source operating systems also makes these kinds of decisions harder. We don’t collect detailed data about installations. This is a boon for user privacy, but it means we’re generally left guessing about the hardware that runs Fedora Linux. Some educated guesses can be made from the architectures mentioned in bug reports or from opt-in hardware surveys. But they’re not necessarily representative. So we’re largely left with hunches and anecdata.

Maybe we should think about how we use language ecosystems

Over the weekend, Bleeping Computer reported on thousands of packages breaking because the developer of a package inserted infinite loops. He did this with intent. The developer had grown frustrated with his volunteer labor being used by large corporations with no compensation. This brings up at least three issues that I see.

FOSS sustainability

How many times have we had to relearn this lesson? A key package somewhere in the dependency chain relies entirely on volunteer or vastly-underfunded labor. The XKCD “Dependency” comic is only a year and a half old, but it represents a truth that we’ve known since at least the 2014 Heartbleed vulnerability. More recently, a series of log4j vulnerabilities made the holidays very unpleasant for folks tasked with remediation.

The log4j developers were volunteers, maintaining code that they didn’t particularly like but felt obligated to support. And they worked their butts off while receiving all manner of insults. That seemingly the entire world depended on their code was only known once it was a problem.

Many people are paid well to maintain software on behalf of their employer. But certainly not everyone. And companies are generally not investing in the sustainability of the projects they rely on.

We depend on good behavior

The reason companies don’t invest in FOSS in proportion to the value they get from it is simple. They don’t have to. Open source licenses don’t (and can’t) require payment. And I don’t think they should. But companies have to see open source software as something to invest in for the long-term success of their own business. When they don’t, it harms the whole ecosystem.

I’ve seen a lot of “well, you chose a license that let them do that, so it’s your fault.” Yes and no. Just because people can build wildly profitable companies while underinvesting in the software they use doesn’t mean they should. I’m certainly sympathetic to the developer’s position here. Even the small, mostly unknown software that I’ve developed sometimes invokes an “ugh, why am I doing this for free?” from me — and no one is making money off it!

But we also depend on maintainers behaving. When they get frustrated, we expect they won’t take their ball and go home, as in the left-pad case, or insert malicious code, as in this case. While the anger is understandable, a lot of other people got hurt in the process.

Blindly pulling from package repos is a bad idea

Speaking of lessons we’ve learned over and over again, it turns out that blindly pulling the latest version of a package from a repo is not a great idea. You never know what’s going to break, even if the breakage is accidental. This still seems to be a common mode of operation in some language ecosystems, and it baffles me. With the increasing interest in software supply chains, I wonder if that’s an area where large companies will suddenly decide to start paying attention.
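To make the alternative concrete, here’s a minimal sketch of the difference in an npm-style manifest (the package name and versions are hypothetical, not taken from the incident above). A caret range floats:

    "dependencies": {
      "some-utility": "^1.4.0"
    }

With that spec, any future 1.x release of some-utility gets pulled in automatically the next time the project is installed or built. An exact pin does not:

    "dependencies": {
      "some-utility": "1.4.0"
    }

Here, new code arrives only when someone changes that line on purpose and can review what changed. Committing a lockfile (package-lock.json in npm, or the equivalent in other ecosystems) does the same job for transitive dependencies; the failure mode is a build pipeline that resolves “latest” from the registry with nothing checked in to review.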