What does “open source” mean in 2021?

The licensing discourse of the last few weeks has highlighted a gap between what “open source” technically means and what we’re actually talking about when we use the term. Strictly speaking, open source software is software released under a license approved by the Open Source Initiative. In most practical usage, though, we’re talking about software developed in a particular way. When we talk about open source, we generally talk about the communities of users and developers, not the license. “Open source” has come to describe an ethos that we each define for ourselves.


Releasing open source software is not immoral

Matt Stancliff recently made a bold statement on Twitter:

https://twitter.com/mattsta/status/1117794650742513664

He made this comment in the context of how little money the largest tech companies spend to fund open source. If the five largest companies contributed less than one percent of their annual revenue, open source projects would have two billion dollars of support. These projects are already subsidizing the large corporations, he argues, so they deserve some of the rewards.

This continues the recent trend of people being surprised that people will take free things and not pay for them. Developers who choose to release software under an open source license do so with the knowledge that someone else may use their software to make boatloads of money. Downstream users are under no obligation to remunerate or support upstreams in any way.

That said, I happen to think it’s the right thing to do. I contributed to Fedora as a volunteer for years as a way to “pay back” the community that gave me a free operating system. At a previous company, we made heavy use of an open source job scheduler/resource manager. We provided support on the community mailing lists and sponsored a reception at the annual conference. This was good marketing, of course, but it was also good community citizenship.

At any rate, if you want to make a moral judgment about open source, it’s not the release of open source software that’s the issue. The issue is parasitic consumption of open source software. I’m sure all of the large tech companies would say they support open source software, and they probably do in their own way. But not necessarily in the way that allows small-but-critical projects to thrive.

Toward a more moral ecosystem

Saying “releasing open source software has become immoral” is not helpful. Depriving large companies of open source would also deprive small companies and consumers. And it’s the large companies who could best survive the loss. Witness how MongoDB’s license change has Amazon using DocumentDB instead; meanwhile Linux distributions like Fedora are dropping MongoDB.

It’s an interesting argument, though, because normally when morality and software come up together, the position is that open source (or, more precisely in this context, “free software”) is the moral imperative. That presents us with one possible solution: licensing your projects under a copyleft license such as the GNU General Public License (GPL). Copyleft-licensed software can still be used by large corporations to make boatloads of money, but at least it requires them to make the source, including that of derived works, available. With permissively-licensed software, you’re essentially saying “here’s my code, do whatever you want with it.” Of course people are going to take you up on that offer.
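The mechanical part of that choice is simple, and it’s worth being explicit about it. As an illustrative sketch (not legal advice, and the file names are hypothetical), one common convention is to put the full license text in a LICENSE file at the repository root and tag each source file with an SPDX identifier:

    # scheduler/core.py -- a hypothetical file in a copyleft-licensed project
    # SPDX-License-Identifier: GPL-3.0-or-later

    # The same header in a permissively-licensed project might read:
    # SPDX-License-Identifier: MIT

Either way, the license is the only thing that travels with the code; everything else about how downstream users behave is out of your hands.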

Tech is a garbage industry filled with people making garbage decisions

I work with some great people in the tech space. But the fact that there are terrific people in tech is not a valid reason to ignore how garbage our industry can be. It’s not even that we do bad things intentionally; we’re just oblivious to the possible bad outcomes. There are a number of paths by which I could come to this conclusion, but two recent stories prompted this post.

Can you track me now?

The first was an article last Tuesday that revealed AT&T, T-Mobile, and Sprint made it really easy to track the location of a phone for just a few hundred dollars. They’ve all promised to cut off that service (of course, John Legere of T-Mobile has said that before) and Congress is taking an interest. But the question remains: who thought this was a good idea? Oh sure, I bet they made some money off of it. But did no one in a decision-making capacity stop and think “how might this be abused?” Could a domestic abuser fork over $300 to find the shelter their victim escaped to? This puts people’s lives in danger. Would you be surprised if we learned someone had died because their killer could track them in real time?

It just looks like AI

And then on Thursday, we learned that Ring’s security system is itself very insecure. As Sam Biddle reported, Ring kept unencrypted customer video in S3 buckets that were widely accessible across the company. All you needed was the customer’s email address and you could watch their videos. The decision to keep the videos unencrypted was deliberate: before the acquisition by Amazon, company leadership felt that encrypting the video would diminish the value of the company.

I haven’t seen any reporting that would indicate the S3 bucket was publicly viewable, but even if it wasn’t, it’s a huge risk to take with customer data. One configuration mistake and you could expose thousands of people’s homes to public viewing. Not to mention that anyone on the inside could still use their access to spy on the comings and goings of people they knew.
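For what it’s worth, this class of mistake is cheap to check for automatically. Below is a minimal sketch of my own (not anything Ring actually ran) using Python and boto3: it verifies that a bucket both blocks public access and has default server-side encryption configured. The bucket name is hypothetical, and the sketch assumes AWS credentials are available in the environment.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "example-customer-video"  # hypothetical bucket name

    def blocks_public_access(bucket):
        """True only if every S3 public-access block setting is enabled."""
        try:
            resp = s3.get_public_access_block(Bucket=bucket)
            return all(resp["PublicAccessBlockConfiguration"].values())
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                return False  # nothing is blocking public access at all
            raise

    def has_default_encryption(bucket):
        """True if the bucket has a default server-side encryption rule."""
        try:
            s3.get_bucket_encryption(Bucket=bucket)
            return True
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                return False
            raise

    if not (blocks_public_access(BUCKET) and has_default_encryption(BUCKET)):
        raise SystemExit(BUCKET + " is one mistake away from exposing customer video")

A check like that in a deployment pipeline won’t stop insiders from abusing legitimate access, but it does catch the one-configuration-mistake scenario before customers pay for it.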

If that wasn’t bad enough, it turns out that much of the object recognition Ring touted wasn’t done by AI at all. Workers in Ukraine were manually labeling objects in the video. Showing customer video to employees wasn’t just a side effect of the design; it was an intentional choice.

This is bad in ways that extend beyond this example.

Bonus: move fast and brake things?

I’m a little hesitant to include this since the full story isn’t known yet, but I really love my twist on the “move fast and break things” mantra. Lime scooters in Switzerland were stopping abruptly and letting inertia carry the rider forward, to unpleasant effect. TechCrunch reported that it could be due to software updates happening mid-ride and rebooting the scooter. Did no one think that might happen, or did they just not test it?

Technology won’t save us

I’m hardly the first to say this, but we have to stop pretending that technology is inherently good. I’m not even sure we can say it’s neutral at this point. Once it gets into the hands of people, it is being used to make our lives worse in ways we don’t even understand. We cannot rely on technology to save us.

So how do we fix this? Computer science and similar programs (or really all academic programs) should include ethics courses as mandatory parts of the curriculum. Job interviews should include questions about ethics, not just technical questions. I commit to asking questions about ethical considerations in every job interview I conduct. Companies have to ask “how can this be abused?” as an early part of product design, and they must have diverse product teams so that they get more answers. And we must, as a society, pay for journalism that holds these companies to account.

The only thing that can save us is ourselves. We have to take out our own garbage.

You are responsible for (thinking about) how people use your software

Earlier this week, Marketplace ran a story about Michael Osinski. You probably haven’t heard of Osinski, but he played a role in the financial crisis of 2008. Osinski wrote software that made it easier for banks to package loans into a tradeable security. These “mortgage-backed securities” played a major role in the collapse of the financial sector ten years ago.

It’s not fair to say that Osinski is responsible for the Great Recession. But it is fair to say he did not give sufficient consideration to how his software might be (mis)used. He told Marketplace’s Eliza Mills:

Most people realized that we wrote a good piece of software that we sold in the marketplace. How people use that software is … you know, you really can’t control that.

Osinski is right that he couldn’t control how people used the software he wrote. Whenever we release software to the world, it will get used however the user wants to use it — even if the license prohibits certain fields of endeavor. This could be innocuous misuse, the way graduate students design conference posters in PowerPoint or businesspeople use Excel for all conceivable tasks. But it could also be malicious misuse, the way Russian troll farms use social media to spread false news or sow discord.

So when we design software, we must consider how actual users — both benevolent and malign — will use it. To the degree we can, we should guard against abuse, or at least give users a way to defend themselves from it. We are long past the point where we can pretend technology is amoral.

In a vacuum, technological tools are amoral. But we don’t use technology in a vacuum. The moment we put it to use, it becomes a multiplier for both good and evil. If we want to make the world a better place, we cannot pretend it will happen on its own.

Google Duplex and the future of phone calls

For the longest time, I would just drop by the barber shop in the hopes they had an opening. Why? Because I didn’t want to make a phone call to schedule an appointment. I hate making phone calls. What if they don’t answer and I have to leave a voicemail? What if they do answer and I have to talk to someone? I’m fine with in-person interactions, but there’s something about phones. Yuck. So I initially greeted the news that Google Duplex would handle phone calls for me with great glee.

Of course it’s not that simple. A voice-enabled AI that can pass for human is ripe for abuse. Imagine the phone scams you could pull.

I recently called a local non-profit that I support to increase my monthly donation. They did not verify my identity in any way, so that’s one very obvious avenue for mischief. I could also see tech support scammers using this as a tool in their arsenal — if not to actually conduct the fraud, then to pre-screen targets so that humans only have to talk to likely victims. It’s efficient!

Anil Dash, among many others, pointed out the apparent lack of consent in Google Duplex.

Google’s decision to insert “um” and other verbal placeholders into Duplex makes it seem like they’re trying to hide that it’s an AI. In response to the blowback, Google has said it will disclose when a bot is calling.

That helps, but I wonder how much abuse consideration Google has given this. It will definitely be helpful to people with disabilities that make using the phone difficult. It can be a time-saver for the Very Important Business Person™, too. But will it be used to expand the scale of phone fraud? Could it execute a denial-of-service attack against a business’s phone lines? Could it be used to harass journalists, advocates, abuse victims, and others?

As I read news coverage of this, I realized that my initial reaction didn’t consider abuse scenarios. That’s one of the many reasons diverse product teams are essential. It’s easy for folks who have a great deal of privilege to be blind to the ways technology can be misused. I think my conclusion is a pretty solid one:

The tech sector still has a lot to learn about ethics.

https://twitter.com/BenedictEvans/status/994293491948650496

I was discussing this with some other attendees at the Advanced Scale Forum last week. Too many computer science and related programs do not require any coursework in ethics, philosophy, or the like. Most of computing isn’t really about the computers at all; it’s about the humans and societies the computers interact with. We see the effects play out in open source communities, too: anything that’s not code is immediately devalued. But the last few years should teach us that code without consideration is dangerous.

Ben Thompson had a great article in Stratechery last week comparing the approaches of Apple and Microsoft versus Google and Facebook. In short: Apple and Microsoft are working on AI that enhances what people can do while Google and Facebook are working on AI to do things so people don’t have to. Both are needed, but the latter would seem to have a much greater level of ethical concerns.

There are no easy answers yet, and it’s likely that in a few years tools like Google Duplex won’t even be noticeable because they’ve become so ubiquitous. The ethical issues will be addressed at some point. The only question is whether that happens proactively or reactively.


Silicon Valley has no empathy

That’s not quite fair. The tech industry, regardless of geography, has no empathy. And “no empathy” isn’t quite fair either, but so many of the social issues around technology stem from a lack of it. I’m no half-Betazoid Starfleet counselor, but in my view there are two kinds of empathy: reactive and proactive.

Reactive empathy is, for example, feeling sad when someone’s cat dies. It’s putting yourself in the shoes of someone who has experienced a Thing. Most functional humans (and yes, I’m including the tech sector here) have at least some amount of reactive empathy. Some more than others, of course, but it’s there.

Proactive empathy is harder. That’s imagining how someone else is going to experience a Thing. It requires more imagination. Even when you know you have to do it, it’s a hard skill to practice.

I touched on this a little bit in a post a few weeks ago, but there I framed it as a lack of ethics. I’m not convinced that’s fully the case. More often, the issues are better attributed to a lack of empathy. You know why you can’t add alt-text to GIFs in tweets? Because Silicon Valley has no empathy.

I was thinking about this again last week as I drove down to Indianapolis. I had to pass through the remnants of Tropical Storm Cindy, which meant some very heavy downpours. Like a good citizen, I tried to report issues on Waze so that other drivers would have some warning. As it turns out, “tropical deluge” is not a weather option in Waze. Want to know how I can tell it was developed in the Valley?

It’s so easy to say “it works for me!” and then move on to the next thing. But that’s why it’s so important to bring in people who aren’t like you to help develop your product. Watch how others experience it and you’ll probably find all sorts of things you never considered.

Ethics in technology

Technology has an ethics problem. I don’t mean that it’s evil, although I’d forgive you for thinking that. Just take a look at Theranos or Mylan, or Uber’s parade of seemingly-unending scandals. So yes, there are some actors for whom “they lack a moral compass” is the charitable explanation. No, the main problem is that we spend so little time thinking about ethics.

It’s too easy to think that because your intent is good, your results will be, too. But good intent is not sufficient. It’s important to consider impacts as well, especially the impacts on people not like you. (Note that I use “you” to avoid awkward wording; I’m guilty of this as well.) And when you do consider the impacts, don’t be Robert Moses. Does your new web interface make it harder for people who use screen readers? Is your insulin meter easy to misinterpret for someone whose blood sugar is off?

The work we do in the technology sector every day can have a significant impact on people’s lives. And yet ethics courses are often an afterthought in college curricula. Of course, many in tech are self-trained with no real professional body to provide guidance. This means they get no exposure to professional ethics at all. It’s no wonder that we, as an industry, ignore our ethical obligations.

Actually, it’s about ethics in book reviews

Bruce Schneier shared a story earlier this month about how Amazon is apparently mining information to flag book reviews when the reviewer has a relationship with the author. I write book reviews (though I don’t post them to Amazon), so this seems relevant to my interests. I can see why Amazon would do something like this. People buy books, in part, based on reviews. If Amazon’s reviews are credible, people will be more likely to buy well-reviewed books. Plus: ethics!

The first few purchases would likely be unaffected until the buyer has a chance to form an evaluation of credibility. And even then, how much stock do people put into online reviews of any product or service? I tend to only look at reviews in aggregate, unless the specific reviewer has established credibility.

I hope that my occasional book reviews have established some sort of credibility with my ones of readers. I certainly try to make it clear when I might have a bias (e.g. disclosing stock ownership or a personal friendship). Mostly, though, I’m motivated to give accurate reviews in order to advance my own thought leadership. I’m very self-serving sometimes.

On the whole, I appreciate that Amazon is trying to keep reviews fully disclosed. I just don’t think they’re doing it very well. If a reviewer has a relationship with the author and that relationship is properly disclosed, there’s no reason to suppress the review.

Full disclosure: I own a small number of shares in Amazon.

Is storm chasing unethical?

Eric Holthaus wrote an article for Slate arguing that storm chasing has become unethical. This article has drawn a lot of response from the meteorological community, and not all of the dialogue has been productive. Holthaus makes some good points, but he’s wrong in a few places, too. His biggest sin is painting with too wide a brush.

At the root of the issue is Mark Farnik posting a picture of a mortally wounded five-year-old girl. The girl was injured in a tornado that struck Pilger, Nebraska, and succumbed to her injuries a short time later. To be perfectly clear, I have no problem with Farnik posting the picture, nor do I have a problem with him “profiting” off it. Photojournalism is not always pleasant, but it’s an important job. To suggest that such pictures can’t be shared or even taken does us all a disservice. Nineteen years on, the picture of a firefighter holding Baylee Almon remains the single most iconic image from the Oklahoma City bombing.

None of this would have come up had Farnik not posted the following to Facebook: “I need some highly photogenic and destructive tornadoes to make it rain for me financially.” That’s a pretty awful statement. While I enjoy tornado video as much as anyone, I prefer them to occur over open fields. Nobody I know ever wishes for destruction, and I’d be loath to associate with anyone who did. This one sentence served as an entry point to condemn an entire hobby.

Let’s look at Holthaus’ points individually:

  1. Storm chasers are not saving lives. Some chasers make a point to report weather phenomena to the local NWS office immediately. Some chasers do not. Some will stop to render assistance when they come across damage and injuries. Some will not. In both cases, my own preference is for the former. Patrick Marsh, the Internet’s resident weather data expert, found no evidence that an increase in chasers has had any effect on the tornado fatalities. In any case, not saving lives is hardly a condemnation of an activity. Golf is not an inherently life-saving avocation, but I don’t see anyone arguing that it’s unethical.
  2. Chasing with the intent to profit… adds to the perverse incentive for more and more risky behavior. Some people act stupidly when money or five minutes of Internet fame are on the line; that’s hardly unique to storm chasing. Chasers who put themselves or others in danger are acting stupidly, and the smart ones place a premium on safety. Holthaus also argues that the glee chasers often express in viral videos is disrespectful to the people who live there and may be adversely affected by the storm. Also true. The best videos are shot from a tripod and feature quiet chasers.
  3. A recent nationwide upgrade to the National Weather Service’s Doppler radar network has probably rendered storm chasers obsolete anyway. Bull. Dual-polarization radar does greatly aid the radar detection of debris, but ground truth is still critical. Radar cannot determine if a wall cloud is rotating. It cannot determine if a funnel cloud is forming. It cannot observe debris that does not exist (e.g. if a tornado is over a field). If you wait for a debris signature on radar, you’ve already lost. In a post to the wx-chase mailing list, NWS meteorologist Tanja Fransen made it very clear that spotters are not obsolete. To be clear, spotters and chasers are not the same thing, even if some people (yours truly, for example) engage in both activities.

The issue here is that in the age of social media, it’s easier for the bad eggs to stand out. It’s easy to find chasers behaving stupidly; sometimes they even get their own cable shows. The well-behaved chasers, by their very nature, tend not to be noticed. Eric Holthaus is welcome to stop chasing; that’s his choice. I haven’t chased in several years, but that’s more due to family obligations than anything else. I have chased, and will continue to chase, with the safety of myself and others as the top priority.

Scattered thoughts on sysadmin ethics

Last week, a Redditor posted a rant titled “why I’m an idiot, but refuse to change my ways.” I have to give him (or her, but let’s stick with “him” for the sake of simplicity and statistical likelihood) credit for recognizing the idiocy of the situation, but his actions in this case do a disservice to the profession of systems administration. My initial reaction was moderated by my assumption that this person is early-career and my ability to see some of myself in that post. But as I considered it further, I realized that even in my greenest days, I did not consider unplanned outages to be a license for experimentation.

Not being in a sysadmin role anymore, I’ve had the opportunity to consider systems administration from the perspective of a learned outsider. I was pleasantly surprised to see that the responses to the poster were fairly aghast. There are a great many ethical considerations for sysadmins, partly due to the responsibility of keeping business-critical services running and partly due to the broad access to business and personal data. So much of the job is knowing the appropriate behavior, not just having the appropriate technical skills.

This may be the biggest benefit of a sysadmin degree program: teaching future systems administrators the profession’s ethics. I am by no means trying to imply that most sysadmins are lacking; on the contrary, almost all of the admins I’ve encountered take their ethical obligations very seriously. Nonetheless, a strain of BOFHism still runs through the community. As the world becomes increasingly reliant on computer systems, a more rigorous adherence to professional ethics will be required.