Maybe we should think about how we use language ecosystems

Over the weekend, Bleeping Computer reported on thousands of packages breaking because a developer deliberately inserted infinite loops into his own package. The developer had grown frustrated with his volunteer labor being used by large corporations with no compensation. This raises at least three issues that I see.

FOSS sustainability

How many times have we had to relearn this lesson? A key package somewhere in the dependency chain relies entirely on volunteer or vastly-underfunded labor. The XKCD “Dependency” comic is only a year and a half old, but it represents a truth that we’ve known since at least the 2014 Heartbleed vulnerability. More recently, a series of log4j vulnerabilities made the holidays very unpleasant for folks tasked with remediation.

The log4j developers were volunteers, maintaining code that they didn’t particularly like but felt obligated to support. And they worked their butts off while receiving all manner of insults. The fact that seemingly the entire world depended on their code only became apparent once it was a problem.

Many people are paid well to maintain software on behalf of their employer. But certainly not everyone. And companies are generally not investing in the sustainability of the projects they rely on.

We depend on good behavior

The reason companies don’t invest in FOSS in proportion to the value they get from it is simple. They don’t have to. Open source licenses don’t (and can’t) require payment. And I don’t think they should. But companies have to see open source software as something to invest in for the long-term success of their own business. When they don’t, it harms the whole ecosystem.

I’ve seen a lot of “well you chose a license that let them do that, so it’s your fault.” Yes and no. Just because people can build wildly profitable companies while underinvesting in the software they use doesn’t mean they should. I’m certainly sympathetic to the developer’s position here. Even the small, mostly unknown software that I’ve developed sometimes invokes an “ugh, why am I doing this for free?” from me—and no one is making money off it!

But we also depend on maintainers behaving. When they get frustrated, we expect they won’t take their ball and go home as in the left-pad case or insert malicious code as in this case. While the anger is understandable, a lot of other people got hurt in the process.

Blindly pulling from package repos is a bad idea

Speaking of lessons we’ve learned over and over again, it turns out that blindly pulling the latest version of a package from a repo is not a great idea. You never know what’s going to break, even when the change is accidental. This still seems to be a common practice in some language ecosystems, and it baffles me. With the increasing interest in software supply chains, I wonder if we’ll start seeing this become an area where large companies suddenly decide to start paying attention.
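
To make that concrete with npm, the ecosystem in this story, here’s a hypothetical dependencies stanza from a package.json (the package names and versions are placeholders):

"dependencies": {
    "some-package": "^1.4.0",
    "other-package": "1.4.0"
}

The caret range “^1.4.0” matches any 1.x release at or above 1.4.0, so a sabotaged 1.4.2 gets installed the moment it’s published. The exact pin “1.4.0” stays put until you change it. Lockfiles and npm ci go further by recording the exact resolved version of every transitive dependency.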

How I configure sshd at home

My “server” at home isn’t particularly important to the outside world. But by virtue of being on the Internet, it’s subject to a lot of SSH logins. The easiest thing to do is to shut it off from the outside world. But I need to access it when away from home, so that’s not a particularly useful solution.

So what I’ve done is use the SSH daemon’s (sshd) configuration to reduce the risk profile. The first thing I wanted to do was forbid login as root:

PermitRootLogin no

I also don’t want anyone to be able to log in with passwords. “Anyone” is essentially me here, but since I have sudo on the box, anyone who figures out my password can get root remotely.

PasswordAuthentication no
ChallengeResponseAuthentication no

Finally, I want to restrict remote login to only explicitly-permitted users. I do this with a dedicated Unix group that I call “sshusers”.

AllowGroups sshusers
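
For completeness, putting an account into that group is a one-time step on the server (the username is just a placeholder):

groupadd sshusers
usermod -aG sshusers someuser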

These are pretty standard changes and not really worth a blog post. But it turns out that sshd has a very flexible configuration. When a client is coming from inside the LAN, I want to enable password authentication. This is particularly helpful when I’m installing a new system and don’t have SSH keys set up yet.

Match Address 192.168.1.*
    PasswordAuthentication yes

Also within the LAN, it’s easier to run Ansible playbooks across machines if the root user can SSH in with a key. So I combine user and address matching to permit key-based root login only from the server with the Ansible playbooks.

Match User root Address 192.168.1.10
    AllowGroups root sshusers
    PermitRootLogin prohibit-password

Finally, I want my ex to be able to access the server in order to retrieve photos, etc. So I set up her account so that she can use an sftp client but can’t get a shell (not that she would anyway, but it was a fun challenge to set this up).

Match User angie
    ForceCommand internal-sftp
    PasswordAuthentication yes
    PermitTunnel no
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no
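
One caution that applies to everything above: a syntax error in sshd_config can lock you out of a remote machine when the daemon restarts, so it’s worth validating the file before reloading. sshd’s test mode exits silently if the configuration is valid and prints the offending line if it isn’t:

sshd -t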

Why didn’t you … ?

The configuration above isn’t the only way to secure my SSH server from the outside world. It’s not even necessarily the best. I could, for example, move SSH to a different port, which would cut down on the drive-by attempts significantly. I resisted that in the past because I felt “security through obscurity isn’t security.” But in practice, it can be a layer in a more secure approach. In the past, I also recall some clients I used (particularly on mobile) not having the ability to use a non-default port. If that recollection is correct, it seems to also be outdated now. So basically I’m still on port 22 because of inertia.
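
For what it’s worth, the change I keep not making is a one-liner (2222 is an arbitrary example):

Port 2222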

I could also set up a VPN server and use that for remote access. That requires an additional service to manage, of course. And it also presents challenges when I’m also connected to a work VPN server. The sshd configuration approach is a simpler way for my needs.

If everyone followed good password advice, we’d be less secure

Passwords are hard. To be useful, they must be hard to guess. But the rules we put in place to make them hard to guess also make them hard to remember. So people do the minimum they can get away with.

Earlier this week, security company Webroot took a look at the unintended consequences of password constraints. The rules organizations set in order to ensure passwords are sufficiently complex reduce the total number of possible passwords. This can make automated password guessing more effective.
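
As a rough back-of-the-envelope illustration (my arithmetic, not Webroot’s), a rule like “must contain a digit” tells an attacker exactly which slice of the space they can skip:

# Space of 8-character passwords over the 95 printable ASCII characters
total = 95 ** 8

# Passwords containing no digit draw from the other 85 characters
no_digit = 85 ** 8

# A "must contain a digit" rule invalidates the no-digit slice,
# so an informed attacker never has to guess it.
remaining = total - no_digit
print(f"{remaining / total:.1%} of the original space remains")  # ~58.9%

An attacker who knows the rule gets to ignore roughly 40% of the candidates for free.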

Good passwords are easy for the user to remember and hard for computers and other humans to guess. Let’s say I wanted to use a password like 2Clippy2Furious!! Various password checking sites rate it highly. It’s 17 characters long and contains upper- and lower-case letters, digits, and special characters. But because it contains consecutive repeating characters, some companies won’t allow it.

Writing for Webroot, Randy Abrams says “it’s length, not complexity that matters.” And he’s right. That’s the point behind the “correct horse battery staple” password in XKCD #936. So let’s all do that, right?

Well…it’s not so simple. If I were trying to brute force passwords and I knew everyone was using four (or five or six) words, suddenly instead of “CorrectHorseBatteryStaple” being 25 characters, it’s four. Granted, the alphabet grows from 95 characters to (using /usr/share/dict/words on my laptop) 479,828 words. “CorrectHorseBatteryStaple” is many powers of 10 more secure if the attacker doesn’t know you’re using words.
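
To put rough numbers on that claim (a sketch using the counts above):

import math

charset = 95          # printable ASCII characters
dictionary = 479_828  # words in my laptop's /usr/share/dict/words

char_bits = 25 * math.log2(charset)    # 25 random characters: ~164 bits
word_bits = 4 * math.log2(dictionary)  # 4 random words: ~75 bits

gap = (char_bits - word_bits) / math.log2(10)
print(f"{char_bits:.0f} vs. {word_bits:.0f} bits, about 10^{gap:.0f} times stronger")

Treated as 25 random characters, the password is worth about 164 bits; treated as 4 random words, about 75. The difference is the “many powers of 10”: roughly 27 of them.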

And let’s be real: they don’t. This hypothetical weakness has a long time before it becomes a real concern. Don’t believe me? Just look at the password dumps when a site gets hacked. There are a lot of really bad passwords out there. If we took all the constraints off (except for minimum length), people would just use really dumb, easily-guessed passwords again. But it amuses me that if everyone followed good password advice, we’d actually make it worse for ourselves. Passwords are hard.

Sidebar: Yes, I know

The savvier among you probably read this and thought “it’s better to use a random string that you never have to memorize because your password manager handles it for you. Just set a very long and memorable password on that and you’re good to go.” Yes, you’re right. But people, even those who use password managers, will often fall back to memorable passwords for low-risk sites or passwords they have to use often (e.g. to log in to their computer so they can access the password manager).

What might government regulation of infosec look like?

“Terrible” is the most likely answer. But let’s assume we’re talking about regulation that is effective and sound (from both a technical and civil liberties perspective).

On Sunday’s episode of This Week in Tech, the panel discussed the possibility of government regulation of internet security. I’m not fully convinced that any regulation is necessary, but the case for some form of consumer protection grows with every breach. And I don’t think it likely that companies will self-regulate.

So as neither a policy nor technical security expert, what sort of plan would I draw up?

Good infosec regulation

Any workable laws or regulations would have to be defense-oriented. It may sound like victim-blaming, but I don’t see any other path. Companies must meet some minimum standard of protection or face non-trivial fines in the event of a breach. But if a breach occurs and the company met the standard, I would not punish them. Even the best organization is going to get compromised in some way at some point.

In an ideal scenario, the punishment would instead be on the bad actor. The international nature of the Internet makes that a near impossibility. And given that a company is acting with some degree of public trust, I don’t find it unjust to demand a certain level of security compliance.

In order to avoid a heavy administrative burden, I wouldn’t require external audits (at least not for companies below a certain size). It could be something as simple as “document the security plan and show that you’ve kept to it”. The plan would have some number of required elements (e.g. customer passwords aren’t stored in plaintext) and a further list of suggested elements maintained by an expert body. So long as your plan isn’t glaringly incompetent and you stick to it, you’re in the clear from a government punishment perspective.

Of course, certain systems would still be subject to a heavier burden. I wouldn’t do away with HIPAA or PCI in favor of this new model. But you can see how less-sensitive services would be nudged toward better consumer protection.

Bad infosec regulation

So what wouldn’t I include? I certainly would not require any encryption backdoor (I might even prohibit it) or prohibit users’ use of encryption. That’s an obvious choice in light of the civil liberty requirement.

I also would not include any specific technology or process in the law or regulation itself. The technology landscape is too dynamic and diverse for that to be effective. The best we can hope for is to set broader principles that need updating on the order of years.

The reality of regulation

I don’t see any meaningful regulation happening in the near future. For one, it’s a very difficult problem to solve from both a technical and a policy perspective. More importantly, it could be politically hot, and we all know how pleasant the current environment in Washington is.

At most, we may see a few laws, probably bad, that nibble around the edges. But as the digital age continues to change society as we’ve known it, the law must catch up somehow.

How not to code your bank website

When is a number not a number? When it is a PIN. Backstory: my bank recently overhauled its website. On the whole, it’s an improvement, but it hasn’t been entirely awesome. One of the changes was that special characters were no longer allowed in the answers to security questions. As it turns out, that’s a good way to lock your users out. Me included.

Helpfully, if you lock yourself out, there’s a self-service unlock feature. You just need your Social Security Number and your PIN (and something else that I don’t recall at the moment). Like any good form, it validates the fields before proceeding. Except holy crap: if your PIN begins with 0, pressing “Submit” strips the leading zero, the PIN field becomes three characters, and you can never proceed. That’s right: it treats the PIN as an integer when really it should be a string.
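
Here’s a minimal sketch of that failure mode (in Python for brevity; the bank’s stack is surely something else):

# A PIN is a string of digits, not a number.
pin = "0123"

# Treating it as an integer silently drops the leading zero...
as_int = int(pin)
print(as_int)            # 123

# ...so a naive "must be 4 digits" check now fails, forever.
print(len(str(as_int)))  # 3

# Keep it a string and validate the characters instead.
print(len(pin) == 4 and pin.isdigit())  # True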

I’ve made my share of dumb mistakes, so I try to be pretty forgiving. But bank websites need to be held to a very high standard, and this one clearly misses the mark. Breaking existing functionality and mistreating PINs are bad enough, but the final part that led me to a polite-but-stern phone call was the fact that special characters are not allowed in the password field. This is 2016, and if your website can’t handle special characters, I have to assume you’re doing something terribly, terribly wrong.

In the meantime, I’ve changed my PIN.

What3Words as a password generator

One of my coworkers shared an interesting site last week. What3Words assigns a three-word “address” to every 3m-by-3m square on Earth. The idea behind the site is that many areas of the world don’t have street numbers and names, and a three-word combination is much easier to remember than latitude/longitude pairs. Similar combinations are deliberately placed far apart so as to make them unambiguous.

It’s an interesting idea, but I immediately began thinking of a different use for it. What if people used it to come up with long, memorable, and hard-to-guess passwords? After all, the longer a password is (generally speaking), the better it is. And while correcthorsebatterystaple might be amusing, it’s much easier to remember a place. So you pick a memorable spot on the map and now you have a long password that you can look up if you forget it.
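
As a sketch of the mechanics (using “filled.count.soap”, a well-known What3Words example address, since I’m not about to publish a spot I’d actually use):

def w3w_to_password(address: str) -> str:
    """Turn a three-word address like 'filled.count.soap' into one long password."""
    return "".join(word.capitalize() for word in address.split("."))

print(w3w_to_password("filled.count.soap"))  # FilledCountSoap

Any consistent transformation (capitalization, a separator character) works, as long as you remember which one you picked.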

XKCD "Password Strength" by Randall Munroe. Used under the Creative Commons Attribution-NonCommercial 2.5 license.

This method isn’t perfect. The main problem is that with a 3m-by-3m grid, the address is very sensitive to small differences in location. But especially for the technically unsavvy, it can be a good way to enable better password habits.

Sidebar: why Randall Munroe is wrong (-ish)

There’s another reason What3Words isn’t perfect, and the XKCD cartoon above is subject to the same weakness. If a password cracker knows people are mostly using concatenated words, they’ll start guessing combinations of words instead of combinations of characters. These sorts of passwords are stronger when they’re rare. Of course, there are trivial ways to mitigate the risks (insertion of special characters, selective capitalization, etc.).

Still, given the choice between a 20-character random string and a 20-character set of words, I’ll take the random string as my password (unless the site/app disables paste, in which case I’ll cry). I use a password manager precisely so I don’t have to worry about trying to balance security and memorability. The What3Words method could be helpful as a password for my password safe, though.

Another reason to disable what you’re not using

A common and wise security suggestion is to turn off what you’re not using. That may be a service running on a computer or the Bluetooth radio on a phone. This reduces the potential attack surface of your device and, in the case of phones, tablets, and laptops, helps to preserve battery life. On the way to a family gathering over the weekend, I discovered another, less intriguing reason.

As I exited the interstate, I passed a Comfort Inn. Having stayed at Comfort Inns in the past, my phone remembered the Wi-Fi network and apparently tried to connect. The signal was just strong enough that my phone switched from 4G to Wi-Fi, and since the Comfort Inn had a registration portal, this messed up the navigation in the maps app. Oops.

I turned the Wi-Fi antenna off for the rest of the trip. It was a good reminder to shut off what I’m not using.

CERIAS Recap: Featured Commentary and Tech Talk #3

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is the final post summarizing the talks I attended.

I’m combining the last two talks into a single post. The first was fairly short, and by the time the second one rolled around, my brain was too tired to focus.

Thursday afternoon included a featured commentary from The Honorable Mark Weatherford, Deputy Under Secretary for Cybersecurity at the U.S. Department of Homeland Security. Mr. Weatherford was originally scheduled to speak at the Symposium in person, but restrictions in federal travel budgets forced him to present via pre-recorded video. He opened with the observation that “99% secure means 100% vulnerable.” There are many cases where a single failure in security resulted in compromise.

The cyber threat is real. DHS Secretary Napolitano says infrastructure is dangerously vulnerable to cyber attack. Banks and other financial institutions have been under sustained DDoS attack, and it has become very predictable. In the future, there will be more attacks, they will be more disruptive, and they will be harder to defend against.

So what does DHS do in this space? DHS provides operational protection for the .gov domain. They work with the .com sector to improve protection, especially for critical infrastructure. DHS responds to national events and works with other agencies to foster international cooperation.

Cybersecurity got two paragraphs in President Obama’s 2013 State of the Union address. Obama’s recent cybersecurity executive order has goals of establishing an up-to-date cybersecurity network and enhancing information sharing among key stakeholders. DHS is involved in the Scholarship for Service student program which is working to create professionals to meet current and future needs.

The final session was a tech talk by Stephen Elliott, Associate Professor of Technology Leadership and Innovation at Purdue University, entitled “What is missing in biometric testing.” Traditional biometric testing is algorithmic, with well-established metrics and methodologies. Operational testing is harder to do because test methodologies are sometimes dependent on the test. Many papers have been written about the contribution of individual error to performance. Some papers have been written on the contribution of metadata error. Elliott is focused on training: how users get accustomed to devices, how they remember how to use them, and how training can be provided to users with a consistent message.

One way to improve biometrics is understanding the stability of the user’s response. If we know how stable a subject is, we can reduce the transaction time by requiring fewer measurements. Many factors, including the user, the agent, and system usability, affect the performance of biometric systems. Improving performance is not a matter of simply improving the algorithms, but improving the entire system.

CERIAS Recap: Panel #3

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is one of several posts summarizing the talks I attended.

The “E” in CERIAS stands for “Education”, so it comes as no surprise that the Symposium would have at least one event on the topic. On Thursday afternoon, a panel addressed issues in security education and training. I found this session particularly interesting because it paralleled many discussions I have had about education and training for system administrators.

Interestingly, the panel consisted entirely of academics. That’s not particularly a surprise, but it does bias the discussion toward higher education issues rather than vocational-type training. This is often a contentious issue in operations education discussions; I’m not sure if such a divide exists in the infosec world. Three Purdue professors sat on the panel: Allen Gray, Professor of Agriculture; Melissa Dark, Professor of Computer & Information Technology and Associate Director of Educational Programs at CERIAS; and Marcus Rogers, Professor of Computer & Information Technology. They were joined by Ray Davidson, Dean of Academic Affairs at the SANS Technology Institute; and Diana Burley, Associate Professor of Human and Organizational Learning at The George Washington University.

Professor Gray began the opening remarks by telling the audience he had no cyber security experience. His expertise is in distance learning, as he is the Director of a MS/MBA distance program in food and agribusiness management. The rise of MOOCs has made information more available than ever before, but Gray notes that merely providing the information is not education. The MS/MBA program offers a curriculum, not just a collection of courses, and requires interaction between students and instructors.

Dean Davidson is in charge of the master’s degree programs offered by the SANS Technology Institute. This is a new offering and they are still working on accreditation. Although it incorporates many of the SANS training courses, it goes beyond those. “The old days of protocol vulnerabilities are starting to go away, but people still need to know the basics,” he said. “Vulnerabilities are going up the stack. We’re at layers 9 and 10 now.” Students need training in legal issues and organizational dynamics in order to become truly effective practitioners.

Professor Dark joined CERIAS without any experience in providing cybersecurity education. In her opening remarks, she talked about the appropriate use of language: “We always talk about the war on defending ourselves, the war on blah. We’re not using the language right. We should reserve ‘professionalization’ for people who deal with a lot of uncertainty and a lot of complexity.” Professor Burley also discussed vocabulary. We need to consider who the cybersecurity workforce actually is. Most cybersecurity professionals are in hybrid roles, so it’s not appropriate to focus on the small number who have roles entirely focused on cybersecurity.

Professor Rogers drew parallels to other professions. Historically, professionals of any type have been developed through training, certification, education, apprenticeship, or some combination of those. In cybersecurity, all of these methods are used. Educators need to consider what a professional in the field should know, and there’s currently no clear-cut answer. How should education respond? “Better than we currently are.” Rogers advocates abandoning the stovepipe approach. Despite talk of being multidisciplinary, programs are often still very traditional. “We need to bring back apprenticeship and mentoring.”

The opening question addressed differences between education and training. Gray reiterated that disseminating information is not necessarily education; education is about changing behavior. Universities tend to focus on theory, but professionalization is about applying that theory. As the talk drifted toward certifications, which are often the result of training, Rogers said “we’re facing the watering-down of certifications. If everybody has a certification, how valuable is it?” Dark launched a tangent when she observed that cybersecurity is in the same space as medicine: there’s so much that practitioners can’t know. This led to a distinction being made (by Spafford, if I recall correctly) between EMTs and brain surgeons as an analogy for various cybersecurity roles. Rogers said we need both. They are different professions, Burley noted, but they both consider themselves professionals.

One member of the audience said we have a great talent pool entering the workforce, but they’re all working on the same problems. How many professionals do we need? Davidson said “we need to change the whole ecosystem.” When the barn is on fire, everyone’s a part of the bucket brigade; nobody has time to design a better barn or better firefighting equipment. Burley pointed out that the NSF’s funding of scholarships in cybersecurity is shifting toward broader areas, not just computer science. This point was reinforced by Spafford’s observation that none of the panelists have their terminal degree in computer science. “If we focus on the job openings that we have right now,” Rogers said, “we’re never going to catch up with the gaps in education.” One of the panelists, in regard to NSF and other efforts, said “you can’t rely on the government to be visionary. You might be able to get the government to fund vision,” but not set it.

The final question was “how do you ensure that ethical hackers do not become unethical hackers?” Rogers said “in education, we don’t just give you knowledge, we give you context to that knowledge.” Burley drew a parallel to the Hippocratic Oath and stressed the importance of socialization and culturalization processes. Davidson said the jobs have to be there as well. “If people get hungry, things change.”

CERIAS Recap: Fireside Chat

Once again, I’ve attended the CERIAS Security Symposium held on the campus of Purdue University. This is one of several posts summarizing the talks I attended.

The end of Christopher Painter’s talk transitioned nicely into the Fireside Chat with Painter and CERIAS Executive Director Gene Spafford. Spafford opened the discussion with a topic he had tried to get the first panel to address: privacy. “Many people view security as the most important thing,” Spafford observed, which results in things like CISPA, which would allow unlimited and unaccountable sharing of data with the government. According to Painter, privacy and security “are not incompatible.” The Obama administration works to ensure civil liberty and privacy protections are built in. Painter also disagreed with Spafford’s assertion that the U.S. is behind Europe in privacy protection. The U.S. and the E.U. want interoperable privacy rules. They’re not going to be identical, but they should work together. Prosecution of cyber attacks, according to Painter, aids privacy in the long run.

An audience member wanted to know how to address the risks of attribution and proportional response now that cyber defense is transitioning from passive to active. Painter noted that vigilante justice is dangerous due to the possibility of misattribution and the risk of escalating the situation. “I don’t advocate a self-help approach,” he said.

Another audience member expressed concern with voluntary standards, observing that compliance is spotty even in regulated industries (e.g. health care). He wondered whether these voluntary international standards were intended as guidance or meant to be effective. Painter said they are intended to set a “standard of care”. Governments will need to set incentives and mechanisms to foster compliance. Spafford pointed out that there are two types of standards: minimum standards and aspirational standards. Standards can also institutionalize bad behavior, so it is important to set the right standards.

Painter had earlier commented that progress has been made structurally. An audience member wondered where the gaps remain. The State Department, according to Painter, is a microcosm of the rest of the Executive Branch. Within State, they’ve done a good job of getting the parts of the agency working well together. They weren’t cooperating operationally as much as they could, but that’s improved, too. Spafford asked about state-level cooperation. 9/11 drove a great deal of state cooperation, and we’re now beginning to see states participate more in cyber efforts.

One member of the audience said “without accountability, you have no rule of law. How do you have accountability on the Internet?” Painter replied that there are two sides to the coin: prevention and response. Response is more difficult. There have been efforts by the FBI and others in the past few years to step up enforcement and response. Spafford pointed out that even if an attack has been traced to another country with good evidence, the local government will sometimes deny it. Can they be held accountable? We have to build the consensus that this is important, said Painter. If you’re outside that consensus you will become isolated. A lot of countries in the developing world are still building capabilities. They want to stop cybercrime, but they can’t. Cybercrime is often used to facilitate traditional crime. That might be a lever to help encourage cooperation from other nations.

Fresh off this morning’s attack on North Korean social media accounts, the audience wanted to hear comments on Anonymous attacking governments. “If you’re doing something that’s a crime,” Painter said, “it’s a crime.” Improving attribution can help prevent or prosecute these attackers. The conversation moved to the classification of information when Spafford observed that some accuse governments of over-classifying information. Painter said that has not been his experience. When people reveal classified information, that damages a lot of efforts. We have to balance speech and protection. The openness of the Internet is key.

Two related questions were asked back to back. The first questioner observed that product manufacturers are good at externalizing the cost of insecurity and asked how producers can be incentivized to produce more secure products. The second question dealt with preventing misuse of technology, with The Onion Router being cited as an example of a program used for both good and bad. Painter said the market for security is increasing, with consumers becoming more willing to pay for security. Industry is looking at how to move security away from the end user in order to make it more transparent. Producers can’t tell how their work will be used, but even when technology is used to obscure attribution, there are other ways to trace criminals (for example, money trails).

One other question asked how we address punishment online. Painter said judges have discretion in sentencing and U.S. sentencing laws are “generally pretty rational.” The penalties for crimes in cyberspace are generally tied to the penalties for their analogues in the physical world. In seeming contradiction, Spafford pointed out that almost everything in the Computer Fraud and Abuse Act is a felony and asked Painter whether there is room for more misdemeanor offenses in federal law. Painter said there are misdemeanor offenses in state and local laws. Generally, Spafford said, policymakers need a better understanding of tech, but tech people need a better understanding of law.

There were other aspects of this discussion that I struggle to summarize (especially given the lengthy nature of this post). I do think this was the most interesting session of the entire symposium, at least for me. I’ve recently found my interest in law and policy increasing, and I lament the fact that I’ve nearly completed my master’s degree at this point. I actually caught myself thinking about a PhD this morning, which is an absolutely unnecessary idea at this stage in my life.
