On learning, growth, and trust

Here are two separate ideas about programming, Internet security, Internet architecture, and free software. Both of them are fundamental to everything those of us who work on free software are doing.

  1. Writing secure and reliable code is a highly complex and demanding task. It's something that one has to learn, like any other skilled profession. It's not something we're very good at teaching via any mechanism other than apprenticeship and experimentation. The field is changing quickly; if you took ten years off from writing security-critical code, you would expect to have to learn multiple new tools (static analysis, testing techniques, security features), possibly new programming languages, and not infrequently new ways of thinking about threat models and vulnerabilities.

  2. Nearly every computer user trusts tens of thousands of other programmers and millions of lines of code with their day-to-day computer security and reliability, without auditing that code themselves. Even if you have the skill required to audit the code (and very, very few people have the skill to audit all of the code that they use), you do not have the time. Therefore, our computer security is built on trust. We're trusting other programmers not to be malicious, which is obvious, but we're also trusting other programmers to be highly skilled, careful, cautious (but still fast and productive, since we quickly abandon software that isn't "actively developed"), and to constantly adopt new techniques and threat models.

I think both of those principles are very widely understood and widely acknowledged. And, as we all know, both of those principles have failure modes, and those failures mean that our computers are nowhere near as secure as we would like them to be.

There has been quite a lot of technical discussion about both of these principles in recent days and months, ranging from better code analysis and testing through the flaws in particular programming languages to serious questions about the trust model we use for verifying the code that we're running. I'm going to switch gears away from that discussion for a moment to talk about a social aspect.

When a piece of code that one is using has a security vulnerability, it is not unusual to treat that as a trust failure. In other words, technical flaws are very quickly escalated to social flaws. This is less likely among practitioners, since we're all painfully aware of how many bugs we have in our own code, but it's still not unheard of, particularly if the authors of the code do not immediately and publicly show a socially-acceptable level of humility and contrition (in the middle of what is often a horrifically stressful experience).

I think this reaction comes largely from fear. Anyone who has given it a moment's thought is painfully aware of just how much code they are running on trust, and just how many ways it could be compromised, accidentally or maliciously. And how frequently that code is compromised. The less control one has over a situation, the more terrifying it is, and the cold reality is that we have very little control over our computer security. There is a natural tendency, when afraid, to look for targets for that fear.

Now, go back and think about the first principle.

Anyone who has ever worked on or near free software is painfully aware that we have far more good ideas about how to improve computing and security than we have skilled resources to execute on those ideas. My own backlog of things that I've already thought about, know would be good ideas, and simply have to implement is probably longer than my remaining lifespan. I suspect that's the case for nearly all of us.

In other words, we have a severe shortage of programmers who care and who have skill. We desperately need more skilled programmers who can write secure and reliable code. We may (and I think do) also need better tools, techniques, languages, protocols, and all the other machinery that we, as technical people, spend most of our time thinking about. But none of that changes the fact that we need more skilled people. In fact, it makes that need more acute: in addition to skilled people to write the code we use, we need skilled people to write the tools.

Skilled people are not born. They're made. And in professions where training techniques are still in their infancy and where we don't have a good formal grasp on which techniques work, those skilled people are made primarily through apprenticeship, experimentation, and learning from failure.

Worse, people who were skilled do not remain skilled without continually participating in that learning process. See the above point about a fast-changing field with evolving best practices. It's not enough to know how to write secure code to the best known practices today, or even to retrofit all of your existing code to current knowledge (which is often so large an effort as to be practically impossible). You have to constantly, continually learn more, for which there is no reliable formal training.

We have to try, fail, try again, and fail better.

But failure that leads to a security vulnerability is treated as a loss of trust. We trusted that person to write secure code that we could use. They failed. Now we can't trust them. Based on the trust model of security, we should revoke their ability to try again and instead rely on people who have not failed, since that will make us more secure.

Except now we just broke the learning process. And there's no such thing as a programmer who can stop learning. So what does that do to our resource pool?

It's sadly ironic, but I believe the free software community writ large has a very serious collaboration problem: we do not tolerate each other's learning processes. This leads to a wide variety of social failures around hostile communities and the failures of meritocracy that other people have talked about at much greater length. But even if you set that all aside, it threatens our security. We need secure code, a lot of it, a lot more than we have right now. To get that code, we need people who can write it. We need to grow, encourage, and support those people and enable their learning processes.

Code is written by people. If we rip people apart when they write bad, insecure code, we don't get better, secure code. We get fewer people writing security code. We get far fewer people writing security code in public, since some of the people who haven't been ripped apart will look at that experience and say, "No way am I going near that problem area. It's too scary. I don't want to end up like those programmers."

Fewer people writing security code means fewer people learning how to write better security code.

Fewer people capable of writing good, secure code is not a solution to any of our problems.

If we do not tolerate, support, and encourage the learning process required to become a skilled programmer, or to maintain one's skill as a programmer, we are destroying our future as a community.

When you find code that is broken and badly written, you have found a problem that should be reported, analyzed, and corrected. You have also found a programmer who is about to have one of two very different types of experiences. Either they are about to learn how to become a better programmer, or they are about to be publicly shamed, humiliated, and treated as untrustworthy. Which branch they take is partly up to them, but it's also heavily influenced by how all of us react socially to the discovery of bad code.

One of those branches leads to more good, secure code being written in the future. The other does not.

Posted: 2014-04-23 15:55