I was going to spend some time writing a post on insecure open source code. However, I don’t have much time today and I wanted to post something. Cue discovery of Ben Chelf’s blog on insecurity in open source.
The quote that I liked is this:
Of the more than 150 open-source and proprietary software applications that we have analyzed in this study, closed-source software code grabbed 11 of the top 15 spots for the highest quality and security.
All source code has bugs. If I'm employed as a software engineer to find those bugs (and these days there are plenty of static code analysis tools to use as a starting point), I am incentivised to find, report and fix them.
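To make the static-analysis point concrete, here is a minimal sketch of what such a tool does under the hood. It assumes Python and its standard `ast` module, and implements a single illustrative rule (flagging calls to `eval`, a classic injection risk); commercial analyzers apply thousands of such checks across whole codebases.

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of eval() calls found in the source.

    A toy static-analysis rule: parse the code into an abstract
    syntax tree and walk it looking for direct calls to eval().
    """
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

code = "x = 1\ny = eval(input())\n"
print(find_eval_calls(code))  # → [2]
```

The point is not the specific rule but the workflow: the analysis runs over source without executing it, which is exactly why having (or not having) the source matters so much to both defenders and attackers.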
In many open source projects there is only a small core of contributors, and probably an even smaller core who understand the overall system. Maybe one person is looking after security. The standard response is that open source is, by definition, peer-reviewed. The problem is that, in practice, it often isn't.
Flaws in openssl lay undiscovered for two years. Or did they? A real attacker is not going to perform responsible disclosure on a bug; they will exploit it for as long as it remains there. What open source usually means is that an internet-based post-mortem can take place afterwards.
The problem for open source is that the blueprints are laid open. If I'm an attacker looking for a way in, it's like being a burglar who can case the inside of a building, right through to the safe, without ever being challenged by security guards or employees.
For closed software, an attacker in many cases has to go through an extensive debugging and decompilation process, which raises the barrier to attack in terms of effort, cost and time. There is something to be said for closed-source software: one of the few tools a defender has in the toolbox is secrecy. Used responsibly, alongside software quality and secure coding, it significantly raises the bar for attackers. This is probably also a good time to mention the SAFEcode forum, who promote secure coding practices.
Obscurity and secrecy are defensive tools
Bruce Schneier's "no security through obscurity" principle has misled people with its black-and-white view. It really refers to hiding an insecure system by not telling people it has no security. That is different from using obscurity and secrecy as additional mechanisms to help defend an otherwise secure system. Unfortunately, the vocal parts of the open source community rarely allow this side of the argument to be heard. Ben Chelf's article was written in 2006 and its points still hold true.
What's the mobile angle? Well, perhaps the mobile industry is sitting on a ticking time-bomb. With increasing mobile convergence and interconnected systems built on open source, the deeper the root cause of a flaw, the more systems can be exploited through it. Perhaps someone has found such a flaw already?
I recommend reading http://www.oss-watch.ac.uk/resources/securityintro.xml which I think covers the more important points: market size and ability to engage your own security process.
Ha. That will raise some comments… You're right.

Exercise: Go to CVE and look at the pattern of vulnerability reports for two broadly similar products (say Thunderbird and Outlook). What can we conclude? If my objective is to minimise the likelihood of being exploited via an unreported/unpatched vulnerability, rather than absolute minimisation of the vulnerability count, which product should I choose?

[note to the reader: if you want to fund research in answering this question… get in touch 🙂 🙂 ]