Firstly, the advantages. Eric S. Raymond's magnum opus, The Cathedral and the Bazaar, claims that “given enough eyeballs, all bugs are shallow”. Termed Linus's Law, the reasoning is that because open-source projects have dozens of people scrutinising and improving each other's work, software vulnerabilities will be found and removed. Furthermore, opening source code to the public avoids the tendency towards “security by obscurity”, where the secrecy of the code is the only factor preventing a successful attack. At face value, hiding the design of a program might appear rational, but such complacency leads to products which resist attacks only through attacker ignorance. Once the keys to the castle are reverse-engineered, all bets are off. We often see vendors release proprietary products claimed to be secure, only for the implementation to prove flawed once the smoke and mirrors are stripped away.
Arguably, those dedicated to open-source projects are devoted to programming, and are therefore better-quality developers. You would expect individuals who work all day before returning home to revel in late-night development to have a greater passion for their projects, especially once high-pressure deadlines and a manager looking over their shoulder are removed. Programming purely for joy and reputation, open-source developers likely put greater effort into their work, as people tend to do when they love what they do. This added care and attention surely benefits software security, as bugs can be hunted at leisure, free from the pressures of a corporate environment.
Within the proprietary world, we must wait until the second Tuesday of the month for updates. In the open-source community, patches can be released continually for those who wish to receive them. Once a dangerous vulnerability is detected it can be removed immediately, with users experiencing the benefit straight away rather than waiting in limbo for several weeks. With corporate bureaucracy and company deadlines out of the picture, any contributor who spots an issue can submit a fix. This is in stark contrast to the commercial world, where security professionals must pressure vendors into releasing patches through the threat of vulnerability disclosure. Even if alterations do not proceed the way you wish, you still have an option: fork the project. Many secure product variations have been developed this way, such as TrustedBSD, which attracts skilled developers to its cause.
There is no financial or commercial motivation, and whilst this can be viewed negatively (which I will cover later), more altruistic motives might lead to improved security. Programmers aren't developing because their boss tells them to, and aren't pouring hours of their lives into problems they don't care about solving. Open-source developers code because they want to code, and are free to align with the projects they find most important and interesting. Those concerned with security are therefore likely to gravitate towards OpenSSL, whilst compiler junkies (I bow to your superior skills) might target GCC. There is a worldwide pool of these volunteers: anyone can join, in contrast to companies where only a select few employees can contribute. In an atmosphere with reduced internal competition, there is a greater base of shared knowledge, as developers compete against a single opponent: closed-source products. The best security minds at Microsoft, Google and Facebook are surely brilliant, but imagine if they pooled their knowledge into a single project.
To avoid sounding like an open-source evangelist, I have also considered many factors which might lead to proprietary products being more secure. Despite the “all bugs are shallow” mantra, there are plenty of counter-examples of open-source products harbouring dangerous vulnerabilities for years before discovery. OpenSSL gave us both Heartbleed, the most famous bug of 2014, and a serious random-number-generator flaw introduced by a well-meaning Debian contributor. In the latter case, the developer noticed a couple of strange lines of code and decided to comment them out, not understanding the consequences. Unfortunately this destroyed almost all of the entropy feeding the cryptographic library, leaving the process ID as effectively the only source of randomness and reducing the keyspace to little more than 32,000 possibilities. With many developers contributing to a project, it can be difficult to manage responsibility; each contributor might assume that someone else will check their work.
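To make the scale of that failure concrete, here is a minimal sketch in Python (rather than OpenSSL's C) of what happens when the process ID becomes the sole seed of a key generator. The broken_keygen helper and the 2^15 PID limit are illustrative assumptions for the sake of the sketch, not the actual Debian code:

```python
import random

# A toy illustration, *not* OpenSSL's real implementation: if the process
# ID is the only entropy fed into a PRNG, every key it can ever emit is
# fully determined by that PID. Linux PIDs traditionally range over
# 0-32767 (2^15 values), so at most ~32,768 distinct keys can exist.
PID_MAX = 2 ** 15  # assumed traditional pid_max

def broken_keygen(pid: int, nbytes: int = 16) -> bytes:
    """Hypothetical key generator whose only entropy source is the PID."""
    rng = random.Random(pid)   # seeded solely by the process ID
    return rng.randbytes(nbytes)

# An attacker can enumerate the entire keyspace in seconds:
all_keys = {broken_keygen(pid) for pid in range(PID_MAX)}
print(len(all_keys))  # 32768 -- the whole "random" keyspace
```

By contrast, a properly seeded 128-bit key has roughly 3.4 × 10^38 possibilities; a couple of deleted lines turned a sound design into a brute-forceable one.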
Although the guardian of GPG has received a recent windfall, many open-source projects suffer a slow death once support dries up. Millions of individuals around the world rely on secure open-source implementations managed by a handful of volunteers, and once a project dies there is no-one left to issue updates. Enterprises generally gravitate towards proprietary products for the after-sales support as much as the software itself, mitigating their risk by having someone else to blame if everything goes wrong. Although the initial investment in open-source is minimal, IT managers don't want to be left defenceless in five years' time when the contributors have moved on to new, sexier projects.
Although altruistic motivations were mentioned as a benefit, cynicism can lead one to believe that you get what you pay for. Large teams of highly-paid developers have access both to the best tools and programming environments and to a layer of architects who inspect the overall design of the implementation. Competition drives innovation and process improvement, with proprietary vendors looking to out-gun their rivals with additional functionality. Talent, unfortunately, gravitates to where the money is: why would a highly-skilled developer contribute to OpenSSL for nothing when they could be making big bucks on a closed-source alternative? Even agencies like the FBI face this issue, watching high-quality security professionals flock to the West Coast for impressive salaries. Perhaps a hobbyist attitude leads to amateur security?
The aforementioned The Cathedral and the Bazaar, an essential read in my opinion, looks to dispel the myth of individual wizards working in “blissful isolation” towards a marvellous work. Whilst most inventions are indeed made by R&D departments rather than wizened professors in their attics, at least small bands of contributors can ensure their work is aligned. Open-source products are a digital patchwork quilt, with differing stitches laid by dozens of volunteers around the globe. Maintaining a single project vision, let alone a consistent coding style, is a pipe dream in such an environment. Whilst one contributor might possess extensive knowledge of cryptographic implementations, you cannot be sure there isn't another behind them, inadvertently removing their stitches. Variations in programming ability, preferred technologies and security knowledge can lead to a mishmash where the final product is “designed by committee” and meets none of the contributors' expectations. Whereas entry requirements and interviews ensure that security professionals at the top of industry meet a minimum standard, no such filter applies when anyone can join in without proof of ability.
Within the security field, we are coming to terms with the fact that technology is not the whole answer: humans are generally the weakest link. Products must therefore be usable if they are to be operated securely; poor interfaces lead to end-user confusion. This is where open-source projects face a big challenge: proprietary applications generally just look “better”. At the risk of circularity, “better” generally means “standard” or “familiar”, and proprietary vendors like Microsoft are clearly good at making their software look like Microsoft products. When the majority of home users run Windows or Mac OS, anything that deviates from that appearance causes confusion and, therefore, a degree of insecurity. Furthermore, software installation poses a huge hurdle: should end-users be required to compile their own binaries and run command-line tools just to use cyberspace securely? As presented in Why Johnny Can't Encrypt, a seminal paper on security usability, average users cannot correctly operate the software designed for them: Whitten and Tygar found that only one-third of their participants could use PGP to sign and encrypt an email within 90 minutes. Whilst proprietary systems suffer many of the same usability issues as open-source alternatives, they benefit from familiarity. A secure product used incorrectly offers no assurance of security.
Not wishing to sit on the fence, I have a few closing comments. Skilled developers can be found everywhere, whether within industry or coding nightly from their bedrooms. Similarly, poor programmers are widespread: popular wisdom states that the best developers in a company are ten times as productive as the worst. Furthermore, being a good software engineer should not be confused with being a skilled security practitioner: security is the mindset of looking at how things break, not how they can be made to work.
Whether developing open-source or closed-source projects, one thing is essential: experience. A guru who programs all day at work and then returns home to hack on their Raspberry Pi can be expected to outperform someone who develops purely for their job. In the same manner, an individual who successfully breaks others' cryptosystems and keeps abreast of security developments is likely to perform better than one who hunts vulnerabilities strictly from 9 to 5. Software is only as good, or as secure, as the developers who make it. Regardless of whether an open-source or proprietary application is being developed, it is the collective knowledge of the contributors which defines whether it will be secure or riddled with bugs.