Bugs are always going to happen. This is not because software developers are particularly shoddy (though I'm sure a minority are), but because writing secure applications is genuinely hard. As Turing Award laureate Edsger Dijkstra famously observed, “testing shows the presence, not the absence of bugs”, and vendors have a clear incentive to release products as soon as possible, cutting laborious testing processes short. We now live in societies that critically rely on these vulnerable systems, so security professionals must develop patches to plug these holes.
Traditionally, “zero disclosure” was the norm. Following the unwise tenet of “security through obscurity”, vendors believed that their software would never be exploited if its weaknesses were kept hidden. So if a security researcher uncovered such a bug and contacted the developers, they would receive a stock reply and little else. After all, if only one person knows about the vulnerability, why should the company devote valuable resources to fixing it?
This approach became more litigious as vendors tried to silence researchers through the courts. Publishing bugs, the argument went, handed information to malicious attackers, thereby placing the vendors in jeopardy. However, publication also assisted security professionals, who could develop their own patches and further their knowledge of a fast-moving field. Moreover, any benefit from zero disclosure rested on the assumption that attackers did not already know about the vulnerability. All this stance created was misaligned incentives for software vendors, who often promised fixes but had no reason to follow through.
This led to “responsible disclosure”: a title that should not be taken wholly at face value. In this case, researchers contact the vendor first when a vulnerability is found. Then, after a reasonable amount of time, the researcher publishes the bug to ensure the developers stay incentivised to release a patch. It appears that everyone wins: vendors get a helpful head-start before publication, security professionals receive acknowledgement for their work, and the public benefit from improved software. This in turn led to the idea of “bug bounties”, offered by tech giants such as Google and Facebook, where researchers are paid for finding critical vulnerabilities and informing the developers. By crowdsourcing bug detection, these vendors get relatively cheap improvements to security, whilst contributors receive acknowledgement and financial reward.
However, it is the scale of this financial reward that continues to present misaligned incentives. Cyberspace has a thriving black market for vulnerabilities and software exploits, since the compromise of widely-used systems can lead to considerable financial gain. It is not uncommon to find zero-day bugs selling for $200,000, because such an exploit is virtually guaranteed to succeed. Beyond trusting one's conscience (generally a naive mistake), why would a talented researcher sell their vulnerability to Google for $20,000 when they could make ten times more in the darker corners of the Internet? Why wouldn't they sell it twice, with neither the attackers nor the vendors knowing of the other transaction? We all depend on software in our daily lives, and it is usually us who feel the consequences of a major breach. When tech giants make net incomes in excess of $4bn a quarter, shouldn't a critical vulnerability be rewarded more generously?
The most extreme approach is “full disclosure”. In this case, the researcher publishes the vulnerability online without notifying the vendor first. This forces the developer's hand: because malicious actors are now aware of the bug, work on a fix must begin immediately. In some cases, proof-of-concept exploits are also published, meaning that any script-kiddie can download the source and begin targeting the software. Clearly, vendors dislike this approach, and I do have sympathy with their situation. Many researchers subscribe to the hacker ethic that “information should be free”; indeed, it virtually sounds like a fundamental right. If a researcher discloses a bug to a vendor and the developer does nothing, then both parties are aware that the software, and those using it, are vulnerable. Under freedom of speech, advertising this fact to the public is not a crime: it is like shouting through a megaphone that your neighbour's door is unlocked. The distribution of exploit tools is different, however. This is like first proclaiming the unlocked door and then handing out copied keys so thieves can break in. Whilst the moral high-ground can be argued in First Amendment cases, it breaks down when you actively publish exploits.
The key issue here is the term “responsible”, as in “responsible disclosure”. It implies a responsibility from the researcher to the vendor in disclosing the vulnerability. Why would they have this responsibility? Whilst bugs are clearly difficult to locate, it is the developers who released the insecure software, and researchers have no explicit obligation to notify them before publishing their work. The true responsibility that security professionals have is to improve the state of the field, whether by furthering cybersecurity knowledge or by fixing insecure implementations. If a researcher knows that a vendor won't fix their software, leaving millions of citizens vulnerable to attack, don't they have a responsibility to publish?
I also have some sympathy with software houses. It is easy to look at multi-billion-dollar tech giants and accuse them of insecurity, when we all know how challenging cybersecurity actually is. Companies have deadlines, milestones and roadmaps: this is why Microsoft schedules “Update Tuesday” on the second Tuesday of each month. Vulnerabilities are not trivial to fix, and knowing about them doesn't mean the changes are simple. Critical bugs might require architectural alterations, demanding hundreds of man-hours of effort and extensive testing before a patch can be released. Full disclosure drops a vulnerability into the developer's lap without warning and expects an immediate fix. Tech giants employ some of the most talented programmers in the world, and security researchers know how difficult it is to make an implementation bullet-proof. It is disingenuous to expect developers to patch a bug before nimble attackers capitalise on it, particularly when researchers know the complexity of the code. Releasing a proof-of-concept attack tool borders on blackmail: it forces a company to fix its software immediately or be attacked. If it takes you several days to find a bug and write exploit code, you cannot expect a patch overnight.
Truly “responsible” disclosure therefore needs to conform to the following points:
- The vendor should be contacted before release and given ample time to fix the bug.
- The vulnerability should then be published online to ensure the vendor follows through on its promises.
- Proof-of-concept exploits should not be published: talented attackers can develop these tools themselves, and there is no justification for simply lowering the bar for script-kiddies.
- “Bug bounty” rewards should be increased to incentivise researchers to act morally.
Cybersecurity is a rapidly growing field and more important now than ever before. If we wish to live in a secure future, we need more security professionals finding more vulnerabilities to produce more secure software. Researchers deserve recognition for finding the bugs that the developers left in: in what other industry could a company release unsafe products and then refuse to fix them when informed by experts? Incentives need to be correctly aligned to ensure that professionals search for bugs, find them, and then report them appropriately, to the benefit of the public rather than attackers.