<![CDATA[Meredydd Williams - Blog]]>Fri, 22 Jan 2016 19:57:51 -0800Weebly<![CDATA[Can we ever be at CyberPeace?]]>Tue, 28 Apr 2015 18:56:13 GMThttp://www.meredydd-williams.com/blog/can-we-ever-be-at-cyberpeace
Source: http://bit.ly/1IkgFFk, modified. Creative Commons (CC-BY-2.0).
We frequently hear talk of cyberwar: a catchy term which certainly makes cybersecurity more interesting for those outside our domain. As more critical infrastructure moves online, remote access unfortunately improves for both the good guys and unauthorised assailants. We have all heard the movie-plot scripts: perhaps the hackers decide to trigger a nuclear meltdown, turn off the power grid, or release dam-water upon an unsuspecting village. The press often talk of the Chinese attacking the Americans or the US infiltrating the Germans, and one could be forgiven for believing that cyberwar is upon us. It is, but only if you count every network intrusion as an act of war.

Cyberwar is often defined in very broad terms, and unfortunately it tends to hinge more on the parties involved than the action itself. If a bored teenager hacks his mum's WiFi then it is barely an offence, though draconian sentencing under the US Computer Fraud & Abuse Act looks to change this. If a shady character breaks into a pensioner's online banking account then this is cybercrime, and the criminal, if ever caught, would be punished to the full extent of the law. However, if patriotic Chinese hackers infiltrate a US Department of Defense machine, this is suddenly cyberwar. Actions which would otherwise be deemed no more than vandalism (website defacement) or protest (denial-of-service) are viewed very differently if the perpetrators are from rival states. Imagine that US government computers were infiltrated by civilians in the UK or Australia: it would all be put down to meddlesome kids. Change their nationality to Russian, and suddenly you have an unofficial cyber-army firing digital ammunition.

Can we truly be at peace in the digital realm? If definitions are this broad, then clearly not. Malware has existed since the 1980s and is only increasing in complexity, whilst botnets grow ever larger despite our best efforts. Many countries, including the US, Israel and the UK, have developed sophisticated offensive capabilities and would not be willing to sacrifice an advantage. Less-developed nations such as North Korea see ubiquitous American Internet access as an open target, looking to capitalise on gaping security holes to compete on a global scale. Funding continues to favour offensive security over defence, as governments stockpile weapons in an escalatory arms race. Whilst international agreements to limit the damage to cyberspace have previously been considered, many large powers lack the political incentive to enter into such a deal, even before considering the technological infeasibility of enforcing one.

New technologies change warfare forever, and it is impossible to simply reverse these developments. The invention of the trebuchet allowed stone castles to be assaulted from a distance, reducing the advantage of a large fortification. Aerial warfare then rendered military strongholds largely irrelevant, as planes could drop munitions from above. The development of nuclear weapons irrevocably changed wartime strategy and split the world in two for half a century. Now, digital attacks allow aggressors to target a geographically remote location at low cost, with low risk, virtually instantaneously. Regardless of rhetoric and political posturing, no state would wish to sacrifice that ability.

Whether we, the general public, can be at "cyberpeace" depends on how integrated future technology becomes in our everyday lives. Judging from the growth of the Internet of Things, driverless cars, and blanket web connectivity, it seems unlikely. Good, sensible cybersecurity is the only option if one wants to enjoy the future benefits of cyberspace without suffering the costs.
]]>
<![CDATA[Corporations are the new states]]>Mon, 13 Apr 2015 16:04:19 GMThttp://www.meredydd-williams.com/blog/corporations-are-the-new-states
Source: http://bit.ly/1NyogEC, Creative Commons (CC-BY 2.0).
Curiously, the US government has recently refused to allow Intel to help China update the world’s biggest supercomputer. The Tianhe-2 machine has sat atop the global chart since June 2013 and boasts a performance of 33.86 petaflops, though clearly this does not sit well with the US. Apparently the Department of Commerce was worried about nuclear research being undertaken on the supercomputer, but with China being an NPT-designated nuclear weapon state (along with the US, UK, France and Russia), it is unclear why this would be the case.

Interestingly, Intel has recently agreed to partner with the US in building a supercomputer of their own in a £136m agreement. Whilst Intel are regarded as an American multinational corporation, with listed headquarters in California, this does raise questions over a private entity’s allegiance to its host country. Whilst most agree that companies should pay a fair level of tax to the nations they operate in, should these corporations have to conform to a state’s foreign policy when a significant proportion of their shareholders might be from other countries? To use an extreme example, if a company with UK headquarters was 97% owned by a French consortium, is it really British or French?

It increasingly appears that the host country is an anachronism in this modern age, where companies can provide services online, distribute goods worldwide, and enter and exit regional markets with relative ease. Alongside nation states, corporations such as Coca-Cola and McDonald’s vie in the international arena, often possessing wealth greater than that of small countries. Apple reached a record market capitalisation of nearly $775bn in February 2015, with as much cash on hand as New Zealand. These entities are focused solely on maximising profit and increasing their market share, regardless of where they operate.

For now, nation states can still choose where their biggest actors do business, but private entities, particularly in the technology sector, are becoming increasingly powerful on the world stage.
]]>
<![CDATA[The UK election and the need to fight for liberty]]>Thu, 09 Apr 2015 09:54:51 GMThttp://www.meredydd-williams.com/blog/the-uk-election-and-the-need-to-fight-for-liberty
Source: http://bit.ly/1NdKYBK, Creative Commons (CC-BY 2.0).
The whole of the UK, myself included, seems to be going mad over election season. In only 27 days, the British public will decide who will lead them for the next five years, save for any back-room deals resulting from this likely hung parliament. Whilst the main political parties are keen to stress the issues they see as most important (the economy, immigration, and the NHS), little thought appears to be given to technology. Financial stability and healthcare are vitally important for the 64 million Britons, but almost two-fifths of the population are under 30 and technology deserves greater discussion. We are constantly told of the importance of the young vote, and how disappointing it is that teenagers are disengaged from and disillusioned with politics. Following a term of office which has seen the Snowden revelations, online piracy battles, and the return of Islamist extremism to social media, voters would benefit from seeing where parties stand on these key issues.

During the coalition government, we have seen Home Secretary Theresa May speak out numerous times on the importance of national security and how our Internet must support this. We have also heard David Cameron calling for tech giants to “do more” to assist law enforcement, a clear vote-winner. However, beneath the laudable rhetoric lies a hidden risk to our liberties, clearly jeopardised by our “if you have nothing to hide, you have nothing to fear” society. Whilst knee-jerk reactions can be expected in the corridors of power, especially when countering terrorism and paedophilia, we have to be careful we aren’t giving away too much too easily.

Tech companies have responsibilities to the societies they operate in, and that is why we have laws and regulations. When governments wish to change the terms and conditions within which these firms practise, legislation is scrutinised in the House of Commons and voted on in democratic fashion. In this way, the will of the country is represented: if a majority of publicly-elected MPs favour an amendment then one would hope this is for the good of the nation, party politics aside. However, if laws do not dictate an action, it is improper for ministers to bully private companies into complying with their demands. This is particularly true when it pertains to sharing citizens’ personal data with intelligence agencies, regardless of whether this is in the name of national security. If you want more data then propose a bill, so that the country can decide rather than a cabal of government ministers.

Rhetoric and reality often diverge: the Prime Minister has previously considered bans on online encryption and the Conservatives have promised to age-restrict pornographic sites if they win on May 7th. Both measures would be technologically infeasible, with tools such as foreign VPN services offering simple circumvention. In opposition, Ed Miliband might commit to an ethical approach to privacy in his upcoming manifesto, but Labour didn’t have the best record whilst in government, introducing and updating RIPA in 2000, 2003, 2005, 2006 and early 2010. The Lib Dems fortunately appear to be contesting abuses of surveillance power, and hopefully the Green Party would take a similar stance. Worryingly, the rise of UKIP could signify a step in the opposite direction, with their populist-but-popular policies likely to favour blunt law enforcement over public liberty.

National security is of great importance, and this should never be understated, especially in the current context of increased Russian nationalism and conflict in Iraq and Syria. However, British citizens have fought for centuries for the rights we have today, and these rights shouldn’t be discarded through fear and confusion. Politicians are increasingly using Twitter and social media to present themselves to the electorate, but few truly understand technology or possess the subtlety required for these new challenges. Whilst we don’t expect a cabinet full of computer scientists, greater representation from technically-literate MPs in the House of Commons would be welcome. As kids today grow up with ubiquitous Internet and 64GB iPads, hopefully this change will occur naturally over time.

Whoever you vote for on May 7th, don’t be fooled by slick rhetoric, whether on technology or any other issue. Read the manifestos. Think critically about the feasibility of election promises. Consider where funding will come from, and don’t expect billions to appear from increasing departmental efficiency. Engage with your local candidates; after all, they are the ones that represent your views in front of the country. And don’t be quick to make a decision, but challenge your own views and assumptions. Every major party is looking to get the best sound-bite, but good presentation does not equal good policy.

And for my personal, and rather unlikely, election prediction. The Conservatives will narrowly win and rely on a confidence-and-supply deal with the DUP and possibly another small party. Following this, Labour and the SNP will pass a Motion of No Confidence at the Queen’s Speech, and then proceed to govern awkwardly through another unofficial alliance. The Lib Dems will lose seats but not drastically, the SNP will colour Scotland yellow, the Greens will make modest gains, and UKIP will suffer at the hands of the First-Past-The-Post system and gain surprisingly few seats. Possibly...
]]>
<![CDATA[resistance is futile]]>Wed, 25 Mar 2015 18:05:03 GMThttp://www.meredydd-williams.com/blog/resistance-is-futile
Source: http://bit.ly/1CsD5NA, Creative Commons (CC-BY 2.0).
With technology come great opportunities: opportunities to undertake tasks that might have seemed impossible less than a decade ago. However, in embracing technology we also encounter the dangers that accompany it. Now as never before, we inhabit a world where virtually everything around us is “smart”, from our phones to our watches to our television sets. And as “the Internet of Things” continues to grow, this trend will not reverse.

If your PC was hit by a virus in the early 90s, this would have had a limited impact on your life. Firstly, your personal computer probably wasn't coordinating your daily activities, or proving integral to your communications with others. Secondly, “ancient” malware was often quirky and light-hearted. Sure, it might cause your CD drive to repeatedly open, or even delete a few files, but it wouldn't drain your bank account. Thirdly, once you left your study, computers were out of your thoughts. However, now that our lives are lived in cyberspace, there is little sanctuary from the dangers that lurk online.

Unfortunately, if an attacker wants to steal your information, they will. Whilst we may follow practical advice to install software patches, update our anti-virus programs and delete suspicious emails, well-funded actors will always find their way in. Depressing as this is, please let me explain.

Your PC might be fully updated and running the finest anti-virus software money can buy, but these applications can rarely protect against the unknown. Zero-day vulnerabilities have, by definition, never been seen before, and therefore no attack signature exists to trigger protective software. Think of it like inoculation: no matter how many vaccinations you receive, you're unlikely to be immune to a brand-new disease. More worryingly, you might already have been infected without your knowledge, with attackers currently leeching off your system. Rootkits sit deep within the machine, often evading detection for months as they perform their nefarious deeds. In some cases even wiping your operating system might be ineffective: dangerous new firmware hacks embed code within the hard-drive itself, meaning the malware can survive re-installation and continue business as usual. Scary stuff.
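To make that limitation concrete, here is a minimal sketch, purely my own illustration rather than how any real anti-virus product is built (the hash database and file extension are placeholders): a file is flagged only if its fingerprint already sits in a database of known-bad samples, so a genuinely new sample sails straight through.

```python
# Minimal sketch of why signature-based detection misses zero-days.
# The "known bad" database and the *.exe glob are illustrative placeholders.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,   # stand-in for the hash of a previously analysed malware sample
}

def scan(path):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    # A brand-new (zero-day) sample has no entry in the database, so this
    # check can only ever answer "no known signature", never "malicious".
    return "malicious" if digest in KNOWN_BAD_SHA256 else "no known signature"

if __name__ == "__main__":
    for candidate in Path(".").glob("*.exe"):
        print(candidate, scan(candidate))
```

Real products layer heuristics and behavioural monitoring on top, but the underlying limitation stands: you cannot match a signature you have never seen.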

Of course you could disconnect your computer from surrounding networks, attempting to isolate it from all this malicious intent. “Air-gapped” systems are frequently used in both the military and heavy industry to ensure viruses do not compromise the most critical of operations. However, there is always another channel; often a USB stick offering a simple jump from an already-compromised office network to a fresh target. Autorun, built into systems since Windows 95, meant that malicious code could execute automatically as soon as the storage device was inserted. Although this clearly insecure behaviour was remediated in later versions, researchers have found novel ways to smuggle dangerous software across the gap.

Even if you're too cautious to plug a USB stick into your beloved machine, there are dozens of side-channel attacks that could convey your information. A recent paper explained how an air-gapped PC, already infected with malware, could communicate with a neighbouring infected machine. The method of communication was simple: increasing and decreasing temperature to signal patterns of 0s and 1s. Of course this does require prior compromise, and having a connected and a disconnected computer side by side, but neither of these situations is rare. Machines can remain infected for years without their owner's knowledge, and sensitive environments often see trusted and untrusted machines sitting adjacent to improve employee efficiency.
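The signalling idea is simple enough to sketch. The toy below is my own illustration, with an invented threshold and simulated sensor readings rather than real thermal data: bits are sent by holding the machine hot or cool for a fixed interval and recovered by thresholding the receiver's samples, which is the essence of the reported technique (and also why its bandwidth is a handful of bits per hour, not megabits per second).

```python
# Toy model of a thermal covert channel: a "1" is signalled by heating up,
# a "0" by idling, and the receiver thresholds its temperature samples.
# The threshold, interval and readings are invented for illustration only.
HOT_THRESHOLD_C = 55.0   # above this average, the receiver records a 1
SAMPLES_PER_BIT = 3      # sensor readings taken per signalling interval

def encode(bits):
    """Simulate the temperatures a neighbouring sensor would observe."""
    readings = []
    for bit in bits:
        target = 60.0 if bit == "1" else 45.0   # heat up for 1, idle for 0
        readings.extend([target] * SAMPLES_PER_BIT)
    return readings

def decode(readings):
    bits = ""
    for i in range(0, len(readings), SAMPLES_PER_BIT):
        window = readings[i:i + SAMPLES_PER_BIT]
        bits += "1" if sum(window) / len(window) > HOT_THRESHOLD_C else "0"
    return bits

if __name__ == "__main__":
    message = "1011"
    assert decode(encode(message)) == message
    print("recovered:", decode(encode(message)))
```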

After hearing all these risks, you might think it sensible to lock away your computer. A modern smartphone essentially possesses the same functionality, so perhaps these are too great a risk as well. But surely a retro 90s mobile phone would offer protection, especially given its lack of functionality? Think again. The Gemalto SIM card hack showed how exposed we all are to interception, even though later reports clarified that keys were not stolen.

Now I don't wish to provoke paranoia; the spooks are not all watching us constantly, and Chinese hackers probably aren't draining your PC of data as I write. The moral is that if people want your information badly enough, they will find a way to get it. Modern systems are so complex, and interoperate with so many other systems, that it is impossible to check every scenario. The only reason we are not constantly compromised is that the cost of an attack varies greatly with its sophistication. An intelligence agency might be willing to spend tens of thousands of pounds and several months to infiltrate a North Korean military base, but it isn't worth their effort to read your diary.

Therefore, for most of us, there is safety in not being the low-hanging fruit. The simple acts of downloading patches and updating our anti-virus systems lift us above an enormous number of vulnerable machines. After all, crime still obeys economic maxims: attackers will pick off the easiest targets. And if you are important enough to be under the microscope of a highly-funded adversary, then technology offers little safety. If you write your secrets on paper and lock them in a safe, at least they aren't remotely accessible. This is no call to abandon the riches of our information society, but merely a sober warning. We do not get these riches for free.
]]>
<![CDATA[Making Passwords Even Worse]]>Wed, 04 Mar 2015 12:58:45 GMThttp://www.meredydd-williams.com/blog/making-passwords-even-worse
Source: http://bit.ly/1FlClRz, Creative Commons (CC-BY 2.0).
Passwords are always a favourite target for criticism in the cybersecurity community. They are hard to remember, often dangerously weak, and generally agreed to work against what humans are good at. A secure password is long and drawn from a large character set, including non-alphanumeric symbols, and is therefore challenging to memorise. Such passwords are forgotten frequently, leading users to write them down, further undermining security. A password which is easy to memorise is generally quite short, making it simple to crack through brute force. Even passphrases built from personal information are at risk, with details easily accessible through social media and dictionary attacks rendering seemingly-secure passwords vulnerable. Security is again shown to be in tension with usability, and since “it will never happen to me”, password-cracking is typically one of the simplest ways to break into a system.
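The brute-force arithmetic behind that trade-off is worth spelling out. A rough back-of-the-envelope sketch (the guess rate is an assumption; real cracking speed depends on the hash function and the attacker's hardware) shows how quickly short passwords fall and how much length and a larger character set buy:

```python
# Rough keyspace arithmetic; the guess rate is an assumed figure for an
# offline attack against a fast, unsalted hash.
GUESSES_PER_SECOND = 10_000_000_000

def seconds_to_exhaust(alphabet_size, length):
    return alphabet_size ** length / GUESSES_PER_SECOND

for label, alphabet, length in [
    ("8 lowercase letters", 26, 8),
    ("8 mixed-case letters and digits", 62, 8),
    ("12 characters incl. symbols", 94, 12),
]:
    secs = seconds_to_exhaust(alphabet, length)
    print(f"{label}: ~{secs:,.0f} seconds to exhaust ({secs / 3.156e7:,.1f} years)")
```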

However, these issues are not the main subject of this post. I wish to discuss online implementations, which take an already-challenging issue and make it many times worse. Passwords, although clearly flawed, are here to stay for the foreseeable future, due to their current ubiquity and simplicity. Some applications aim to improve usability, offering password hints to jog the memory of a forgetful user. Obviously this can reduce security, but if the hint is well-chosen it should only resonate with the appropriate party, not an attacker. Unfortunately, many websites try too hard and as a result make strong passwords virtually unusable, forcing users to downgrade their security and making them targets for attack. Below I will summarise several of the major flaws.

At an ATM it is appropriate for my PIN to be hidden; after all, any shady onlooker could memorise four digits in a public place. These machines have large screens and a row of impatient customers standing behind you, so obscuring the input is clearly sensible in this case. However, we use passwords on a whole range of different devices, and in a whole range of different places. It is often hard enough to see your own phone screen in sunlight, let alone to sneak a glance at someone else's tiny display. Although people generally select poor passwords, they are rarely four characters in length and would be more challenging for an onlooker to memorise. Whilst we use our devices on the go, a lot of computer use still takes place in the comfort of our own homes, with only family members and pets as surveillance. The justification is security, but the need to retype a mistyped password in full can push users towards shorter choices. The asterisks on Windows machines might be frustrating, but the Linux command line often gives no indication even of the number of characters typed. Hiding passwords might be appropriate in the street or the workplace, but it should not be compulsory everywhere.
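For what it's worth, offering both behaviours is trivial. A minimal Python sketch (the --show-input flag is a hypothetical toggle of my own, not an established convention) of letting the user choose whether their typing is echoed:

```python
# Masked versus visible password entry at a prompt. getpass suppresses echo
# entirely, the Linux-style behaviour described above; --show-input is a
# hypothetical flag for contexts where shoulder-surfing isn't a concern.
import argparse
import getpass

parser = argparse.ArgumentParser()
parser.add_argument("--show-input", action="store_true",
                    help="echo the password while typing (e.g. at home)")
args = parser.parse_args()

if args.show_input:
    password = input("Password (visible): ")
else:
    # Nothing is echoed back to the terminal, not even asterisks.
    password = getpass.getpass("Password (hidden): ")

print(f"Received {len(password)} characters.")
```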

A second gripe is with backup “security questions”, the archetypal example being “what is your mother’s maiden name?”. These are intended to offer an extra layer of protection if you forget your regular password, but unfortunately they just present an easier route for intrusion. We know that people often use their favourite football team or hometown as part of their password, but posing the question just gives further assistance to an attacker. In an age of social networking promiscuity and widespread data collection, discovering a parent’s details might be no challenge whatsoever. A more secure approach would be to reply to the question with a nonsense answer, but this relies upon you remembering yet another contextless “password”. Although password resets via email have their flaws, as I will discuss later, they are certainly preferable to these artificial questions.

Bafflingly, I have encountered several websites where I simply cannot use non-alphanumeric characters in my password. You want to use “$3c\/re_p@$$word”? You can’t! Despite most sites wisely advising their users to pick passwords from a large character set, a minority seem unable to process these symbols. This might be a crude attempt to block SQL injection attacks in place of proper input handling, or a limitation of the underlying software, but either way it is a problem for security. We can complain about users choosing substandard passwords, but sometimes they aren’t given the choice.
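There is no technical need for such restrictions. A small sketch (the table layout and key-derivation parameters are illustrative) shows that with parameterised queries and password hashing, a symbol-laden password, even one containing SQL, is just another string:

```python
# With "?" placeholders the password is never interpreted as SQL, and after
# hashing it is just bytes, so no character needs to be banned.
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

def register(email, password):
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (email, salt, pw_hash))

register("user@example.com", "$3c\\/re_p@$$word'; DROP TABLE users;--")
print(conn.execute("SELECT COUNT(*) FROM users").fetchone())   # table intact: (1,)
```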

In discussion with colleagues several months ago, we came to the topic of login errors. One noted that despite most websites giving a generic “incorrect email/password” message, you can easily find out whether an email address is registered by trying to reset the password. Intended to increase security by confusing attackers, the vague error message generally places a larger burden on ordinary users. Most people possess multiple email accounts: perhaps one for work, one for social media, and several accumulated over the years. After all, we all have an address from our teenage MSN days like sweetiepie90@hotmail.com that seemed like a great idea at the time. Although we also reuse passwords (even though we shouldn’t), we might go through different iterations over the years. This creates a quadratic problem, where we need to match the right address with the right password, greatly increasing the effort for a forgetful user. The website owner’s rationale is clear, but usability shouldn’t always be sacrificed for security.
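Closing that loophole means the reset path must be just as vague as the login path. A small sketch (the function names and in-memory “user database” are illustrative; a real system would also salt and stretch its hashes) of returning one indistinguishable answer in every failure case:

```python
# Avoiding account enumeration: login and reset return the same message
# whether or not the email address is registered.
import hashlib
import hmac

USERS = {"alice@example.com": hashlib.sha256(b"correct horse battery staple").hexdigest()}
DUMMY_HASH = hashlib.sha256(b"dummy").hexdigest()

def login(email, password):
    stored = USERS.get(email, DUMMY_HASH)          # still hash-compare for unknown emails
    supplied = hashlib.sha256(password.encode()).hexdigest()
    ok = hmac.compare_digest(stored, supplied) and email in USERS
    # One message for every failure: neither the wording nor (roughly) the
    # timing reveals whether the address exists.
    return "Welcome back" if ok else "Incorrect email or password"

def request_reset(email):
    if email in USERS:
        pass   # send the reset link out-of-band
    return "If that address is registered, a reset link has been sent"

print(login("bob@example.com", "guess"))      # same reply as a wrong password
print(request_reset("bob@example.com"))       # same reply as a real account
```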

Naked Security cleverly highlighted the weakness of password strength testers earlier this week. Rather than employing the pen-testing technique of a dictionary attack, or seeding the cracking software with information known about you, they generally analyse the length of the phrase and character set. It should be obvious that “password123456” is less secure than “qmgsdrtztj”, despite the fact that the latter is shorter and contains no numbers. This leads users to select passwords that appear secure but might actually be vulnerable to attack. However, these testers are still better than nothing if they encourage people to use passwords longer than 8 characters.
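To see why such meters mislead, consider a toy scorer (the weights and word list are invented purely for illustration): judged on length and character variety alone, the dictionary-based password wins; penalise known words, as any real cracker's wordlist effectively does, and the ordering flips.

```python
# A naive strength meter scores only length and character variety, so a
# guessable dictionary-based password can outscore a random one.
import string

COMMON_WORDS = {"password", "letmein", "qwerty", "dragon"}

def naive_score(pw):
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    variety = sum(any(c in cls for c in pw) for cls in classes)
    return len(pw) * 4 + variety * 10

def dictionary_aware_score(pw):
    score = naive_score(pw)
    if any(word in pw.lower() for word in COMMON_WORDS):
        score //= 10   # crackers try dictionary words and common suffixes first
    return score

for pw in ["password123456", "qmgsdrtztj"]:
    print(pw, naive_score(pw), dictionary_aware_score(pw))
```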

My final problem doesn’t have a clear solution. It is inadvisable to leave all your security eggs in one basket, but we frequently do this with password reset emails. When you sign up for an account, you provide your email address in case you forget your password. The result is an accumulation of different online accounts only as strong as that one email account. If its password is compromised, attackers can browse through your emails to discover your subscriptions, navigate to those sites, and make password reset requests. Within a couple of hours, you can find yourself locked out of your own online life, including that master email once its password has been changed. Two-Factor Authentication (2FA) improves this situation, requiring an SMS code in addition to a password to access important emails. Login alerts can also help warn a user that their account has been breached, hopefully limiting the damage an attacker can inflict. Whilst it would be prudent to register using a host of different email accounts, people generally don’t work like that. The reset system is certainly preferable to cleartext password reminders and unwise security questions; just make sure your master password is very difficult to crack!
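Those one-time codes need not even arrive by SMS: authenticator apps generate them locally from a shared secret and the clock. A minimal sketch of the time-based variant (TOTP, as standardised in RFC 6238; the base32 secret below is a stock example value, not a real credential) shows why a stolen password alone is no longer enough.

```python
# Minimal time-based one-time password (TOTP) generator, the app-based
# cousin of the SMS codes mentioned above.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval                # both sides derive this from the clock
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

if __name__ == "__main__":
    # An attacker who knows only the password cannot compute this value.
    print(totp("JBSWY3DPEHPK3PXP"))
```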

]]>
<![CDATA[Why Google's Project Zero is unfair...and why we need more]]>Tue, 24 Feb 2015 16:44:55 GMThttp://www.meredydd-williams.com/blog/why-googles-project-zero-is-unfairand-why-we-need-more
Source: http://bit.ly/1CklSa3, Creative Commons (CC BY-NC-SA 2.0).
Project Zero brings a team of talented hackers together to improve general cybersecurity and highlight bugs in popular software. Unfortunately, it also allows a Google-affiliated group to make security demands of rivals, whilst Google's own products are largely just as insecure.

In September, Project Zero discovered a bug in Windows 8.1 allowing a malicious user to gain administrative access. To their credit, they granted Microsoft their standard 90-day window to release a patch, but when that deadline expired just two days before the next Patch Tuesday, the vulnerability was published anyway. Whilst it is important that researchers follow through on their deadlines, otherwise vendors have no incentive to fix the bugs, in this case the inflexibility appears unfair.

Then in October, the same team uncovered three Apple OS X zero-days, publishing these too after 90 days, along with proof-of-concept exploit code. Revealing the vulnerabilities in a rival’s systems might appear quite crafty, but distributing tools to break into those systems is downright devious. If Google’s own security were excellent then you could excuse them for targeting competitors, but the number of unfixed Android bugs undermines this stance. They have since relaxed their approach, granting vendors another 14 days if a patch is scheduled, but publishing offensive code should not be condoned.

However, the key issue here is not that Project Zero publicises vulnerabilities, nor that it shames other companies’ insecurity. Indeed, pressuring vendors to improve security is important, as security barely factors into their current strategies. The problem is that Google can hurl vulnerabilities at its rivals through an affiliated group and those competitors cannot return fire. What we need are more Project Zeros.

In a world where every tech giant possesses an offensive hacking team and bugs are responsibly disclosed, security improves for everyone. Rather than the public being held to ransom through corporate bickering, as appeared to occur in the Windows case, Microsoft could respond by highlighting the vulnerabilities in Chrome that need addressing. Security finally starts to become a competitive advantage as companies try to avoid public shaming over their insecure systems, and vendors actually invest more time testing products before they are released. There is a risk that, amid the blizzard of vulnerability reports, consumers might become blasé about bug announcements, but over time the average security of software should only increase.


]]>
<![CDATA[Truly "responsible" disclosure  ]]>Mon, 23 Feb 2015 11:51:24 GMThttp://www.meredydd-williams.com/blog/truly-responsible-disclosure
Source: http://bit.ly/1CsJwA7, modified. Creative Commons (CC-BY 2.0).
Vulnerability disclosure is a contentious subject in cybersecurity. Clearly we can only fix the bugs we know about, and bugs are always going to exist, but does advertising these vulnerabilities incentivise the vendors or simply assist the attackers? Within this article, I will compare the existing disclosure models and argue that whilst full disclosure offers many benefits, it is at risk of being seen as blackmail.

Bugs are always going to happen. This is not because software developers are particularly shoddy (though I'm sure a minority are), but due to the complexity of writing secure applications. As Turing Award laureate Dijkstra famously commented, “testing shows the presence, not the absence of bugs”, and vendors have a clear incentive to release products as soon as possible, sacrificing laborious testing processes. Now we live in societies that critically rely on these vulnerable systems, and therefore security professionals must develop patches to cover these holes.

Traditionally, “zero disclosure” was the norm. Following the unwise tenet of “security through obscurity”, vendors believed that their software would never be exploited if the weaknesses were kept hidden. Therefore, if a security researcher uncovered such a bug and contacted the developers, they would receive a stock answer. After all, if only one person knows about this vulnerability, then why should the company devote valuable resources to fix the issue?

This approach became more litigious as vendors tried to silence researchers through the courts. Publishing bugs gave this information to malicious attackers, thereby placing the developers in jeopardy. However, publication also assisted security professionals, who could develop their own patches and further their knowledge of the fast-paced field. Furthermore, the benefits of zero disclosure relied on the assumption that attackers did not already know about the vulnerability. All this stance created was misaligned incentives for software vendors, who often promised fixes but never had reason to follow up.

This led to “responsible disclosure”: a title that should not be taken wholly at face value. In this model, researchers contact the vendor first when a vulnerability is found. Then, after a reasonable amount of time, the researcher publishes the bug to ensure the developers are incentivised to release a patch. It appears that everyone wins: vendors get a helpful head-start before publication, security professionals receive acknowledgement for their work, and the public benefit from improved software. This in turn led to the idea of “bug bounties”, offered by tech giants such as Google and Facebook, where researchers are paid for finding critical vulnerabilities and informing the developers. By crowdsourcing bug detection, these vendors get relatively cheap improvements to security, whilst contributors receive acknowledgement and financial reward.

However, it is the scale of this financial reward that continues to present misaligned incentives. Cyberspace has a thriving black market for vulnerabilities and software exploits, since the compromise of many systems can lead to considerable financial gain. It is not uncommon to find zero-day bugs being sold for $200,000, given the assurance that the hack is virtually guaranteed to succeed. Beyond trusting one's conscience (generally a naive mistake), why would a talented researcher sell their vulnerability to Google for $20,000 when they can make ten times more in the darker corners of the Internet? Why wouldn't they sell it twice, with neither the attackers nor the vendors knowing of the other transaction? We are all dependent on software applications in our daily lives, and it is we who usually feel the consequences of a major system breach. When tech giants make net incomes in excess of $4bn a quarter, shouldn't a critical vulnerability be rewarded more generously?

The most extreme approach is “full disclosure”. In this case, the researcher publishes the vulnerability online without notifying the vendor first. This forces the developer's hand, incentivising them to begin working on a fix immediately, as malicious actors are now aware of the bug. In some cases, proof-of-concept exploits are also published, meaning that any script-kiddie can download the source and begin targeting the software. Clearly, vendors dislike this approach, and I do have sympathy with their situation. Many subscribe to the hacker ethic that “information should be free”; indeed, it virtually sounds like a fundamental right. If a researcher discloses a bug to a vendor and the developer does nothing, then both parties are aware that the software, and those using it, are vulnerable. Under freedom of speech, advertising this fact to the public is not a crime: it is just like shouting through a megaphone that your neighbour's door is unlocked. However, the distribution of exploit tools is different. That is like first proclaiming the unlocked door, and then handing out copied keys so thieves can break in. Whilst the moral high ground can be argued in First Amendment cases, it breaks down when you actively publish exploits.

The key issue here is the term “responsible”, as in “responsible disclosure”. It implies a responsibility from the researcher to the vendor in disclosing the vulnerability. Why would they have this responsibility? Whilst bugs are clearly difficult to locate, it is the developers who released the insecure software, and academics have no explicit responsibility to notify them before publishing their research. The true responsibility that security professionals have is to improve the state of the field, whether by furthering cybersecurity knowledge or fixing insecure implementations. If a researcher knows that a vendor won't fix their software, thereby leaving millions of citizens vulnerable to attack, don't they have a responsibility to publish?

I do have some sympathy with software houses. It is easy to look at multi-billion-dollar tech giants and accuse them of insecurity, when we all know how challenging cybersecurity actually is. Companies have deadlines, milestones and roadmaps: this is why Microsoft schedules Patch Tuesday on the second Tuesday of each month. Vulnerabilities are not trivial to fix, and knowing about them doesn't mean that changes are simple. Critical bugs might require architectural alterations, demanding hundreds of man-hours of effort and extensive testing before a patch can be released. Full disclosure drops a vulnerability into the developers' lap without warning and expects a fix immediately. Tech giants are likely to employ some of the most talented programmers in the world, and security researchers know how difficult it is to make an implementation bullet-proof. It is disingenuous to expect developers to patch the bug before nimble attackers capitalise on the vulnerability, particularly when researchers know the complexity of the code. Releasing a proof-of-concept attack tool borders upon blackmail: fix your software immediately or it will be attacked. If it takes you several days to find a bug and write exploit code, you cannot expect a patch overnight.

Truly “responsible disclosure” therefore needs to conform to the following points:
  • The vendor should be contacted before release, and given ample time to fix the bug.
  • The vulnerability should then be published online to ensure the vendor follows up on their promises.
  • Proof-of-concept exploits should not be published. Talented attackers can develop these tools themselves, and there is no justification for simply lowering the bar for script-kiddies.
  • “Bug bounty” rewards should be increased to incentivise researchers to act morally.

Cybersecurity is a rapidly-growing field and more important now than ever before. If we wish to live in a secure future, we need more security professionals finding more vulnerabilities to produce more-secure software. Academics deserve recognition for finding the bugs that developers left in: in what other industry could a company release unsafe products and then refuse to fix them when informed by experts? Incentives need to be correctly aligned to ensure that professionals search for bugs, find them, and then report them appropriately, to the benefit of the public rather than attackers.

 
]]>
<![CDATA[Is Open-Source more secure?]]>Thu, 19 Feb 2015 11:20:43 GMThttp://www.meredydd-williams.com/blog/is-open-source-more-secure
Source: http://bit.ly/1Jrzp4f, Creative Commons (CC-BY-NC-SA 2.0).
Open-source: a topic as divisive as religion or politics. Its proponents claim that it produces better quality and that software deserves to be free (as in “freedom”, rather than in delicious “free beer”). Its critics, however, respond that hobbyists cannot manage projects with the same dedication as professionals, and that you get what you pay for. This debate got me thinking: should open-source systems be more secure?

Firstly, the advantages. Eric S. Raymond's magnum opus, The Cathedral and the Bazaar, claims that “given enough eyeballs, all bugs are shallow”. Termed Linus's Law, it reasons that since open-source projects have dozens of people scrutinising and improving each other's work, software vulnerabilities will be found and removed. Furthermore, opening source code to the public avoids a tendency towards “security by obscurity”, where the secrecy of the code is the only factor preventing a successful attack. At face value, hiding the design of a program might appear rational, but complacency leads to products which resist attacks only through ignorance. Once the keys to the castle are reverse-engineered, all bets are off. We often see vendors release proprietary products claimed to be secure, but once the smoke and mirrors are removed, the implementation is usually flawed.

Arguably, those dedicated to open-source projects are devoted to programming, and are therefore better-quality developers. You would expect individuals who work all day and then return home to revel in late-night development to have a greater passion for their projects, especially without high-pressure deadlines or a manager looking over their shoulder. Programming purely for joy and reputation, open-source developers likely put greater effort into their work, as people do when they love what they do. This added care and attention surely benefits software security, as bugs can be hunted down free from the pressures of a work-like environment.

Within the proprietary world, we must wait until the second Tuesday of the month for updates. In the open-source community, patches can be released continuously for those who wish to receive them. Once a dangerous vulnerability is detected it can be removed, with users immediately experiencing the benefits rather than waiting in limbo for several weeks. With the removal of corporate bureaucracy and company deadlines, any contributor who spots an issue can submit a fix. This is in stark contrast to the commercial world, where security professionals must pressure vendors to release patches through the threat of vulnerability disclosure. Even if alterations do not proceed the way you wish, you still have an option: fork the project. Many secure product variations have been developed, such as TrustedBSD, which attracts skilled developers to its cause.

There is no financial or commercial motivation, and whilst this can be viewed negatively (as I will cover later), more altruistic motives might lead to improved security. Programmers aren't developing because their boss tells them to, and aren't pouring hours of their lives into problems they don't care about solving. Open-source developers code because they want to code, and are free to align with the projects they find most important and interesting. Therefore, those concerned with security are likely to gravitate towards OpenSSL, whilst compiler junkies (I bow to your superior skills) might target GCC. There is a worldwide pool of these volunteers: anyone can join, in contrast to companies where only a few selected employees can contribute. In an atmosphere with reduced internal competition, there is a greater base of knowledge as developers compete against a single opponent: closed-source products. The best security minds at Microsoft, Google and Facebook are surely brilliant, but imagine if they pooled their knowledge into a single project.

To avoid sounding like an open-source evangelist, I have also considered many factors which might lead to proprietary products being more secure. Despite the “all bugs are shallow” mantra, there are many counter-examples of open-source products going years before dangerous vulnerabilities are discovered. OpenSSL hosted both Heartbleed, the most famous bug of 2014, and a serious random-number-generator flaw introduced by a mistaken Debian contributor. In the latter case, a helpful developer noticed a couple of strange lines of code and decided to comment them out, not understanding the consequences. Unfortunately this destroyed much of the entropy for the cryptographic library, reducing the keyspace to just over 32,000 possibilities. With many developers contributing to a project, it can be difficult to manage responsibility; each contributor might assume that someone else will check their work.
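The arithmetic behind that figure is sobering. After the change, the process ID was reportedly the only value still seeding the generator, and Linux PIDs default to a maximum of 32,768, so for a given architecture and key type the entire keyspace could be enumerated on a single machine (a rough illustration; the exact counts varied):

```python
# Why the Debian OpenSSL flaw left "just over 32,000 possibilities": the PID
# was reportedly the only remaining seed, and Linux PIDs default to 2**15.
MAX_PID = 2 ** 15

print(f"Possible keys per key type: {MAX_PID}")
# Even generating and testing one candidate key per second, the whole space
# falls in well under a day.
print(f"Hours to try them all at 1 key/second: {MAX_PID / 3600:.1f}")
```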

Although the guardian of GPG has received a recent windfall, many open-source projects suffer a slow death once support dries up. Millions of individuals around the world rely on secure open-source implementations managed by a handful of volunteers, and once projects die there is no-one left to issue updates. Enterprises generally gravitate towards proprietary products as much for the after-sales support as for the software itself, mitigating their risk by having someone else to hold accountable if everything goes wrong. Although the initial investment in open-source is minimal, IT managers don't want to be left defenceless in five years when contributors have moved on to new, sexier projects.

Although altruistic motivations were mentioned as a benefit, cynicism can lead one to believe that you get what you pay for. Large teams of highly-paid developers have both access to the best tools and programming environments and a layer of architects to inspect the overall design of the implementation. Competition drives innovation and process improvement, with proprietary vendors looking to out-gun their rivals with additional functionality. Talent unfortunately gravitates to where the money is: why would a highly-skilled developer contribute to OpenSSL for nothing when they could be making big bucks on a closed-source alternative? Even agencies like the FBI are facing this issue, watching high-quality security professionals flock to the West Coast for impressive salaries. Perhaps a hobbyist attitude might lead to amateur security?

The aforementioned The Cathedral and the Bazaar, an essential read in my opinion, looks to dispel the myth of individual wizards working in “blissful isolation” towards a marvellous work. Whilst most inventions are indeed made by R&D departments rather than wizened professors in their attics, at least small bands of contributors can ensure their work is aligned. Open-source products are a digital patchwork quilt, differing stitches laid by dozens of volunteers around the globe. Maintaining a single project vision, let alone a consistent coding style, is a dream within such an environment. Whilst one contributor might possess extensive knowledge of cryptographic implementations, you cannot be sure there isn't another behind them, inadvertently removing their stitches. Variations in programming ability, preferred technologies, and security knowledge can lead to a mishmash where the final product is “designed by committee” and doesn't meet any of the contributors' expectations. Whereas entry requirements and interviews ensure that security professionals at the top of industry meet a minimum standard, this does not apply when anyone can join in without proof of ability.

Within the security field, we are coming to terms with technology not being the whole answer: humans are generally the weakest link. Therefore, products should be usable if they are to be operated in a secure manner; poor interfaces will likely lead to end-user confusion. This is where open-source projects face a big challenge: proprietary applications generally just look “better”. At the risk of a tautology, “better” generally means “standard” or “usual”, and clearly proprietary vendors like Microsoft are good at making their software look like Microsoft products. When the majority of home users run Windows or Mac OS, anything that deviates from this appearance causes confusion and, therefore, a level of insecurity. Furthermore, software installation poses a huge hurdle: should end-users be required to compile their own binaries and run command-line tools just to use cyberspace securely? As presented in Why Johnny Can't Encrypt, a seminal paper on security usability, average users cannot correctly utilise the software designed for them. In this study, Whitten and Tygar found that only one-third of their participants could use PGP to sign and encrypt an email within 90 minutes. Whilst proprietary systems possess the same usability issues as open-source alternatives, they benefit from their familiarity. A secure product used incorrectly does not offer any assurance of security.

Not wishing to sit on the fence, I have a few closing comments. Skilled developers can be found everywhere, whether within industry or coding nightly from their bedrooms. Similarly, poor programmers are widespread: popular wisdom states that the best developers in a company are ten times more productive than the worst. Furthermore, being a good software engineer should not be confused with being a skilled security practitioner: security is the mindset of looking at how things break, not how they can be made to work.

Whether developing open-source or closed-source projects, one thing is essential: experience. A guru who programs all day at work and then returns home to hack on a Raspberry Pi can be expected to outperform someone who develops purely for their job. In the same manner, an individual who successfully breaks others' cryptosystems and keeps abreast of security changes is likely to perform better than one who hunts vulnerabilities from 9 to 5. Software is only as good, or as secure, as the developers who make it. Regardless of whether an open-source or proprietary application is being developed, it is the combined knowledge of the contributors which defines whether it will be secure or riddled with bugs.
]]>
<![CDATA[Why HTML5 is good cybersecurity]]>Sat, 07 Feb 2015 18:19:19 GMThttp://www.meredydd-williams.com/blog/why-html5-is-good-cybersecurity
Source: http://bit.ly/1yY327k, modified. Creative Commons (CC-BY 2.0).
Cybersecurity is often an annoyance. Cybersecurity is what delays us from visiting the websites we like, forces us to enter passwords repeatedly, and warns us off curiously opening that email. It is a secondary goal: rarely do we perform actions solely for the purpose of being secure. Far more often, security is "bolted on" to other processes as a safety barrier, reducing usability rather than improving productivity.

This doesn't have to be so. Security only damages usability when users must vault the hurdles themselves; invisible decisions made for their benefit only improve the overall experience. Remembering the username and password for your favourite website is frustrating, and although the security rationale is clear, we still resent the five-second delay whilst we enter our credentials. In contrast, TLS/SSL simply works in the background. Because the web browser ships with a host of trusted certificates, simply visiting an HTTPS website sets off a flurry of invisible protocol traffic which results in a secure session. The average user requires no knowledge of how the system works, or even that anything is happening; most are simply reassured by the browser's "lock icon". By reducing the cognitive workload on the user, this technology allows customers to buy products and manage their finances online; a situation that would be impossible without it.
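That invisible flurry is easy to make visible. A few lines of Python (example.com stands in for any HTTPS site) perform the same negotiation a browser does before a page even begins to load, validating the certificate against the trusted roots along the way:

```python
# A peek at the handshake the user never sees: negotiate TLS, validate the
# certificate, and report what was agreed. example.com is a demonstration host.
import socket
import ssl

context = ssl.create_default_context()   # ships with a set of trusted CA roots
host = "example.com"

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated:", tls.version(), tls.cipher()[0])
        print("Certificate subject:", dict(item[0] for item in cert["subject"]))
        print("Valid until:", cert["notAfter"])
```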

This shows how cybersecurity can be an enabler rather than simply a barrier. I was reminded of this myself several days ago when I registered for a free trial of an online streaming service. Unfortunately, it appeared that Silverlight was required to play movies and so, against my better judgment, I downloaded the software to watch a few films. Being slightly knowledgeable about cybersecurity matters, I am well aware of the dangers that come with such plug-ins, whether they be Java, Flash, or a host of similar technologies. Less than two days later, whilst I was casually browsing the news, my browser began bringing up random websites, thankfully most of them blocked by anti-virus software which detected malware on the pages. One lapse is all it took: in looking for convenience, I had sacrificed security.

This is why developments such as HTML5 are so important. People resent downloading software, keeping it updated and managing it along with the other five plug-ins that other sites require. Although heterogeneity can be beneficial in some situations, the fragmentation of the media player market led to a proliferation of tools required for basic web use, many with less-than-perfect security records. HTML5 looks to change this. People aren't required to keep any software updated - it just works. Security is improved by removing the dependence on vulnerable applications, and usability is enhanced by reducing user workload. 

Whilst risks still exist for HTML5 - it is no silver bullet, technology never is - solutions which marry usability and security should be encouraged if we wish to be both productive and safe in cyberspace. 
]]>
<![CDATA[Cybersecurity schizophrenia]]>Sat, 31 Jan 2015 18:14:24 GMThttp://www.meredydd-williams.com/blog/cybersecurity-schizophrenia
Source: http://bit.ly/1JrBj56 and at gotcredit.com, modified. Creative Commons (CC-BY 2.0).
Those within the corridors of power argue that cybersecurity is essential, that the health of our economy depends on UK businesses being protected. Evidence of the government's new drive can clearly be seen on the London Underground and online, through the Cyber Streetwise campaign. This is indeed true: companies of all sizes lose millions of pounds each year to security incidents, whether data breaches, malware infestations, or phishing scams. It is refreshing for researchers to hear that cybersecurity has been given such importance, especially as the risks of attack are so frequently underestimated by those companies not peddling anti-virus products.

However, in another case of the heads of the Whitehall Hydra not acting in unison, David Cameron has recently proposed that data encryption be broken in order to assist law enforcement and intelligence efforts. Whilst it is true that the government has the right to open your physical mail in transit (in special circumstances), implying that forces are prevented from pursuing cases because of encryption is disingenuous. On the other side of the puddle, the National Security Agency (NSA) also appears misaligned with the US National Intelligence Council: the former wishes access to more data, while the latter emphasised the importance of encryption in a 2009 document released by Snowden.

Let me repeat an oft-stated fact: a backdoor is a backdoor, and a vulnerability is a vulnerability. We have enough implementation errors and design flaws in software when we try to make it bullet-proof; purposely adding security holes to widely-deployed applications is just a bad idea. Once malicious parties understand that software vendors must include backdoors to trade effectively, the race is on to find the vulnerability and steal the data. We might sleepwalk into a situation where law enforcement and criminal groups both have access to our personal information, leaving the balance unchanged but at the cost of our civil liberties.

In essence, you cannot have your cookies and eat them too. We have been trying to make software more secure, more robust, and more reliable for decades, bemoaning that no "silver bullet" exists to solve our woes. What we certainly do not need is to work in the opposite direction, all in the faint hope that the "good guys" will be the only ones intelligent enough to exploit the vulnerabilities. If the intelligence agencies truly have that advantage today, then they shouldn't need everyone else to weaken their security.
]]>