Hardly a week, if not a day, seems to go by without a serious vulnerability emerging in the systems used by government, businesses and individuals. Take two recent examples:
- Yesterday, computer security researchers revealed how to exploit flaws in the firmware that governs how USB devices identify themselves to computers (the ‘BadUSB’ flaw).
- On 24 September, vulnerabilities were announced in the computer program ‘Bash’, which is used to process commands and is installed as standard on most machines running non-Windows operating systems, including Unix, Linux and Mac OS. The ‘Shellshock’ or ‘Bashdoor’ vulnerability can allow an attacker to execute arbitrary commands on vulnerable systems and, in doing so, gain unauthorised access to computers and web servers. (Incidentally, Surevine, a member of ADS, has provided an example of how quickly this can be done).
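To illustrate how little is needed, the widely circulated one-line test for the original CVE-2014-6271 bug exploits the fact that Bash evaluates function definitions exported through environment variables, and a vulnerable version also executes any commands appended after the definition:

```shell
# Shellshock (CVE-2014-6271) test string. The environment variable holds a
# bash function definition followed by an extra command. A vulnerable bash
# runs the trailing `echo vulnerable` while importing the variable;
# a patched bash ignores it and only prints the intended message.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

Any context that passes attacker-controlled input into environment variables before invoking Bash exposes this path, CGI scripts on web servers being the classic case, which is why web servers were among the first targets.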
The BadUSB flaw is interesting because the researchers publicly released code that can be used to exploit the vulnerabilities. They did so to encourage firms to improve their defences:
Full disclosure can motivate companies to act and make products more secure.
In the case of BadUSB, however, the problem is structural. The standard itself is what enables the attack and no single vendor is in a position to change that.
It is unclear who would feel pressured to improve their products by the recent release.
In a slightly different way, the FBI has released a formerly in-house malware-analysing portal to help speed up incident responses and help industry and law enforcement with investigations. The aim is for the Malware Investigator Portal to let organisations report concerns, receive responses and advice, correlate an incident with other intelligence and, through this, build responses to new malware without such heavy reverse-engineering loads. Perhaps the UK’s National CERT (including its Cyber Information Sharing Partnership, CISP) will evolve in a similar direction?
Shellshock is also interesting, for two reasons. First, analysis of the source code history identified that the vulnerabilities had existed since 1992. Secondly, in the days after Shellshock, scrutiny of the underlying design flaws in turn revealed a variety of derivative vulnerabilities, each of which required a separate patch to be developed.
The approach to addressing vulnerabilities is a combination of reactive patching and the use of technical and procedural “controls”. For example, the UK Government has now decided to mandate the ‘Cyber Essentials’ scheme in procurement.
Cyber Essentials specifies the basic cyber security measures all organisations should implement to mitigate the risk from basic internet-based threats. From 1 October, suppliers bidding for government contracts which involve handling personal information or providing IT systems and services at OFFICIAL must demonstrate that they meet the requirements set out in Cyber Essentials. (Note that suppliers on the G-Cloud, Digital Services, Public Sector Network, ID Assurance and Assisted Digital frameworks are exempt, as they already meet security requirements. Suppliers to the MoD are also exempt, as the MoD will be implementing its own cyber standards in 2015 through the Defence Cyber Protection Partnership, which ADS is involved in.)
ADS provided input to Cyber Essentials, raising a number of concerns about the Scheme’s adequacy and implementation. It will be interesting to see how the Scheme helps address vulnerabilities such as Shellshock and BadUSB.
An alternative, complementary and perhaps more fundamental approach, as I recommended in December 2012, is to design security and resilience into software (and hardware) from the outset. That is why initiatives such as the Trustworthy Software Initiative, which is part funded by the government, are so interesting and valuable. The TSI rightly notes that:
Software provides a vital underpinning to the information economy, yet software errors and failures are endemic, and they show no sign of decreasing. Recent reports indicate that in excess of 90% of the incidents reported to GovCERTUK can be attributed to software bugs.
As the number of software-powered devices continues to increase, the number of software errors will also increase, even if all other factors remain the same.
It aims to ‘enhance the overall software and systems culture, with the objective that software should be designed, implemented and maintained in a trustworthy manner’ and has already launched the PAS754 “software trustworthiness” standard to help organisations avoid software failures. The standard is the UK’s first successful attempt at codifying what constitutes good software engineering, and sets out the processes and procedures which organisations can apply to “help them procure, supply or employ trustworthy software”.
However, market dynamics could hinder the development of secure software (and hardware) in three ways:
- The pressure of time-to-market: integrated security solutions tend to have longer production cycles.
- Cost: ‘designing out’ vulnerabilities rather than patching them at a later date is not only more time-consuming but more expensive. A market has developed in the provision of overlaid security solutions that provide value for money, even if they do not address all the risks organisations face.
- Accountability: from the point of view of those assessing risk at board level, patching and security controls come with manuals of best practice, training and other practical activities designed to demonstrate that “something is being done” (even if only to tick boxes!). The TSI is taking steps to address this particular aspect, and PAS754 is a good start, but there is some way to go.
Will consumer pressure eventually help push vendors to develop secure software and hardware?