Photo by Markus Spiske on Unsplash

The Reason Cyber Security is Hard

A version of this article originally appeared in RUSI Defence Systems, Vol. 21, No. 1.

Complexity is Cheaper than Simplicity

In most industries simplicity is cheaper than complexity; streamlining processes and shedding excess reduces the cost of doing business. Within the technology sector, the trend runs the other way: it is often cheaper to add complexity than to engineer simplicity. This increased complexity has a devastating effect on security.

The UK’s digital economy contributes one-eighth of the nation’s GDP, a higher proportion than in any other G20 nation. According to the Government’s Cyber Security Breaches Survey 2018, the digital economy is under attack: 43% of UK businesses reported at least one cyber breach in the previous year, and 72% of large companies reported a cyber-attack in the previous year, with 9% reporting multiple attacks per day. Protecting the UK’s digital economy and electronic infrastructure is a national security priority. The Government faces significant challenges in fighting an adapting threat and fixing weaknesses in its own National Cyber Security Strategy.

Cyber Security Breaches Survey 2018, UK Government

Improving cyber security means combatting strong economic and psychological factors within software development companies, freelance programmers, digital businesses and ordinary users. Industry and Government currently tackle the problem of security from the user’s point of view, educating users and employees through schemes such as Cyber Essentials, the UK’s Government-backed programme which awards one of two Cyber Essentials badges to a company to showcase its cyber safety online. The focus of these efforts should broaden to include all of the areas involved in software development, not just the use of software. Trades professionals, such as gas technicians, face a legal requirement to hold specific certifications proving their competence, in the form of Gas Safe registration. Should the future of a more secure digital nation include a similar scheme for software developers, and another for the software companies themselves?

Programmers often lament how different the programs they have written would be if they could delete them and start again. In most cases, programmers do not get the opportunity to apply what they have learned while tackling the often-unique challenges of a project; instead, they find themselves making ad hoc fixes and jury-rigging solutions as many people work on the same codebase, whose direction changes as deadlines near. Each workaround, fix and new feature increases the surface area of the program to test, makes it more difficult to understand and ultimately creates bugs which may be exploitable. Starting again from scratch is not feasible for commercial software development, so it does not happen, even though it would produce higher-quality and more secure software. This iterative software development lifecycle, where new versions are simply built upon previous versions, can be seen in Microsoft’s line of products: Windows 10, 8, 7 and XP, the major releases of the past 20 years, were all developed iteratively from one another and from their parent, Windows NT, an operating system from 1993. As programmers come and go, the underlying code of a project becomes more disorganised and bloated, increasing the likelihood of introducing errors, or bugs, which could affect security.

A clear example of an increase in complexity over time can be seen in the architecture of Central Processing Units (CPUs). Thirty years ago, CPUs were relatively simple, combining a number of individual specialised units, each providing a different function: instructions to move data, perform addition and subtraction, and other relatively basic tasks. The image below shows the internals of Intel’s NetBurst Willamette 180nm architecture from 2000, a single core with 42 million transistors on a 217mm² die; hardly basic-looking, but simple by modern standards.

Intel’s Netburst Willamette 180nm architecture from 2000 (wikichip.org)

Today, CPUs are incredibly complex, incorporating multiple computation units, graphics processing units and even units dedicated to niche workloads such as machine learning. These modern chips have been built by iterative development from those original designs. The image below shows 17 years of development from the previous image: Intel’s 2017 Coffee Lake hexa-core 14nm processor on a 150mm² die. In the first image you can discern connections between units and, if you are familiar with processor design, work out which areas do what. In the image below that is much more difficult, owing to the increased complexity from two decades of transistor shrinkage, additional computational units and more intricate designs.

Intel’s Coffee Lake 14nm architecture from 2017 (wikichip.org)

Intel suffered at the hands of increased complexity in January 2018 when two families of attacks, named ‘Spectre’ and ‘Meltdown’, were revealed. CPUs have become so complex, with so many built-in instructions, that prediction algorithms are used to guess which instructions will be needed next and execute them speculatively, keeping the results close at hand rather than computing them only when the user actually needs them. The implementation of these prediction mechanisms was flawed and allowed attackers to view the contents of memory they should not have access to. Nearly all Intel processors made since 1995 and before 2019 are susceptible to Meltdown. Spectre affects those same Intel processors as well as chips from other manufacturers, including AMD and ARM.
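To make the mechanism concrete, the sketch below follows the canonical Spectre variant 1 pattern published in the original Spectre research; the array names and sizes here are illustrative, not taken from any real codebase.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative Spectre variant 1 "gadget", adapted from the well-known
 * published pattern. All names and sizes are for illustration only. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];  /* probe array: one cache line per byte value */

void victim_function(size_t x) {
    if (x < array1_size) {              /* bounds check */
        /* If the branch predictor guesses "in bounds", the CPU may
         * speculatively execute this load even when x is out of bounds,
         * pulling array2[array1[x] * 4096] into the cache. The secret
         * byte at array1[x] can then be recovered by timing which cache
         * line of array2 loads fastest (a cache side channel). */
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}

int main(void) {
    victim_function(0);  /* a benign, in-bounds call for demonstration */
    return 0;
}
```

The attack works because the speculative load leaves a measurable footprint in the cache even after the CPU discards the mis-speculated result.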

Combined, these vulnerabilities equate to billions of vulnerable devices with embedded Intel, AMD or ARM CPUs, from personal computers and mobile phones to servers and supercomputers. Hardware vulnerabilities are much harder to fix than software vulnerabilities: consider industrial sensors or some smart devices in the home, which are not intended to be patched remotely and may not even be physically capable of being patched.

Security is not a top priority for many technology companies; features are. Competition is fierce within the technology industry, and the time it takes to get products and new features to market has a significant effect on numerous key performance indicators (KPIs), including market position, revenue and profit. Fast-paced, iterative release schedules have become the norm, displacing the traditional slower-paced, sequential methods which build products step-by-step and release only when all the work is complete. So-called ‘Agile’ methodologies employ a series of ‘sprints’ which allow features to be released as they are finished, rather than waiting until everything is finished. With important KPIs, including profit, on the line and competitors moving to Agile methods, there is a strong incentive to adopt these fast-paced frameworks. By their nature they demand that teams advance to the next objective, often diminishing the amount of time set aside for security or pushing security considerations aside entirely.

Software and hardware testing is time-consuming and is often conducted only at the end of a project. This introduces incentives not to test security, or to test only the minimal requirements: easy-to-find security issues and the areas the user will interact with most. Concepts such as Test-Driven Development (TDD) attempt to create a workflow which incorporates testing into the development of code, essentially testing as you go, as sketched below. However, the real-world adoption and effectiveness of these methods are unclear; TDD and similar methods require programmers to learn and adopt a new workflow, a difficult task that is easily abandoned. These issues push security to become an afterthought, or something tacked on.
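As a minimal sketch of the test-first idea, assuming a plain C project with no test framework (the function and test names are invented for this example), the developer writes a failing test before the code that satisfies it:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical function under test. In a real TDD cycle the test below is
 * written first, fails, and this minimal implementation is then added to
 * make it pass. */
static int is_valid_password(const char *pw) {
    return pw != NULL && strlen(pw) >= 12;  /* minimal rule: length >= 12 */
}

/* The test encodes the requirement before the code exists. */
static void test_password_length_policy(void) {
    assert(!is_valid_password("abc123"));                 /* too short */
    assert(is_valid_password("correct-horse-battery"));  /* long enough */
}

int main(void) {
    test_password_length_policy();
    puts("all tests passed");
    return 0;
}
```

The cycle then repeats: write the next failing test, make it pass with the smallest change, and refactor, so checks accumulate alongside features instead of being deferred to a final test phase.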

Some companies appear oblivious or even hostile to the additional cost of security. Security expert Brian Krebs has written about the complacency of Chinese technology companies that consistently ignore warnings after their products are used by criminals and nation-states. Threat actors take control of unsecured devices to pollute the internet and harm users, propagating viruses or conscripting the devices as bots within botnets. Other companies, such as Huawei, have realised that the damage insecurity does to their reputation, and its effect on current and future contracts with Western companies, outweighs the small additional cost of security.

Flaws in software come down to the programmer’s ability to understand how the code works and to test it effectively. Security education amongst programmers is a little-studied area; however, recent research has highlighted a lack of security understanding among freelance programmers. Researchers from the University of Bonn in Germany studied 43 programmers from Freelancer.com, with an average of six years of experience, who were each asked to write a program to store passwords safely. Of the 43 programmers, 18 initially submitted code which stored passwords in plain text. Eight presented a program which stored passwords in an encoded format, simply a different way for computers to represent data and equivalent to plain text, since it can easily be reverted to a human-readable form. The remaining 17 used a hash function, a one-way transformation whose security varies with the algorithm chosen. The issue comes up repeatedly when data breaches are revealed: user data is stored in a readable form where it could easily have been protected so that it is unreadable and difficult to break.
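A minimal sketch of the safer approach, assuming libsodium is available (link with -lsodium; the password literal and program structure are illustrative only):

```c
#include <sodium.h>  /* libsodium's Argon2-based password hashing API */
#include <stdio.h>
#include <string.h>

int main(void) {
    if (sodium_init() < 0) return 1;  /* library failed to initialise */

    const char *password = "correct-horse-battery";
    char hashed[crypto_pwhash_STRBYTES];

    /* Derive a salted, deliberately slow hash; the salt and cost
     * parameters are encoded into the output string itself. Storing
     * `hashed` is safe; storing `password` (plain text) or a base64
     * encoding of it is not, as both are trivially reversed. */
    if (crypto_pwhash_str(hashed, password, strlen(password),
                          crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0) {
        return 1;  /* out of memory */
    }

    /* Later, at login: verify a candidate password against the stored hash. */
    if (crypto_pwhash_str_verify(hashed, password, strlen(password)) == 0) {
        puts("password verified");
    }
    return 0;
}
```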

As long as software is made insecurely, any such action will treat the symptoms rather than the cause.

Kris Bolton is an MSc student in Cyber Security at the University of Wolverhampton’s Cyber Research Institute, with a background (BSc) in Computer Science. His research interests include advanced persistent threats, cyber defence, offensive cyber and artificial intelligence. Connect with Kris on LinkedIn.
