Five years ago, cybersecurity researchers accomplished a rare feat. A team at the Pentagon's far-out research arm, the Defense Advanced Research Projects Agency (DARPA), loaded special software into a helicopter's flight control computer. Then they invited expert hackers to break into the software. Despite repeated tries, the flight control system withstood every attempt to gain unauthorized control.
This outcome was unusual. Experienced hackers who are given direct, privileged access to software almost always find a way in. The reason is simple. Decades after the birth of computer programming, modern software products are riddled with flaws, many of which create security vulnerabilities that attackers can easily exploit to slip through digital defenses. This is why reducing the error rate in software code is essential to turn the tide against relentless computer criminals and foreign adversaries that steal wealth and menace critical infrastructure with relative impunity.
How was DARPA's custom flight control software able to shrug off its assailants? The researchers turned to formal methods, a frequently overlooked group of technologies that programmers can use to create ultra-secure, ultra-reliable software. DARPA's experiment is one of several examples that underscore the potential for formal methods to remake software security. They herald a not-too-distant future when radically safer, more secure software can allow us to embrace other emerging technologies without catastrophic consequences.
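The promise of formal methods is a mathematical guarantee that software behaves correctly on every possible input, not just the inputs a tester happened to try. A deliberately toy sketch of that "all inputs" mindset, in plain Python over a tiny 16-bit domain (this is an illustration of the idea, not DARPA's actual tooling; industrial formal methods tools such as model checkers, SMT solvers, and proof assistants establish the same kind of guarantee symbolically over astronomically larger state spaces):

```python
# Toy illustration of the formal-methods mindset: prove a property
# for EVERY possible input by exhausting a small finite domain,
# rather than spot-checking a few test cases.

BITS = 16                      # width of our toy machine integers
MASK = (1 << BITS) - 1         # 0xFFFF
SIGN = 1 << (BITS - 1)         # the sign bit

def to_signed(x: int) -> int:
    """Interpret a BITS-bit pattern as a two's-complement integer."""
    return x - (1 << BITS) if x & SIGN else x

def branchless_abs(x: int) -> int:
    """Classic branch-free absolute value: (x + m) ^ m, where m is
    all-ones when x is negative and zero otherwise."""
    m = MASK if x & SIGN else 0
    return ((x + m) & MASK) ^ m

def verified_for_all_inputs() -> bool:
    """Exhaustively check branchless_abs against the reference abs()
    for all 65,536 possible 16-bit inputs."""
    return all(
        branchless_abs(x) == (abs(to_signed(x)) & MASK)
        for x in range(1 << BITS)
    )

print(verified_for_all_inputs())  # True: the property holds on every input
```

Ordinary testing samples a few points in the input space and hopes for the best; the check above covers the whole space, so its "True" is a guarantee rather than an encouraging sign. Real formal verification achieves this exhaustiveness without enumeration, which is what makes it feasible for systems as complex as flight control software.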
Before it is ready for primetime, any piece of software should be able to satisfy at least two criteria:
From Brookings Tech Stream