The cyber insecurities of the Internet are widely touted as precursors of a "Cyber Pearl Harbor" that could, by some reckonings, mark the end of civilization. What if this is not a grave risk at all? What if the systems aspects we point at most frequently as the sources of vulnerabilities are actually assets in tamping down the risk? What if we spent more time developing our ability to be resilient rather than trying to provide absolute security?
Andrew Odlyzko—mathematician, cryptographer, author, and information technology analyst—has been asking these questions and has provided a thorough analysis. His contrarian ideas have provoked controversy.
I talked to him about this.
Q: Military tensions among major powers have been escalating in the past few years. Government leaders are openly worried that a military-grade preemptive cyber attack could devastate a nation. What do you think of this?
A: As we increase our reliance on digital technologies, attackers will find networks of computers increasingly attractive targets. So yes, there will surely be a "Cyber Pearl Harbor." We know not the day nor the hour.
What we have to remember is that a devastating cyber event can result not only from hostile attacks but also from natural events such as coronal mass ejections that fry electronics on Earth. We are also subject to devastation from other events such as conventional wars, terror attacks, earthquakes, tsunamis, or superstorms. Some disasters are caused by innocent human mistakes, too: simple coding or operational errors, or unanticipated interactions of complex systems. Any of these events can lay waste to a region or country. It is impossible to prevent all these disasters. So the question must be: How do we prepare with maximum resiliency to recover rapidly? And how much of that effort should be devoted to security in the cyber realm?
Q: Given the range of possible devastating cyber disasters, what security measures would you recommend?
A: It depends very much on the nature of the organization. A business should focus on protecting the business; a government agency should focus on protecting the country.
Massive cyber attacks are certainly a threat. But it seems fairly well established that they can only be launched by sophisticated, well-resourced adversaries who have ample time to prepare. That basically means nation-states and possibly terrorist and criminal organizations. Such adversaries must be dealt with by national military and intelligence agencies and by international collaborative efforts. Just as we never expected most citizens to build personal fallout shelters, we should not expect them to acquire and manage information systems that would resist attacks by a determined large agency.
Governments also rely on strategic doctrines, such as the balance between offense and defense, and on their ability to deter aggressive acts. Those factors obviously influence the probability of massive attacks.
More than anything, government agencies must concentrate on general resilience. Note that resilience is desirable in general, not just against hostile attacks. Protection against bioterrorism does not differ much from protection against natural pandemics. Similarly, restoring computer networks is much the same whether they were brought down by a geomagnetic storm, an electromagnetic pulse from a nuclear explosion in space, or a cyber attack.
Q: But what about non-government organizations? What should they do?
A: A business or educational institution should worry primarily about the mundane attacks that affect its operations. This effort has general value because protection against the mundane also reduces exposure to massive attacks on the Internet.
Standard measures such as antivirus software, firewalls, two-factor authentication, security training, and basic security practices are what regular enterprises should concentrate on.
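To make "two-factor authentication" concrete: many authenticator apps implement the time-based one-time-password scheme of RFC 6238. Here is a minimal illustrative sketch in Python, using only the standard library; it is not the implementation of any particular product:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # Decode the shared secret that was provisioned, e.g., via a QR code.
        key = base64.b32decode(secret_b32, casefold=True)
        # The moving factor is the index of the current 30-second time step.
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
        # the low nibble of the last byte, and mask off the sign bit.
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

A server verifies the code by computing the same value from its own copy of the secret, so possession of the device holding the secret becomes the second factor.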
All organizations should make it a priority to protect their data through regular, hard-to-corrupt backups. The ability to restore data is an essential part of resilience and recovery.
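One ingredient of a hard-to-corrupt backup is an integrity manifest: record a cryptographic digest of every file at backup time, store the manifest separately (ideally on write-once media), and recompute the digests before restoring. A minimal sketch in Python; the directory and manifest paths are illustrative:

    import hashlib
    import json
    import pathlib

    def sha256_of(path: pathlib.Path) -> str:
        # Digest the file in 1 MB chunks so large backups fit in memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def write_manifest(backup_dir: str, manifest_path: str) -> None:
        # Record a digest for every file so later tampering or silent
        # corruption becomes detectable.
        digests = {str(p): sha256_of(p)
                   for p in sorted(pathlib.Path(backup_dir).rglob("*"))
                   if p.is_file()}
        pathlib.Path(manifest_path).write_text(json.dumps(digests, indent=2))

    def verify_manifest(manifest_path: str) -> list[str]:
        # Return the files whose current digest no longer matches.
        digests = json.loads(pathlib.Path(manifest_path).read_text())
        return [f for f, d in digests.items()
                if sha256_of(pathlib.Path(f)) != d]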
Enterprises can further increase their resiliency by participating in backup communication networks, including even amateur (ham) radio. I could go on listing more steps of a similar nature, all helpful in securing cyberspace.
Q: All the measures you have cited are standard ones. They have been advocated by security experts for decades. Why has your ACM Ubiquity essay (see https://bit.ly/2G5b76S) caused controversy?
A: The utter familiarity of this advice is a key part of my argument. We have known for decades of these methods for improving cybersecurity. They are taught widely in courses and discussed in books. They are not secret. Yet most of the damaging cyber attacks we have suffered could have been prevented by implementing those measures.
So the big questions are: Why were those steps not taken, and what has been the result? My (controversial) answer is that cybersecurity has simply not been very important. The "Cyber Pearl Harbor" scenarios are seen as far removed from the day-to-day operations of civilian enterprises. What those enterprises have to deal with is regular crime and regular mistakes, similar to what they have always faced in the physical realm. There have been a few headline-grabbing cyber attacks involving theft of personal identification information from firms with large databases. Such breaches are a small percentage of all cyber attacks, but they illustrate my point: the companies involved did not consider the risk of massive theft important enough to invest in strong security measures. They now see that they were wrong.
We have an online ecosystem in which crime is being kept within bounds by the countermeasures of enterprises and law enforcement agencies. In almost all cases, criminals aim to steal data or money without divulging their identities or destroying systems.
As the economy and society at large increase their dependence on information technologies, crime is migrating into cyberspace. As a result, more resources are being put into cybersecurity. This is happening at a measured pace without drastic reengineering of our systems.
Q: Security experts have said that much software code is a mess of "spaghetti" that cannot be verified as correct. There is a dark industry that painstakingly searches through the tangled code and sells its findings as "zero-day exploits" on the black market. Purchasers of these exploits can launch surprise attacks and inflict serious damage before the victims are able to defend themselves with new patches. On what basis have you concluded that "spaghetti code" is not a great risk?
A: I have not concluded that at all. "Spaghetti code" is a risk, and is indeed continually being exploited by attackers. What I point out is that "spaghetti code" also has positive features. Attackers are seldom able to make clean penetrations that leave no traces, and when they insert their own malware, they often mess up.
Stuxnet—a virus that damaged Iranian nuclear centrifuges—is a famous example. Although attribution has been difficult, security experts consider it highly likely that Stuxnet was a collaboration between the U.S. and Israel, based on the style of coding, similarity to other programs, and the variable names used. And, of course, the creators of Stuxnet did slip up fairly substantially: it escaped from the Iranian facilities into the wild.
Q: When I grew up, operating systems were much smaller and more cleanly organized. Some early operating systems were under 50,000 lines of code. Today’s major operating systems are closing in on 100 million lines, and one of the open source Linux distributions is near 500 million. None of those systems has been formally verified. They are cited as premier examples of spaghetti code. And yet today’s major operating systems are amazingly reliable compared to the old ones. How do you explain the rise of reliability along with the rise of complexity?
A: Much of the progress is due to the superabundance of storage space and cycles. This enables us to tolerate the bloat induced by patches and repairs, most of which are to the mass of software outside the operating system kernel. The accumulation of patches does generally make systems more reliable. Further, designers now devote a lot of resources to programs that monitor other programs and they test far more exhaustively than before. Even though there are strange states you can push systems into—which is what many hostile exploits do—those states tend not to occur in the situations that matter to regular users most of the time.
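"Programs that monitor other programs" can be as elaborate as a modern supervisor (systemd, Kubernetes controllers) or as simple as a watchdog that relaunches a worker when it dies. A toy sketch in Python, where worker.py is a hypothetical program:

    import subprocess
    import time

    def supervise(cmd: list[str], delay: float = 1.0) -> None:
        # Relaunch the worker whenever it exits, pausing briefly
        # to avoid a tight crash loop.
        while True:
            proc = subprocess.Popen(cmd)
            returncode = proc.wait()
            print(f"{cmd[0]} exited with code {returncode}; restarting")
            time.sleep(delay)

    # Example usage (worker.py is hypothetical):
    # supervise(["python", "worker.py"])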
Many of the prescriptions of software engineering are violated routinely. For example, we know how to eliminate the continuing vulnerability to buffer overruns—but we have not done so. Still, progress has been substantial. Disciplined coding practices and isolation techniques such as sandboxing have been major factors improving reliability.
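The buffer-overrun remark is worth unpacking. In a memory-safe language, an out-of-bounds write fails loudly instead of silently overwriting adjacent memory, which is what makes the classic C overrun exploitable. A trivial demonstration in Python:

    # A 16-byte buffer; writing past its end is rejected by a runtime
    # bounds check rather than corrupting whatever lies next in memory.
    buf = bytearray(16)
    try:
        buf[32] = 0xFF
    except IndexError as err:
        print("out-of-bounds write rejected:", err)

The known fixes alluded to here, bounds-checked languages and safer library interfaces, trade a little performance for this kind of guaranteed failure.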
As you mentioned, we are unable to formally verify the giant operating systems we most rely on. But we can formally verify small systems, such as those needed to run reliable backup systems. Those are key to recovery, and thus to resilience.
Q: You have said that some still-popular older security technologies, such as firewalls, are becoming less effective. Can you say more?
A: Firewalls have been getting less effective. One reason is that more and more of the traffic is encrypted, and thus increasingly difficult for firewalls to classify. Another is that the entire digital environment of the enterprise has changed. Originally, firewalls were a good way to protect trusted internal systems from hostile penetration. Today the architecture of enterprises has changed considerably. Their systems are intertwined with those of suppliers, partners, and customers, as well as with devices owned by employees. Much computation happens in the cloud, not the local network. In this environment, security professionals have less ability to see and control what is happening. There is no well-defined security perimeter for a firewall to protect.
In addition, far more of the attacks rely on social engineering: phishing, whaling, ransomware, and other frauds and deceptions. Firewalls cannot stop them.
On the other hand, firewalls continue to improve. They are far more sophisticated than their early incarnations of two decades ago. They are not about to disappear.
Q: You have a reputation for taking contrarian stands on issues. This seems to result from your desire to understand whether popular claims stand on solid ground—and frequently they do not. A few years ago you challenged Metcalfe’s Law, which holds that the value of a network grows with the square of the number of nodes. What was your challenge and what came of it?
A: The argument (developed in a paper with Briscoe and Tilly) was that Metcalfe’s Law overestimated the value of a network. We proposed that a more accurate measure is usually n log n, the product of the number of nodes n and its logarithm, rather than Metcalfe’s n^2. This proposal has held up quite well. It leads to a more realistic view of the size of network effects for new technologies, and therefore of the prospects of new ventures.
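In symbols, the comparison is (with c an arbitrary constant of proportionality, and BOT denoting the Briscoe-Odlyzko-Tilly rule):

    \[
      V_{\text{Metcalfe}}(n) \sim c\,n^{2},
      \qquad
      V_{\text{BOT}}(n) \sim c\,n \log n .
    \]

The gap grows quickly: at n = 10^6 nodes, n^2 exceeds n log n by a factor of n / log n, roughly 72,000, which is why the quadratic rule can wildly inflate valuations of new ventures.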
More generally, contrary opinions do help broaden people’s horizons and prepare them for the inevitable surprises. In some cases, the dominant consensus is not just wrong, but leads to substantial waste of time and resources. That is the case with the apocalyptic claims about cybersecurity. There is much talk about the need for drastic action and for reengineering our systems from the ground up. But this talk is not matched by actions. Technologists overestimate their chances of making big impacts with their radical proposals. There is a need for improved security technologies, but when we look at the decisions that are actually being made, we see they implicitly assume that security is important but not urgent. This is likely to continue. I expect us to continue to make good progress in staying ahead of criminals and attackers without radical changes in Internet and operating system architectures.