Communications of the ACM

Computer Security: An End State?

It seems that one cannot open a newspaper without reading about yet another computer security breach. Worse yet, even sites that should be well protected, such as the CIA's Web site, have been hacked. Is this inevitable? Will matters continue to get worse? Or is there some fix in sight for the computer security problem?

Over the years, a number of fixes have been proposed. Early systems introduced the notion of a privileged state. Concepts such as the reference monitor and the trusted path have been codified. In the network sphere, some swear by encryption, while others deploy firewalls. We've even had a variety of government security standards, such as the Orange Book and the Common Criteria. But computers are still being hacked. Why haven't these solutions worked?

In point of fact, most security problems are caused by buggy software. Buggy software is the oldest unsolved problem in computer science, and I don't expect that to change in the foreseeable future. Furthermore, the various panaceas proposed in this area (structured programming, high-level languages, formal methods, n-version programming, code walk-throughs, and others) have not succeeded. There has certainly been progress; it is no longer surprising when I find that my departmental computer server has been running continuously for six months or more. But we are still a long way from perfection. And we cannot afford 25-year shakedown periods before the complex new applications we are deploying become reliable.

Put another way, we cannot have secure computer systems until we can build correct systems; we don't know how to accomplish this, and probably never will. Fred Brooks said it best in his essay "No Silver Bullet": not only are there no silver bullets now in view, the very nature of software makes it unlikely there will be any; no inventions will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware.

A corollary of this is that we cannot achieve drastic improvements in computer security. Does this mean we are doomed? I don't think so, but we will have to adjust our attitudes, our expectations, and, of course, our professional practices. The most important change is to realize and accept that our software will be buggy, will have holes, and will be insecure. Saying this is no different than saying California will experience earthquakes. We don't know precisely where or when they will strike, but we know what to do in advance: build quake-resistant structures, plan for disaster relief, and then go about our business.

We need to do the same sorts of things in the cyber world. The challenge, though, is to learn how to build hack-resistant systems. Not hack-proof systems (as I have said, that is unobtainable), but systems that can cope with the failure, under attack, of some components. Thus, although the Web site of a brokerage house might be defaced, the system architecture would be such that the account database isn't at risk. Alternatively, perhaps the account database could be compromised, but there would be sufficient backups and transaction logs so no loss of information would occur.

The second major change we must adopt is to simplify security-critical programs. In the abstract, this principle is obvious; what is less obvious is that many more programs are now security-critical. Ten years ago, who would have thought a word processor should be part of the trusted computing base? There is no way to be assured of the security of such a complex component; the only possible solutions are to split off the security-sensitive pieces into small, auditable modules, or to provide new operating system primitives that will have the same effect.
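The split described above is the classic privilege-separation pattern. A minimal sketch in Python (all names here are hypothetical, invented for illustration): the security-sensitive logic is confined to one tiny, reviewable function that alone holds the secret, while the large, bug-prone request-handling component can only ask it yes/no questions.

```python
import hmac
import hashlib

# --- Small, auditable trusted module: a few lines, easy to review ---
def verify_token(secret: bytes, message: bytes, token: str) -> bool:
    """Check an HMAC token in constant time; the only code that sees the secret."""
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

# --- Large, untrusted component: may be buggy, but holds no secrets ---
def handle_request(request: dict, checker) -> str:
    # Complex parsing and formatting logic would live here. A bug in this
    # code cannot leak the key, because the key never enters this module;
    # it only sees the checker's boolean answer.
    if checker(request["body"], request["token"]):
        return "accepted"
    return "rejected"

secret = b"example-key"  # hypothetical key, for illustration only
good_token = hmac.new(secret, b"transfer $10", hashlib.sha256).hexdigest()
result = handle_request(
    {"body": b"transfer $10", "token": good_token},
    lambda msg, tok: verify_token(secret, msg, tok),
)
print(result)  # -> accepted
```

The design choice is the point, not the HMAC: auditing effort concentrates on `verify_token`, and the word-processor-sized remainder can stay outside the trusted computing base.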

There are certainly other technical approaches. For example, we can build fault-tolerant systems out of unreliable components. Is there a way to do the same for security? While that might improve the odds, it is unlikely to provide a perfect security shield. Fault-tolerant systems deal with natural failures, and nature (Einstein reminded us) is subtle but not malicious. Hackers do their best to shift the odds and to create improbable situations they can exploit.

If we succeed at this challenge, if we can build distributed systems and a cyber society that is attack-resistant, then our networks should survive and even flourish. No one expects major cities to be 100% crime-free, but we do expect to be able to carry out our daily activities with a reasonable degree of safety. The same can and should be true of the Net. There will never be absolute safety and perfect assurance, online or off, but there never was.

The Vandals became vandals and descended to slashing car tires. Today they are hackers and deface Web sites. We must ensure they do not become Hackers and destroy our cities, or even our enjoyment of them.


Steven M. Bellovin is a researcher at AT&T Labs in Florham Park, NJ.

Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2001 ACM, Inc.

