Viewpoint

Enforce POLA on Processes to Control Viruses

Applying the Principle of Least Authority to processes, instead of just to people, limits damage from viruses and worms.

I applaud the software industry for trying to produce code less susceptible to attack, but I’m afraid the effort is futile; it can succeed only if it achieves perfection. A suitable flaw in any piece of software, whether provided by the operating system vendor, a third party, or even the user, can be used to grant an attacker all the privileges of the user running the software. Reducing the number of such flaws makes points of attack harder to find, but any flaw that is found can still likely be exploited.

The mistake software designers make is in asking "How can we prevent attacks?" when they should be asking "How can we limit the damage that can be done when an attack succeeds?" The former assumes infallibility; the latter recognizes that writing software is a human process. A practitioner’s answer to the first question is usually "By fixing our code." The answer to the second is invariably "By enforcing the Principle of Least Authority, or POLA."

Nearly all of today’s operating systems, including many from Microsoft and all flavors of Unix, enforce POLA, but only at the level of the user. This approach is a good way to prevent me from doing something I am not allowed to do but does nothing to prevent a process acting on my behalf from doing something I am allowed to do but don’t want to do. That’s exactly what a virus does; it can corrupt any file I am authorized to modify. There is no reason the process displaying my email should be able to read my tax return; there is no reason the process should be able to open a network connection to send the data to an attacker. But both are possible if POLA applies only to the user.

A better approach is to enforce POLA at the level of the process or even objects within a process. Doing so limits the damage a successful attack might do to the set of actions the process or object would need to do its job. It might destroy the message displayed by the email program, but it can’t replace a command with a Trojan horse because it lacks access to that file. It may read my email, but it can’t send it to the attacker because it can’t open a network connection.
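
To make the contrast concrete, here is a minimal sketch of object-level POLA in Python. The FileCapability class and the file path are invented for illustration, not an existing OS interface: the email viewer is handed an object that can reach exactly one file and nothing else.

```python
# A minimal sketch of object-level POLA. FileCapability and the
# paths here are hypothetical, not a real OS API.

class FileCapability:
    """Grants access to exactly one file; read-only by default."""
    def __init__(self, path, writable=False):
        self._path = path
        self._writable = writable

    def read(self):
        with open(self._path) as f:
            return f.read()

    def write(self, data):
        if not self._writable:
            raise PermissionError("capability is read-only")
        with open(self._path, "w") as f:
            f.write(data)

def view_message(message):
    # The viewer holds only this capability: no path to my tax
    # return, no socket. A hijacked viewer can harm only the one
    # message it was asked to display.
    return message.read()

# Given such a file, the viewer can display it and do nothing else:
# print(view_message(FileCapability("inbox/msg-001.txt")))
```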

Such fine-grained control of permissions sounds like a user-interface nightmare. Must we put up with thousands of nagging "May I?" prompts? Ka-Ping Yee of the University of California, Berkeley, has shown that the answer is no; user actions implicitly specify the desired permissions [5]. For example, by double-clicking on a Word document in a file list, I am telling the system that the process running Word should be allowed to read and write only this specific file. I don’t have to worry that a macro virus might overwrite the Normal.dot template and infect other documents. I don’t have to worry that the virus might open a network connection and send my document to my competitor. That’s because I never told the operating system that macros in the file should have access to the Normal.dot template or to the network.
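
A sketch of Yee's point, assuming a hypothetical launcher: an open file handle already behaves like a rudimentary capability, so the double-click gesture itself can carry the grant. The launcher and editor below are illustrative stand-ins, not a real desktop API.

```python
# Hypothetical launcher illustrating Yee's idea [5]: the user's
# double-click designates one document, and the launcher passes the
# editor an open handle to that document only. The editor never
# sees a path, so it cannot wander off to Normal.dot or the network.

def on_double_click(path):
    with open(path, "r+") as doc:
        run_editor(doc)

def run_editor(doc):
    text = doc.read()          # the one permitted read...
    doc.seek(0)
    doc.write(text.upper())    # ...and write; a stand-in for "editing"
    doc.truncate()

# Given such a document, the gesture is the entire permission grant:
# on_double_click("quarterly-report.txt")
```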

The primary flaw in today’s operating systems is that access control is ultimately based on user identity. This means that any process acting on my behalf necessarily runs with my identity—and therefore has all my privileges. In such a system, I prove my identity to the system administrator who sets up an account for me. This account maps one-to-one with my identity, embodying my access rights in the form of an entry in the access control list (ACL) of every resource I’m allowed to use that says what I’m allowed to do with each of these resources. Each process I run carries my account identity with it, and every request to the operating system presents this identity. Access is allowed or denied based on the entry associated with this identity in the ACL for the specified resource.
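
The flaw is easy to model. In the toy sketch below (the ACL table, user, and resource names are invented for illustration), every check is keyed on my identity alone, so any process running as me passes it.

```python
# Toy model of identity-based ACLs; resources and users are invented.
ACL = {
    "tax-return.xls":    {"alice": {"read", "write"}},
    "inbox/msg-001.txt": {"alice": {"read", "write"}},
}

def access_allowed(user, resource, action):
    # The system sees only the account identity the process carries.
    return action in ACL.get(resource, {}).get(user, set())

# A compromised mail viewer runs as "alice" too, so it passes the
# same check on the tax return that the spreadsheet program does.
print(access_allowed("alice", "tax-return.xls", "read"))   # True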


We should be asking "How can we limit the damage that can be done when an attack succeeds?"


An alternative approach is to separate authentication (who I am) from authorization (what I am allowed to do) from access control (whether or not to honor a request). In such a system, I prove my identity to an administrative component of the operating system and receive the set of authorizations I should have. These authorizations can be validated or rejected by the access control mechanism when I make a request. Since I have an explicit set of authorizations, I can choose which to give each process I start. So even if a virus takes complete control of the process I’m using to read my email, it cannot erase, say, all my spreadsheets.
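
A sketch of that separation, with all names and the token format invented for illustration: the authenticator hands out explicit authorizations, the user subsets them per process, and the access-control point validates tokens without ever asking who I am.

```python
# Hypothetical authn/authz/access-control split; tokens are plain
# (action, resource) pairs here purely for illustration.

def authenticate(user, password):
    # Toy authenticator: verifies identity, then returns the full
    # set of authorizations this user was granted, nothing more.
    assert (user, password) == ("alice", "secret")
    return {("read", "inbox/msg-001.txt"),
            ("read", "tax-return.xls"),
            ("write", "tax-return.xls")}

def access_allowed(tokens, action, resource):
    # Access control never learns the user's identity; it only
    # checks whether a presented token covers the request.
    return (action, resource) in tokens

everything = authenticate("alice", "secret")
# Start the mail reader with only the mail-related tokens.
mail_only = {t for t in everything if t[1].startswith("inbox/")}

print(access_allowed(mail_only, "read", "inbox/msg-001.txt"))  # True
print(access_allowed(mail_only, "write", "tax-return.xls"))    # False
```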

An operating system designed and built along these lines would be far more flexible and manageable than the ones we have today. No single authenticator is needed. For example, Hewlett-Packard corporate would be able to handle my identity as an HP employee and give me the set of authorizations it administers; HP Laboratories would be able to do the same for its authorizations. If I were to move from HP Labs to an HP product division, only those two organizations would need to update my authorizations. Moreover, the authenticator doesn’t need to interpret the authorizations; it just hands them out. Similarly, the access-control mechanism need not concern itself with who I am or how I got my authorization; it would only need to allow or deny access. Doing things this way allows the authentication and access control systems to evolve independently, while new kinds of authorization could be introduced without modifying existing systems.

Such a change in the way we all think about our systems would seem to require we throw out everything related to security, authorization, and privileges and start from scratch. Indeed, some researchers take this approach. For example, the Extremely Reliable Operating System (EROS) project, run by Jonathan Shapiro at Johns Hopkins University [2], is building an entirely new operating system that enforces POLA as an inherent part of its architecture. Less disruptive are the approaches that merely require programmers to rewrite applications in a POLA language; an example is the E language being developed by Mark Miller of Hewlett-Packard Laboratories [1], while Marc Stiegler of Combex, Inc., has written a fully functional desktop in E with all the desired properties [3]. Miller and Stiegler have together produced a prototype Web browser under DARPA contract that safely views Web pages that would otherwise infect a user’s machine if any other browser were used [4].

Less disruptive measures are needed if programmers are to maintain compatibility with existing software, but they must be willing to forgo some of the finer points of POLA. Attacks almost always involve some interaction with the operating system, so system designers can do most of what’s needed by filtering all kernel calls. The filtering, which need not be done in the operating system, can be done by a user-level process if the operating system has a "trampoline" to bounce kernel calls back up. Most flavors of Unix provide an appropriate interface, and a relatively small change to Windows would add this functionality. The Windows file system redirector already provides much of what is needed. Alternatively, the user could run each process in a virtual machine and filter kernel calls at the interface to the underlying operating system kernel.
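
The sketch below models the trampoline idea in miniature; the policy table and call names are invented, and a real deployment would sit on an OS mechanism (such as ptrace or seccomp on Linux) rather than in Python.

```python
# Toy user-level kernel-call filter; the processes, calls, and
# policy table are all invented for illustration.

POLICY = {
    "mail-viewer": {("open", "inbox/msg-001.txt")},
}

def trampoline(process, syscall, arg):
    # The kernel bounces each call up here; the filter applies the
    # per-process policy before the call is allowed to proceed.
    if (syscall, arg) not in POLICY.get(process, set()):
        raise PermissionError(f"{process}: {syscall}({arg!r}) denied")
    return f"{process}: {syscall}({arg!r}) allowed"

print(trampoline("mail-viewer", "open", "inbox/msg-001.txt"))
# trampoline("mail-viewer", "connect", "evil.example.com")  # raises
```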

Even less intrusive in terms of configuring a system is to dynamically create an account with precisely the permissions the user wants the process to have. Unfortunately, without changes to the process launcher, the user has no way to tell the system what rights the new process should have. Solving this problem is worth the effort in terms of overall system security, because it would prevent a worm from propagating and a virus from installing a backdoor.
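
As a sketch of what such a launcher could do (the account table and helper functions here are hypothetical, standing in for exactly the OS support noted above as missing):

```python
# Hypothetical per-launch accounts: each process gets a throwaway
# identity holding only the rights the user intends it to have.
import uuid

ACCOUNTS = {}   # account id -> set of (resource, action) rights

def create_temp_account(rights):
    account = "tmp-" + uuid.uuid4().hex[:8]
    ACCOUNTS[account] = set(rights)
    return account

def run_as(account, task, *args):
    # The task can exercise only its own account's rights.
    return task(ACCOUNTS[account], *args)

def mail_viewer(rights, path):
    if (path, "read") not in rights:
        raise PermissionError(path)
    return "displaying " + path

acct = create_temp_account({("inbox/msg-001.txt", "read")})
try:
    print(run_as(acct, mail_viewer, "inbox/msg-001.txt"))
finally:
    del ACCOUNTS[acct]   # the account dies with the process
```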

Although reducing the number of exploitable flaws in software has benefits, it still will not eliminate successful attacks. Far better is to minimize the exploitability of the flaws that inevitably will exist. Only by remembering that system administrators give privileges to people, but the operating system enforces access control on processes, will we be able to design software less vulnerable than what we use today.

References

    1. Miller, M. The E Language; see erights.org.

    2. Shapiro, J., Smith, J., and Farber, D. EROS: A fast capability system. In Proceedings of the 17th ACM Symposium on Operating Systems Principles (Kiawah Island Resort, Charleston, SC, Dec.). ACM Press, New York, 1999; see www.eros-os.org.

    3. Stiegler, M. CapDesk; see www.combex.com/tech/edesk.html.

    4. Wagner, D. and Tribble, D. A Security Analysis of the Combex DarpaBrowser Architecture; see www.combex.com/papers/darpa-review/security-review.html.

    5. Yee, K.-P. User interaction design for secure systems. In Proceedings of the 4th International Conference on Information and Communications Security (Singapore, Dec. 9–12). Springer, 2002; see zesty.ca/sid/.
