
Inside Risks: Cyber Underwriters Lab

Underwriters Laboratories (UL) is an independent testing organization whose origins date to 1893, when William Henry Merrill was called in to find out why the Palace of Electricity at the Columbian Exposition in Chicago kept catching on fire (which is not the best way to tout the wonders of electricity). After making the exhibit safe, he realized he had a business model on his hands. Eventually, if your electrical equipment wasn’t UL-certified, you couldn’t get insurance.

Today, UL rates all kinds of equipment, not just electrical. Safes, for example, are rated based on time to crack and strength of materials. A "TL-15" rating means the safe is secure against a burglar who is limited to safecracking tools and 15 minutes’ working time. These ratings are not theoretical: UL employs actual hotshot safecrackers, who take actual safes and try to crack them. Applying this sort of thinking to computer networks—firewalls, operating systems, Web servers—is a natural idea. And the newly formed Center for Internet Security (no relation to UL) plans to implement it.

This is not a good idea, not now, and possibly not ever. First, network security is too much of a moving target. Safes are easy; safecracking tools don’t change much. Not so with the Internet. There are always new vulnerabilities, new attacks, new countermeasures; any rating is likely to become obsolete within months, if not weeks.

Second, network security is much too difficult to test. Modern software is obscenely complex: there is an enormous number of features, configurations, and implementations. And then there are interactions between different products, different vendors, and different networks. Testing any reasonably sized software product would cost millions of dollars, and wouldn’t guarantee anything at the end. Testing is inherently incomplete. If you updated the product, you’d have to test it all over again.

Third, how would we make security ratings meaningful? Intuitively, I know what it means to have a safe rated at 30 minutes and another rated at an hour. But computer attacks don’t take time in the same way that safecracking does. The Center for Internet Security talks about a rating from 1 to 10. What does a 9 mean? What does a 3 mean? How can ratings be anything other than binary: either there is a vulnerability or there isn’t?

The moving-target problem particularly exacerbates this issue. Imagine a server with a 10 rating; there are no known weaknesses. Someone publishes a single vulnerability that allows an attacker to easily break in. Once a sophisticated attack has been discovered, the effort to replicate it is effectively zero. What is the server’s rating then? 9? 1? How does the Center re-rate the server once it is updated? How are users notified of new ratings? Do different patch levels have different ratings?

Fourth, how should a rating address context? Network components would be certified in isolation, but deployed in a complex interacting environment. Ratings cannot take into account all possible operating environments and interactions. It is common to have several individual "secure" components completely fail a security requirement when they are forced to interact with one another.

And fifth, how does this concept combine with security practices? Today the biggest problem with firewalls is not how they’re built, but how they’re configured. How does a security rating take that into account, along with other people problems: users naively executing email attachments, or resetting passwords when a stranger calls and asks them to?
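
To make the configuration point concrete, here is a minimal sketch of the first-match rule evaluation most firewalls use, written in Python rather than any real product's rule language; the rule sets, addresses, and helper names are illustrative assumptions, not anyone's actual policy. The product behaves exactly as built, yet one careless catch-all entry is enough to defeat the policy the administrator intended.

```python
# Illustrative sketch only: first-match evaluation of firewall-style rules,
# not the syntax or API of any real firewall product.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional


@dataclass
class Rule:
    action: str          # "allow" or "deny"
    source: str          # source CIDR block the rule matches
    port: Optional[int]  # destination port, or None for any port


def evaluate(rules, src_ip, dst_port):
    """Return the action of the first matching rule; default-deny otherwise."""
    for rule in rules:
        if ip_address(src_ip) in ip_network(rule.source) and (
            rule.port is None or rule.port == dst_port
        ):
            return rule.action
    return "deny"


# Intended policy: SSH only, and only from the internal network.
intended = [
    Rule("allow", "10.0.0.0/8", 22),
    Rule("deny", "0.0.0.0/0", None),
]

# The same firewall with a "temporary" catch-all allow added for convenience.
misconfigured = [
    Rule("allow", "0.0.0.0/0", None),   # the careless entry
    Rule("allow", "10.0.0.0/8", 22),
    Rule("deny", "0.0.0.0/0", None),
]

print(evaluate(intended, "203.0.113.7", 22))        # deny
print(evaluate(misconfigured, "203.0.113.7", 22))   # allow
```

No rating scheme captures the difference between those two configurations, because the difference is not in the product at all.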

This is not to say that there’s no hope. Eventually, the insurance industry will drive network security, and then some sort of independent testing will be inevitable. But providing a rating, or a seal of approval, doesn’t have any meaning right now.

Ideas like this are part of the Citadel model of security, as opposed to the Insurance model. The Citadel model basically says, "If you have this stuff and do these things, then you’ll be safe." The Insurance model says, "Inevitably things will go wrong, so you need to plan for what happens when they do." In theory, the Citadel model is a much better model than the pessimistic, fatalistic Insurance model. But in practice, no one has ever built a citadel that is both functional and dependable.

The Center for Internet Security has the potential to become yet another "extort-a-standard" body, which charges companies for a seal of approval. This is not to disparage the motives of those behind the Center; you can be an ethical extortionist with completely honorable intentions. What makes it extortion is the penalty for not paying: if you don’t have the "Security Seal of Approval," then (tsk, tsk) you’re just not concerned about security.
