We love our myths. In fact, we get downright grumpy when they are challenged. Unfortunately, in order to make progress, we sometimes need to overcome our dearly held perceptions, confront reality, and try to come up with new approaches grounded in that reality. In computer security, we don’t do a particularly good job of that. In fact, we often fall into a deadly cycle I like to call "If something doesn’t work, try it harder." Rather than rethinking our approaches, we try to squeeze additional efficiency out of methods whose effectiveness was questionable in the first place.
Consider the problem of computer viruses. First-generation anti-virus products required users to periodically update their knowledge bases, so viruses got past them fairly often. Users, we discovered, want to install software and forget about it. In response, the current generation of anti-virus products updates itself automatically over the now-ubiquitous Internet connection. That overcame the problem of user forgetfulness, but did it solve the virus problem? If something doesn’t work, try it harder. I’m not saying that anti-virus products are bad, but the overall approach embodied in this model is a "fail-open" design: the default behavior of virtually all desktop operating systems favors the user’s ability to execute any software, regardless of its provenance. You might think that "fail-closed" would make more sense. Why haven’t we fully explored the option of having the system run only code that has been explicitly authorized and refuse everything else? It would be a bit of a paradigm shift, but it’s fundamentally a more solid model. In fact, some early anti-virus products did implement fail-closed execution, but they weren’t popular because they were too restrictive.
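To make the fail-closed idea concrete, here is a minimal sketch of what such an execution policy looks like: software is identified by a cryptographic hash, anything on an administrator-maintained allow-list runs, and everything else is refused by default. This is purely illustrative; the run_program helper and the allow-list contents are hypothetical, not any particular product’s mechanism.

    # Illustrative sketch of fail-closed execution: run only software whose
    # hash appears on an allow-list; refuse everything else by default.
    import hashlib
    import subprocess
    import sys

    AUTHORIZED_HASHES = {
        # SHA-256 digests of binaries an administrator has approved.
        # (Placeholder value; a real deployment would manage these centrally.)
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_authorized(path: str) -> bool:
        """Fail closed: anything not explicitly allow-listed is refused."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in AUTHORIZED_HASHES

    def run_program(path: str, *args: str) -> None:
        if not is_authorized(path):
            raise PermissionError(f"{path} is not on the authorized-software list")
        subprocess.run([path, *args], check=True)

    if __name__ == "__main__":
        run_program(sys.argv[1], *sys.argv[2:])

Note where the burden falls: nothing new runs until someone decides it should, which is exactly the inconvenience that made early fail-closed products unpopular.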
Then there’s the myth of the network firewall. First-generation firewalls were extremely restrictive devices that passed only a few kinds of traffic, generally only what could safely be transferred across a network boundary. Customers didn’t like them because they were too restrictive, and replaced them with more permissive devices that allowed fat streams of HTTP traffic back and forth. And for the last 10 years, it’s been open season on corporate networks. Now a new generation of content-filtering application gateways (firewalls by any other name are still firewalls) is coming into vogue, reimplementing the original fail-closed firewall design. First-generation firewalls may have been too restrictive, but, as it turns out, they were restrictive enough.
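The same fail-closed shape shows up in the original firewall design. The toy decision function below is only a sketch of that policy, not how any real firewall is implemented: a short enumeration of traffic that is explicitly permitted, and a default action of dropping everything else, including the "fat streams of HTTP" the later, permissive devices waved through.

    # Illustrative sketch of a default-deny ("fail-closed") packet filter.
    # Real firewalls work on full rulesets and connection state; this toy
    # version only shows the shape of the policy.
    from typing import NamedTuple

    class Packet(NamedTuple):
        protocol: str   # e.g., "tcp" or "udp"
        dst_port: int

    # Only traffic judged safe to cross the boundary is enumerated here.
    ALLOWED = {
        ("tcp", 25),   # SMTP to the mail relay
        ("tcp", 53),   # DNS
        ("udp", 53),
    }

    def decide(packet: Packet) -> str:
        """Accept only explicitly permitted traffic; drop everything else."""
        if (packet.protocol, packet.dst_port) in ALLOWED:
            return "accept"
        return "drop"   # the default action, not an afterthought

    print(decide(Packet("tcp", 25)))   # accept
    print(decide(Packet("tcp", 80)))   # drop: HTTP is not on the allow-list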
However, in the meantime we do ourselves incredible damage by believing in the myths of goodness we attach to these products and approaches. Thinking "if we just make it a little faster" or "if we just add a few more rules and signatures" blinds us to the fact that we’ve chosen to walk a fail-open path in a world where safety comes from failing closed. It’s not just a matter of tweaking our firewall rules; some networks should not have firewalls at all, because they should not be connected to other networks under any circumstances. It’s not a matter of trying harder; sometimes it’s a matter of not trying at all. As an industry we invest massive amounts of time and effort in patching security flaws in Internet-facing applications to keep hackers at bay; perhaps it’s time to consider using server applications designed for security on our Internet-facing systems, instead of convenient off-the-shelf shovelware. No amount of patching can turn software that was never designed to be secure into secure software, yet we invest so much time and effort in trying that we could easily have amortized the up-front cost of buying the right tools for the job in the first place.
So what should we do? When you run across a problem area where you see generations of incremental improvements, question the underlying assumptions of the approaches in use. Don’t just pursue the next incremental improvement to the current approach if it distracts you from solving the underlying problem. As I look back at the computer security industry and all the products that have come and gone within it, I realize most of them are Band-Aids that promise us a myth of open access with absolute security. I’m beginning to realize that a better approach is to figure out the fundamentals first, then address the human factors that make doing things right unpalatable for our end users. Most of the interesting problems in computer security happen because doing it right is inconvenient. That, perhaps, is the most damaging myth of security: that security has to be inconvenient. Let’s explode that myth!