The information security literature is filled with risk pathologies: heuristics we use to help us evaluate risks. I've collected them from many different sources.
When you look over the list of exaggerated and downplayed risks in the table here, the most remarkable thing is how reasonable so many of them seem. This makes sense for two reasons. One, our perceptions of risk are deeply ingrained in our brains, the result of millions of years of evolution. And two, our perceptions of risk are generally pretty good, and are what have kept us alive and reproducing during those millions of years of evolution.
This is an important point. A general intuition about risks is central to life on this planet. Imagine a rabbit sitting in a field, eating clover. Suddenly, the rabbit notices a fox. The rabbit must then make a risk evaluation: stay or flee? The rabbits that are good at making these evaluations are going to reproduce, while those that are bad at them are either going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at evaluating risks.
And yet we seem hopelessly bad at making these kinds of risk evaluations. We exaggerate some risks while minimizing others. We misunderstand or mischaracterize risks. Even simple security we get wrong, wrong, wrong, again and again. It's a seeming paradox.
The truth is that we are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. However, the environment of New York City in 2007 is different from Kenya circa 100,000 BC. And so our perception of risk diverges from the reality of risk, and we get things wrong.
When our risk perceptions fail today, it is usually because of new situations that have occurred at a faster rate than evolution: situations that exist in the world of 2007, but didn’t in the world of 100,000 BC. Like a squirrel whose predator-evasion techniques fail when confronted with an automobile, or a passenger pigeon who finds that evolution prepared him to survive the hawk but not the shotgun, our innate capabilities to deal with risk can fail when confronted with such things as modern human society, technology, and the media. And, even worse, they can be made to fail by others—politicians, marketers, and so on—who exploit our natural failures for their own gain.
This topic is explored in greater detail in my essay, “The Psychology of Security,” available at www.schneier.com/essay-155.html.