Communications of the ACM


Yet Another Major Data Breach, But Will This Time Be Any Different?

Carnegie Mellon Associate Professor Jason Hong

I've been working on this blog entry about data breaches for a while, and have to keep updating it because every few weeks there is a new "biggest hack ever" reported in the news. It started with Target, then eBay, then Home Depot, then JP Morgan Chase, then Celebgate, and then K-Mart. Now, reports are coming out that Sony has had up to 100 TB of corporate data stolen, possibly including internal memos, health records, salaries, and videos of unreleased movies.

These data breaches are software engineering's equivalent of the famous Tacoma Narrows bridge, which spectacularly collapsed in 1940. The worst part is, these data breaches are only going to get worse. The next big attack is already happening right now. It just hasn't been publicized yet.

I think everyone agrees that these data breaches are very, very bad. What surprises me is our community's anemic response. For example, after the Tacoma Narrows bridge collapse, a federal commission was set up to investigate the cause of the failure. It also led to a great deal of research into aerodynamics and resonance. Perhaps most importantly, the collapse has been seared into the minds of every engineering student as an example of a massive failure, but also an example of their responsibility to society to make sure they get things right.

But outside of the cybersecurity experts tasked with fixing things, these disasters haven't resonated in the same way with those of us in academia, industry, or public policy. There doesn't seem to be the same sense of urgency to dramatically improve how we build software, research new kinds of processes and tools for cybersecurity, or educate sysadmins and students. We just don't seem to be learning many lessons from these repeated catastrophic failures.

While there are many explanations for our community's lack of response, one thing hamstringing us is a near-total lack of information about how the breaches actually happened. The Sony breach will probably follow the same pattern as nearly every other breach: no detailed information will be released to the public, with only rumors and whispers shared in the hallways of various conferences. And, of course, we're often asked not to repeat what we've been told.

Having been part of the cybersecurity community for the past decade, both in research and industry, I can fully understand the reluctance to share information. First, it takes a lot of time to analyze and understand what happened, time that could be spent on limiting damage and preventing future attacks. Second, no one wants to disclose what defenses already exist and what new defenses have been put into place, as it may give information to a new wave of attackers. This kind of security through obscurity does make sense. Third, these breaches often involve proprietary information about what capabilities an organization has and how they operate. Fourth, it's just painful and embarrassing to keep talking about a highly public failure at your organization. 

I can sympathize with all of these reasons. However, the lack of comprehensive information ultimately hurts all of us. While there are several good annual reports on security (such as the Verizon Data Breach Investigations Report, Cisco's Annual Security Report, and Microsoft's Security Intelligence Report), these reports primarily offer general statistics and trends about cybersecurity, and don't offer enough specific details or insights about attacks.

How can we design more effective countermeasures, organize better responses, or train the next generation of security professionals, if we don't have a clear idea of what the attack vectors are, what strategies attackers are using, and what kinds of defenses are and aren't working?

To give some examples, how many of these breaches were due to insider attacks? How many attacks bypassed automated defenses? How many started with social engineering attacks? How many were due to poor programming practices or misconfiguration? Why weren't the attacks detected earlier? Perhaps most importantly, what should we, both as individuals and organizations, be doing? What kinds of end-user training do we need, so individual employees can protect themselves and their organization? What kinds of automated countermeasures should organizations be putting into place? What kinds of infrastructure should we all be contributing to, to detect and respond to these kinds of attacks better?

One good example of what we should be aiming for is Mandiant's report on APT1, which provided detailed analysis of one specific hacker group's attacks on a large number of organizations. The report included concrete examples of emails used for social engineering, analysis of some of the malware used, and specific strategies that the group used.

In fact, I'd even go a step further and suggest the creation of something like the US National Transportation Safety Board, which investigates major accidents on our railroads, highways, and aviation systems. The goal would not be to assign blame, but rather to determine the probable causes of a breach, evaluate the effectiveness of procedures and cybersecurity systems, and offer recommendations.

There actually are a lot of cybersecurity organizations out there, both government and industry, including NCFTA, NCSA, APWG, US-CERT, IC3, and NSA's Information Assurance division, though these groups have different missions and incentive structures, and don't tend to publish reports about specific incidents. What I'm suggesting is an organization that could either investigate the causes of massive failures, or aggregate the results of several linked attacks so as to offer more insight about the patterns of attackers (as with the Mandiant APT1 report mentioned earlier). Most importantly, this organization could offer somewhat sanitized but public reports that give more insight into how these incidents happened, so that we as a community can learn from our mistakes and figure out better ways of moving forward.

Special thanks to Idris Hsi for the connection between cybersecurity failures and the Tacoma Narrows bridge.


Cassidy Alan

I think yet another federal security bureaucracy would be a mistake. Among other things, it would be subsumed into the general D.C. secret security infrastructure, due to its cybersecurity nature, and we already have 16 federal intelligence agencies (that we know about publicly). They already step on each other's toes.

And they never take blame for anything. One agency had flagged four bad guys coming into the USA, for example; the other agency ignored its information, and we paid on that famous September day for giving up a little bit of freedom for safety.

Those agencies, we now know, also stop at nothing to sweep up all our private lives into their treasure chest. The giant sucking sound Ross Perot famously talked about has turned out to be the flood of data going into the giant data vacuum run from D.C. No thanks.

Meantime, no government manager can be incentivized the way Sony, Target, Home Depot, or any other private enterprise can, to "clean up their act". Government bureaucrats that fail get promoted; security officers at private companies get fired. And the ownership that plays loose with your credit card numbers loses its meat-and-potatoes revenue really fast.

There are six times as many private security personnel as government ones. The private sector already does not trust governments to prevent other kinds of security breaches. Social Security data already spreads like weeds.
