In cyberspace it’s easy to get away with criminal fraud, easy to steal corporate intellectual property, and easy to penetrate governmental networks. Last spring the new Commander of USCYBERCOM, NSA’s General Keith Alexander, acknowledged for the first time that even U.S. classified networks have been penetrated.2 Not only do we fail to catch most fraud artists, IP thieves, and cyber spies—we don’t even know who most of them are. Yet every significant public and private activity—economic, social, governmental, military—depends on the security of electronic systems. Why has so little happened in 20 years to alter the fundamental vulnerability of these systems? If you’re sure this insecurity is either (a) a hoax or (b) a highly desirable form of anarchy, you can skip the rest of this column.
Presidential Directives to Fix This Problem emerge dramatically like clockwork from the White House echo chamber, chronicling a history of executive torpor. One of the following statements was made in a report to President Obama in 2009, the other by President George H.W. Bush in 1990. Guess which is which:
"Telecommunications and information processing systems are highly susceptible to interception, unauthorized electronic access, and related forms of technical exploitation, as well as other dimensions of the foreign intelligence threat."
"The architecture of the Nation’s digital infrastructure, based largely on the Internet, is not secure or resilient. Without major advances in the security of these systems or significant change in how they are constructed or operated, it is doubtful that the United States can protect itself from the growing threat of cybercrime and state-sponsored intrusions and operations."
Actually, it doesn’t much matter which is which.a In between, for the sake of nonpartisan continuity, President Clinton warned of the insecurities created by cyber-based systems and directed in 1998 that "no later than five years from today the United States shall have achieved and shall maintain the ability to protect the nation’s critical infrastructures from intentional acts that would significantly diminish" our security.6 Five years later would have been 2003.
In 2003, as if in a repeat performance of a bad play, the second President Bush stated that his cybersecurity objectives were to "[p]revent cyber attacks against America’s critical infrastructure; [r]educe national vulnerability to cyber attacks; and [m]inimize damage and recovery time from cyber attacks that do occur."7
These Presidential pronouncements will be of interest chiefly to historians and to Congressional investigators who, in the aftermath of a disaster that we can only hope will be relatively minor, will be shocked, shocked to learn that the nation was electronically naked.
Current efforts in Washington to deal with cyber insecurity are promising—but so was Sisyphus’ fourth or fifth trip up the hill. These efforts are moving at a bureaucratically feverish pitch—which is to say, slowly—and so far they have produced nothing but more declarations of urgency and more paper. Why?
Lawsuits and Markets
Change in the U.S. is driven by three things: liability, market demand, and regulatory (usually federal) action. The role and weight of these factors vary in other countries, but the U.S. experience may nevertheless be instructive transnationally since most of the world’s intellectual property is stored in the U.S., and the rest of the world perceives U.S. networks as more secure than we do.4 So let’s examine each of these three factors.
Liability has been a virtually nonexistent factor in achieving greater Internet security. This may be surprising until you ask: Liability for what, and who should bear it? Software licenses are enforceable, whether shrink-wrapped or negotiated, and they nearly always limit the manufacturer’s liability to the cost of the software. So suing the software manufacturer for allegedly lousy security would not be worth the money and effort expended. What are the damages, say, from finding your computer is an enslaved member of a botnet run out of Russia or Ukraine? And how do you prove the problem was caused by the software rather than your own sloppy online behavior?
Asking Congress to make software manufacturers liable for defects would be asking for trouble: All software is defective, because it’s so astoundingly complicated that even the best of it hides surprises. Deciding what level of imperfection is acceptable is not a task you want your Congressional representative to perform. Any such legislation would probably drive some creative developers out of the market. It would also slow down software development—which would not be all bad if it led to higher security. But the general public has little or no understanding of the vulnerabilities inherent in poorly developed applications. On the contrary, the public clamors for rapidly developed apps with lots of bells and whistles, so an equipment vendor that wants to control this proliferation of vulnerabilities in the name of security is in a difficult position.
Banks, merchants, and other holders of personal information do face liability for data breaches, and some have paid substantial sums for data losses under state and federal statutes granting liquidated damages for breaches. In one of the best-known cases, Heartland Payment Systems may end up paying approximately $100 million as a result of a major breach, not to mention millions more in legal fees. But the defendants in such cases are buyers, not makers and designers, of the hardware and software whose deficiencies create many (but not all) cyber insecurities. Liability presumably makes these companies somewhat more vigilant in their business practices, but it doesn’t make hardware and software more secure.
Many major banks and other companies already know they have been persistently penetrated by highly skilled, stealthy, and anonymous adversaries, very likely including foreign intelligence services and their surrogates. These firms spend millions fending off attacks and cleaning their systems, yet no forensic expert can honestly tell them that all advanced persistent intrusions have been defeated. (If you have an expert who will say so, fire him right away.)
In an effective liability regime, insurers play an important role in raising standards because they tie premiums to good practices. Good automobile drivers, for example, pay less for car insurance. Without a liability dynamic, however, insurers play virtually no role in raising cybersecurity standards.
If liability hasn’t made cyberspace more secure, what about market demand? The simple answer is that the consuming public buys on price and has not been willing to pay for more secure software. In some cases the aftermath of identity theft is an ordeal. In most instances of credit card fraud, however, the bank absorbs 100% of the loss, so its customers have little incentive to spend more for security. (In Britain, where the customer rather than the bank usually pays, the situation is arguably worse because banks are in a better position than customers to impose higher security requirements.) Most companies also buy on price, especially in the current economic downturn.
Unfortunately we don’t know whether consumers or corporate customers would pay more for security if they knew the relative insecurities of the products on the market. As J. Alex Halderman of the University of Michigan recently noted, "most customers don’t have enough information to accurately gauge software quality, so secure software and insecure software tend to sell for about the same price."3 This could be fixed, but doing so would require agreed metrics for judging products and either the systematic disclosure of insecurities or a widely accepted testing and evaluation service that enjoyed the public’s confidence. Consumer Reports plays this role for automobiles and many other consumer products, and it wields enormous power. The same day Consumer Reports issued a "Don’t buy" recommendation for the 2010 Lexus GX 460, Toyota took the vehicle off the market. If the engineering and computer science professions could organize a software security laboratory along the lines of Consumer Reports, it would be a public service.
Federal Action
Absent market- or liability-driven improvement, there are eight steps the U.S. federal government could take to improve Internet security, and none of them would involve creating a new bureaucracy or intrusive regulation:
- Use the government’s enormous purchasing power to require higher security standards of its vendors. These standards would deal, for example, with verifiable software and firmware, means of authentication, fault tolerance, and a uniform vocabulary and taxonomy across the government in purchasing and evaluation. The Federal Acquisition Regulations, guided by the National Institute of Standards and Technology, could drive higher security into the entire market by ensuring federal demand for better products.
- Amend the Privacy Act to make it clear that Internet Service Providers (ISPs) must disclose to one another and to their customers when a customer’s computer has become part of a botnet, regardless of the ISP’s customer contract, and may disclose that fact to a party that is not its own customer. ISPs may complain that such a service should be elective, at a price. That’s equivalent to arguing that cars should be allowed on the highway without brakes, lights, and seatbelts. This requirement would generate significant remedial business.
- Define behaviors that would permit ISPs to block or sequester traffic from botnet-controlled addresses—not merely from the botnet’s command-and-control center.
- Forbid federal agencies from doing business with any ISP that is a hospitable host for botnets, and publicize the list of such companies.
- Require bond issuers that are subject to the jurisdiction of the Federal Energy Regulatory Commission to disclose in the "Risk Factors" section of their prospectuses whether the command-and-control features of their supervisory control and data acquisition (SCADA) networks are connected to the Internet or another publicly accessible network. Issuers would scream about this, even though a recent McAfee study plainly indicates that many of those that follow this risky practice think it creates an "unresolved security issue."1 SCADA networks were built as isolated, limited-access systems. Allowing them to be controlled via public networks is rash. This point was driven home forcefully this summer by the discovery of the "Stuxnet" computer worm, which was specifically designed to attack SCADA systems.4 Yet public utilities show no sign of upgrading their typically primitive systems.
- Increase support for research into attribution techniques, verifiable software and firmware, and the benefits of moving more security functions into hardware.
- Definitively remove the antitrust concern when U.S.-based firms collaborate on researching, developing, or implementing security functions.
- Engage like-minded governments to create international authorities to take down botnets and make naming-and-addressing protocols more difficult to spoof.
Political Will
These practical steps would not solve every problem of cyber insecurity, but they would dramatically improve the situation. Nor would they involve government snooping, reengineering the Internet, or other grandiose schemes. They would require a clear-headed understanding of the risks to privacy, intellectual property, and national security when an entire society relies for its commercial, governmental, and military functions on a decades-old information system designed for a small number of university and government researchers.
Translating repeated diagnoses of insecurity into effective treatment would also require the political will to marshal the resources and effort necessary to do something about it. The Bush Administration came by that will too late in the game, and the Obama Administration has yet to acquire it. After his inauguration, Obama dithered for nine months over the package of excellent recommendations put on his desk by a nonpolitical team of civil servants from several departments and agencies. The Administration’s lack of interest was palpable; its hands are full with a war, health care, and a bad economy. In difficult economic times the President naturally prefers invisible risk to visible expense and is understandably reluctant to increase costs for business. In the best of times cross-departmental (or cross-ministerial) governance would be extremely difficult—and not just in the U.S. Doing it well requires an interdepartmental organ of directive power that can muscle entrenched and often parochial bureaucracies, and in the cyber arena, we simply don’t have it. The media, which never tires of the cliché, told us we were getting a cyber "czar," but the newly created cyber "Coordinator" actually has no directive power and has yet to prove his value in coordinating, let alone governing, the many departments and agencies with an interest in electronic networks.
And so cyber-enabled crime and political and economic espionage continue apace, and the risk of infrastructure failure mounts. As for me, I’m already drafting the next Presidential Directive. It sounds a lot like the last one.