Cyber protection has long been a concern; recall the Morris worm in 1988, the widespread use of the commons that followed the introduction of commercial email and Web browsers in the early 1990s, and the U.S. Presidential Commission on Critical Infrastructure Protection (PCCIP) in 1996.11 A Google search yields more than 43 million articles dealing with the security of computers and networks. This much attention, without dependable security for users, leads one to wonder why the problem persists. Are computer vulnerabilities growing faster than measures to reduce them? Perhaps the problem is not purely a technical matter but has more to do with users. Carelessness in protecting oneself, tolerance of bug-filled software, vendors selling inadequately tested products, and the unappreciated complexity of network connectivity have all led to today’s abuse of the commons.
Key Insights
- Top-down processes (such as regulation, national strategies, federal funding, and international agreements) protecting users of the cyber commons operate far more slowly than offensive and defensive technologies evolve.
- Bottom-up processes (such as the affinity groups that characterize social networks) take advantage of the character of public networks, offering additional defensive options to protect them from abuse.
- These processes mimic how the ARPANET was created, contribute to network evolution, and share the concept behind the IETF and other volunteer network mechanisms.
However, among potential remedies, current U.S. government-led approaches appear to address threats piecemeal, fixing those that demand immediate attention. Since this approach is not keeping pace, it may be useful to rethink it and ask whether there are strategic directions more likely to deliver benefits.
Protecting users of the cyber commons, nationally or globally, has both top-down and bottom-up aspects. Calls for government action to “protect cyberspace” relate to top-down processes that, while identifying drivers of policy, wash out lower-level detail. That is the way governments think and what people have come to expect from them. Protecting a national commons would appear little different from other aspects of national security, which is clearly a government responsibility. In the U.S., under the recently organized Defense Department Cyber Command, the National Security Agency has been designated as the U.S. cyber force,4 including both the 24th “Air Force” and the 10th “Fleet,” in quotes because neither is a conventional flying or floating combat unit; both consist instead of people at computers, the newest element of net-centric warfare.
Bottom-up processes are equally important; they are what “really happens,” the way processes actually work, rich in detail but leaving some major drivers of events invisible. The difference between the two perspectives—top-down and bottom-up—is the same as that between legislation and how complex implementation rules perform in practice; complete descriptions include elements of both.
Threat Reduction
First, what threats against whom should be reduced? Starting with the universe of all users of the cyber commons worldwide, illustrative groups can be identified that share common security requirements. As sovereign states, governments have considerable latitude and resources. Infrastructure operators and communication carriers are together a particularly powerful group when they feel they have liability, responsibility, and authority. State, county, and local governments have responsibility but often lack resources, financial or human. Large private organizations have significant financial and human resources if they define the defense of the cyber commons as a sufficiently high priority (see Figure 1).
While government programs are easily justified when targeting specific sets of users for particular purposes, they leave the rest of us to fend for ourselves.
A recent National Research Council committee report examined a number of research areas relating to cybersecurity,5 offering a cybersecurity “bill of rights” that defines these user expectations:
- Availability of system and network resources to legitimate users;
- Convenient recovery from successful attacks;
- Control over and knowledge of one’s computing environment;
- Confidentiality of stored information and information exchange;
- Authentication and provenance of information;
- The technological ability to exercise fine-grain control over the flow of information in and through systems;
- Security in using computing directly or indirectly in important applications, including financial, health care, and electoral transactions, as well as in real-time remote control of devices that interact with physical processes;
- The ability to access any source of information safely;
- Awareness of the security being delivered by a system or component; and
- Redress for security problems caused by another party.
While one might complain, the typical user is far from enjoying these “rights” in the cyber domain, and how to achieve them in a global commons is by no means obvious. They are perhaps more like stars to navigate by than places one can expect to reach.
Top-Down Perspective
Possible defensive actions cover at least four dimensions: mandatory protection of cyber domains essential to economic health and quality of life; national strategies, plans, and programs helping coordinate protection of the commons; international legal regimes and their supporting international structures, encouraging and assisting defense of the commons; and technology to warn of, prevent, and thwart misuse of the commons.
There is no silver bullet. The amount and types of protection vary with jurisdiction and time, as adversaries and technology change and attackers refine their attacks and redefine their goals and targets.
Mandatory protection. In the U.S., regulation of private domestic activities is a function of each of the 50 states, intended to enhance public safety, increase reliability, maintain law and order, and protect citizens from exploitation. Government-owned infrastructure should be subject to the same regulation, but governments regulate themselves and thus have some flexibility compared with private operators. Those aspects of the infrastructure on which the public depends require mandates through the agencies responsible for their oversight.
Some see regulation as a restriction on the efficient operation of markets, possibly foreclosing potentially beneficial options. These concerns notwithstanding, there is general recognition that critical infrastructure services merit some degree of regulation, protecting against inequitable access to service and the abuse of what can be natural monopolies. A central issue is how to define “critical” and how much regulation is enough. Deciding what to protect also defines what not to protect; by default, the latter is left to market forces. The decision of what to regulate should hinge on allocating resources to provide the greatest protection to the greatest number of people, which requires analyses of users, their relevance to national goals, and the interdependencies among their needs. What we currently have in the U.S. is mandated protection of central infrastructures and national-security assets, with the rest dependent on market forces to balance security, cost, and convenience.
In 1997, the PCCIP identified eight critical infrastructures, and, in preparing for the expected disruption of computers at the beginning of 2000, the U.K. identified 11 critical infrastructures as central to the operation of society2; the European Commission also identified 11, though they differed from the other lists.1 If one looks for the infrastructures common to such lists, factoring in estimates of their interdependence, three emerge: telecommunications, electric power, and transfer of funds.
Infrastructures depend on the reliable transmission of information for their operation. If one is to protect any part of the cyber commons, the command-and-control mechanisms of critical infrastructures are part of what should be protected.
An example of how to protect critical infrastructure is provided by the Federal Energy Regulatory Commission (FERC), the regulator of the U.S. electric-power system, which consults on and coordinates its regulatory actions with industry groups, including the North American Electric Reliability Council (NERC). The FERC Final Rule, issued in 2008 after a rule-making proceeding, is a useful starting point.3 While heretofore reliability was treated as desirable, and outages were reported to FERC and analyzed by NERC, the requirements on the industry were flexible. The Final Rule detailed actionable security processes for infrastructure protection that recognize both the realities of computer technology and the tendency of regulated entities to cut corners.
Regulators attempt to force a desired level of performance, while regulated entities deploy armies of lawyers to thwart them by bringing suit against the regulator. Regulatory actions, whether originating in independent regulatory agencies chartered by the U.S. Congress or by agencies established within the executive branch, under the separation of powers in the U.S. government, are subject to review by the federal judiciary. The judicial system and its due-process requirements are thus the final arbiter of regulations. The traditional paths to circumvent regulation are to claim the need to exercise reasonable business judgment, maintain that a higher level of risk than provided for in the regulation is adequate, and challenge the technical feasibility of the regulation.
The FERC order is firm in blocking such arguments. With regard to business judgment, the Commission noted in the Critical Infrastructure Protection Notice of Proposed Rulemaking (CIP NOPR) that “Cybersecurity standards are essential to protecting the Bulk-Power System against attacks by terrorists and others seeking to damage the grid. Because of the interconnected nature of the grid, an attack on one system can affect the entire grid. It is therefore unreasonable to allow each user, owner or operator to determine compliance with the CIP Reliability Standards based on its own ‘business interests.’ Business convenience cannot excuse noncompliance with mandatory Reliability Standards.”
Regarding the second tactic of evasion—operator willingness to accept risk—the Final Rule said: “The Commission continues to view the term ‘acceptance of risk’ as representing an uncontrolled exception from compliance that creates unnecessary uncertainty about the existence of potential vulnerabilities. Responsible entities should not be able to opt out of compliance with mandatory Reliability Standards. The Commission, therefore, directs the ERO [Electric Reliability Organization] to remove acceptance of risk language from the CIP Reliability Standards.”
Finally, regarding technical feasibility, the Final Rule said: “The Commission adopts the CIP NOPR proposal and directs the ERO to develop a set of conditions or criteria that a responsible entity must follow when relying on the technical feasibility exception contained in specific Requirements of the CIP Reliability Standards… We note that the Commission did not propose to eliminate references to technical feasibility from the CIP Reliability Standards, only that the term be interpreted narrowly and without reference to considerations of business judgment.”
The Congress attempted to extend the proceeding as far beyond the electric-power system as possible, but the Commission drew the line at its defined authority, saying: “The Commission is sensitive to the concerns raised by the Congressional Representatives regarding the severe impact that a cyberattack on assets not critical to the Bulk-Power System could still have on the public. The Commission, however, believes that its authority under section 215 of the FPA [Federal Power Act] does not extend to other infrastructure. Section 215 of the FPA authorizes the Commission to approve Reliability Standards that ‘provide for the reliable operation of the bulk-power system,’ defined by the statute as the facilities and control systems necessary for operation of an interconnected electric energy transmission network and the electric energy needed to maintain transmission system reliability. In addition, section 215(a)(1) specifically excludes from the definition of Bulk-Power System ‘facilities used in the local distribution of electric energy.'”
The most significant change in behavior attempted by FERC involved the matter of trust, saying: “The Commission proposed in the CIP NOPR to direct the ERO to modify Reliability Standard CIP-003-1 to provide direction on the issues and concerns that a mutual distrust posture must address to protect a control system from the ‘outside world.’ The Commission noted that interconnected control-system networks are susceptible to infiltration by a cyber intruder and that responsible entities should protect themselves from whatever is outside their control systems… The Commission noted that a mutual distrust posture requires each responsible entity that has identified critical cyber assets to protect itself and not trust any communication crossing an electronic security perimeter, regardless of where that communication originates… Mutual distrust does not imply refusal to communicate; it means the exercise of appropriate skepticism when communicating. The Commission believes additional guidance on what this means specifically in current practice would help responsible entities to avoid these misunderstandings… The Commission therefore directs the ERO to provide guidance, regarding the issues and concerns that a mutual distrust posture must address in order to protect a responsible entity’s control system from the outside world.”
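A minimal sketch suggests what a mutual-distrust posture might mean in practice at an electronic security perimeter; the names, checks, and command list here are illustrative assumptions, not anything FERC or NERC prescribes. The point is that every inbound message is verified on its own merits, regardless of origin:

```python
# Illustrative perimeter gateway for a control system: every inbound
# command is checked on its own merits, regardless of origin. The names,
# checks, and command list are assumptions for illustration; the
# regulators specify goals, not code.

TRUSTED_ISSUERS = {"control-center-a", "control-center-b"}   # explicit allow list
ALLOWED_COMMANDS = {"read_telemetry", "set_breaker", "shed_load"}

def admit_across_perimeter(issuer: str, command: str, signature_ok: bool) -> bool:
    """Mutual distrust: no message is trusted merely for where it
    originates; each must pass every check individually."""
    return (
        issuer in TRUSTED_ISSUERS        # known counterparty, not just "inside"
        and signature_ok                 # per-message cryptographic authentication
        and command in ALLOWED_COMMANDS  # only whitelisted operations cross
    )

# Even a long-standing partner fails if a message is unauthenticated:
print(admit_across_perimeter("control-center-a", "set_breaker", False))  # False
```

This is distrust without refusal to communicate; the gateway still talks to everyone but extends trust one authenticated message at a time.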
Such injunctions amount to saying that, from here on, regulated entities must take seriously cyber and other attack threats to reliability and not ignore them when inconvenient. While it is still too soon to know how effective this new approach to infrastructure cybersecurity will be, one conclusion is that, even in a strongly deregulatory environment, regulatory bodies can provide legal handles on cybersecurity for regulated entities, handles otherwise lacking in most other parts of the cyber commons.
A last-resort approach by a regulated entity seeking to minimize the effect of regulation is to narrow its domain of applicability, excluding from the FERC order as many generation, transmission, and distribution assets as it can by declaring them non-critical. This is, of necessity, a continuing area of contention, as new technology is adopted and new energy needs are identified.
A recent study by the Center for Strategic and International Studies also considered whether effective cyber defense can be provided by current methods or whether fundamentally different approaches must be explored.12 Sponsored by the House Homeland Security Subcommittee on Emerging Threats, Cyber Security and Science and Technology, it made two proposals—regulation and identity management—that have long been sidestepped or rejected by most groups dealing with the problem. It said: “We believe cyberspace cannot be secured without regulation.” Of its 25 recommendations, six related to actions that should be required of infrastructures overseen by regulatory agencies or the authentication practices required of critical infrastructures, including: allowing consumers to use government-issued identity credentials; requiring all businesses to adopt a risk-based approach to credentialing; and encouraging risk-based processes over specific prescriptions.
The proposal concerning regulation of digital identities would eliminate user anonymity in order to facilitate accountability for actions in the cyber commons. This is no different from identifying taxpayers or displaying a license plate on a vehicle. However, the downside could be elimination of the use of the net for political protest, an otherwise important benefit. This could be addressed by providing for unlicensed users, not unlike how unlicensed electromagnetic spectrum is allocated, with the understanding that no liability would be incurred by, and no accountability expected of, such users. Acceptance of communications from unlicensed users would be at the receiver’s risk.
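As a thought experiment, a receiver-side policy for such a scheme might look like the following; the credential flag and dispositions are illustrative assumptions, since no such licensing scheme exists today:

```python
# Receiver-side disposition of licensed vs. unlicensed traffic; the
# credential flag and policy below are illustrative assumptions.
from enum import Enum

class Disposition(Enum):
    ACCEPT = "accept"      # credentialed sender: accountability attaches
    SANDBOX = "sandbox"    # unlicensed sender: accepted at receiver's risk
    REFUSE = "refuse"      # receiver opts out of unlicensed traffic

def classify(has_valid_credential: bool, receiver_accepts_unlicensed: bool) -> Disposition:
    if has_valid_credential:
        return Disposition.ACCEPT
    return Disposition.SANDBOX if receiver_accepts_unlicensed else Disposition.REFUSE

# An anonymous political protester remains reachable by willing receivers:
print(classify(False, True))    # Disposition.SANDBOX
print(classify(False, False))   # Disposition.REFUSE
```

The design choice mirrors unlicensed spectrum: anonymity is preserved, but the cost of accepting it shifts explicitly to the receiver.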
Regulation is necessary for protecting important parts of the cyber commons and is an essential tool for protectors. But one must recognize that the entities so regulated will accept it only after avoiding it through every legal and political channel available to them.
National strategies. Another necessary government role in protecting the commons goes beyond protection of a government’s own internal users and computers: a national leadership role enabling and coordinating private actions. Governments also play an implementation role in proposing legislation, enforcing mandates, and protecting users of the commons too small or weak to function effectively on their own behalf.10
While the U.S. government relies on public-private partnerships to achieve many of its goals, the degree to which network security is worsening suggests the need for new mechanisms. Since commercial organizations see computer security as a cost and do not value the corresponding benefit, private efforts have to date been insufficient. Both sides of the partnership are failing to stem the tide of abuse of the commons.7
Efforts by President Barack Obama and his Administration suggest this posture may be changing. In 2009 remarks, Melissa Hathaway, then acting senior director for cyberspace at the National Security Council, representing the National Security and Homeland Security Councils, said, “The Federal government cannot entirely delegate or abrogate its role in securing the nation from a cyber incident or accident. The Federal government has the responsibility to protect and defend the country, and all levels of government have the responsibility to ensure the safety and well-being of citizens.”6
Though government leadership is necessary for protecting the nation from cyber abuse, it is indirect, with much distance between government-strategy documents and demonstrable security.
International mechanisms. Cyber abusers and their victims can be in different sovereign jurisdictions. Actions against violators are supported by common standards of unacceptable behavior. Rationalizing laws globally makes sense but is time consuming and ultimately limited by the speed with which each country adapts to new technical, economic, and political circumstances.
For international agreement to be effective, implementing mechanisms are needed for accommodating changes suggested by evolving needs; monitoring compliance by the signatories to maintain their trust and confidence; enforcing the agreement should signatories depart from agreed-upon norms; resolving disputes among the signatories; addressing technical issues of definitions, standards, and forensic collection; and rendering assistance to signatories in responding to technical challenges expeditiously. However, this process is also slow, as diverse signatories must be convinced they need to take action.
While many protective steps can be taken without formal agreement, if global changes in security are to be achieved, a larger international framework will be necessary for facilitating cooperation among signatories; drawing from common international contexts, Sofaer and Goodman13 discussed elements of such a framework.
As with the previous three dimensions of a framework for cybersecurity, international organizations have a role to play but, like regulation and government strategy, find it difficult to respond to the needs posed by a dynamic technology environment and aggressive and quick learners among those who would abuse the commons.
Technology to limit abuse. The view of many is that today’s lack of security of the commons and its information is no more than a bump on the road of technical progress, fixable by layering on more and better technology. Using technology to fix technology is questionable as a response to a problem with roots deep in the growing complexity of the worldwide network.
Were technology to change more slowly, such an approach might have a chance of success. Problems arise when unexpected coupling between parts of large computer-based networks of logical processes produces behavior that, while following precisely from their programmed logic, cannot be completely anticipated. Large networked systems have so many internal states they can never all be exhaustively tested; a system with just 300 independent binary state variables already has 2^300 (more than 10^90) possible states, and proving the security of such systems appears unlikely.
Technology creates new power through enhanced performance in terms of size, speed, bandwidth, capacity, connectivity, and functionality, but, even as it “fixes” old problems and improves functionality, the technology creates new problems, embedding them deeply within unverifiable systems. The matter is one of relative rates of change. If problems are fixed more quickly than new problems are created, one can imagine achieving a stable balance. But when new technology introduces new problems more quickly than it fixes old ones, the resulting divergent situation defies control.
Malevolence threatening the cyber commons introduces a new rate-of-change parameter. Attackers quickly reverse-engineer security alerts and patches to exploit related flaws before defenders can eliminate them. The defender fix-install rate must be faster than the attacker reverse-engineering rate.
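A toy numerical model makes the point; the exponential install-time assumption and the rates used here are chosen purely for illustration, not drawn from empirical data:

```python
# Toy race model (illustrative rates, not empirical data): a patch is
# published at t=0; attackers field an exploit after R days, while the
# defender population installs the patch with mean install time D days
# (exponentially distributed). The exposed fraction is the share of
# machines still unpatched when the exploit appears: exp(-R/D).
import math

def exposed_fraction(reverse_engineer_days: float, mean_install_days: float) -> float:
    """Fraction of hosts still unpatched when the exploit goes live."""
    return math.exp(-reverse_engineer_days / mean_install_days)

for R, D in [(7, 30), (7, 7), (7, 2)]:
    print(f"exploit at day {R}, mean install time {D} days -> "
          f"{exposed_fraction(R, D):.0%} of hosts still exposed")
```

Under these assumptions, a seven-day exploit window leaves 79% of hosts exposed when installation takes a month on average, but only 3% when it takes two days; only when defenders install much faster than attackers reverse-engineer does exposure shrink toward zero.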
Cloud computing is a current example of technological exuberance. Users are encouraged to move their information and applications from machines under their direct inspection and potential control and which could conceivably become adequately secure into a “cloud” of networked computers of unknown ownership, location, management, and security. Should users enquire of the cloud’s gatekeepers about such matters, they are told to “trust us,” though one can hardly refrain from asking, “But why should I?”
Technology is an enabler for the first three necessary components of protection of the commons but like the others is insufficient. It is both part of the problem and part of the solution. Most important, behavioral adjustments by users of the commons are also needed to break the cycle of self-destructive technology:
Connections. Users should revisit the premise that any two devices are better connected than unconnected;
Conceptual errors. Managers should recognize that entrusting the fixing of flaws to the people who created them has natural limits, and that, perhaps, the security problem is not a matter of minor execution errors but of major conceptual errors;
Any computer. Decision makers should recognize that any computer can be penetrated, just as any building can be entered and any object can be stolen; and
Distrust as default. All users are well advised to replace trust with distrust as the default condition in all computer-mediated interactions (see the sketch following this list).
These should not necessarily deter technical innovation but call for adjustment in the expectations of managers and users of the technologies they adopt.
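A minimal sketch of the last adjustment, assuming a hypothetical allow list and a stand-in flag for cryptographic verification, shows what default distrust looks like in practice: anything that does not affirmatively earn trust is rejected.

```python
# Minimal sketch of "distrust as default": a handler that rejects any
# message not explicitly authenticated and authorized. The allow list
# and the signature flag are hypothetical stand-ins for real mechanisms.
from dataclasses import dataclass

ALLOWED_SENDERS = {"billing.example.org", "ops.example.org"}

@dataclass
class Message:
    sender: str
    payload: bytes
    signature_valid: bool   # stand-in for real cryptographic verification

def accept(msg: Message) -> bool:
    """Default-deny: a message is trusted only if every check passes."""
    if msg.sender not in ALLOWED_SENDERS:
        return False        # unknown peers are distrusted by default
    if not msg.signature_valid:
        return False        # unauthenticated traffic is distrusted
    return True             # trust is the exception, earned per message

print(accept(Message("stranger.example.com", b"hello", True)))   # False
print(accept(Message("ops.example.org", b"status", True)))       # True
```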
Bottom-Up Perspective
Voluntary, legal, user-controlled self-defense efforts are also necessary but are inherently on a smaller scale than their governmental counterparts. They are most easily accomplished when user organizations are large enough and smart enough to identify and implement cost-effective protection. They help establish a market for protection technologies and educate a new generation of security professionals who understand options and risks that often remain classified or proprietary and are difficult to share widely.
Voluntary self-defense raises the question: Who does the volunteering and the defending? The answer depends on the technical knowledge available to users and the resources they can devote to something that is not their professional focus. The emerging popularity of informal social networks points to an alternative to top-down processes.
Voluntary user-oriented mechanisms (such as the Internet Engineering Task Force, or IETF) have served the Internet well, developing protocols to provide greater security and fostering next-generation networks.9 Computer emergency response teams (CERTs), industry information-sharing-and-analysis centers (ISACs), informal regional system-administrator groups, software vendors, and the Forum of Incident Response and Security Teams (FIRST) all help but have difficulty staying ahead of aggressive attackers.
How can voluntary defense establish a trust mechanism? The seeds of today’s Internet security problems were planted when the ARPANET began to grow beyond its first small circle of researchers more than 40 years ago.8 Early generations of network users were homogeneous, scientifically oriented, cooperative, dedicated to developing network technology and its applications, and had no reason to distrust or harm one another. With net growth have come many more users with no knowledge of one another and with divergent agendas. Distrust should replace trust, but the means of practicing distrust are poorly served by network technology created to support trusted users.
The National Strategy to Secure Cyberspace published in 2003 relied on the 1997 PCCIP principles: voluntary action, public-private partnerships, public awareness, international cooperation, and the central importance of critical infrastructure.14 It viewed cyberattacks as crimes for which, through due process, perpetrators would be identified, prosecuted, and punished. Vulnerabilities were to be reduced through an unending search for flaws and their elimination through decisions by vendors, service companies, and computer owners and operators. It presumed software flaws could be reduced over time to acceptable levels. The defensive concept was to distribute response capabilities to user organizations acting on their own behalf and in their own best interests.
The security problems experienced today are significantly greater than when PCCIP issued its recommendations. The fixes are not working.7 There is heavy reliance on government and foot-dragging over what organizations will be forced to do. Another factor is the deep-seated view that security goals cannot be achieved without significant federal R&D funding. While time has been devoted to negotiating treaties related to cybercrime, nations use the delay to strengthen their cyber-system penetration capabilities for intelligence collection and to develop the means for conducting cyberwar, aka “information operations.”
Law-enforcement paradigms do not address rapidly evolving threats well and fail under emergency circumstances. The prospect of zero-day attacks, enabled by quickly evolving viruses and an aggressive malware industry, is especially relevant. Changes in the nature of zero-day threats, the uncountable vulnerabilities of systems, and the motivations of cyberattackers require warning systems that detect attacks with enough time to initiate protection responses. Protection must be managed in near-real time so at least some attackers are thwarted. Moreover, real-time warning and response must operate on a global rather than a local basis.
One possible way of doing this exploits the nature of self-organizing social networks, starting with the proposition that users have a role in leading efforts for their own protection, not simply accepting what others choose to do, or not do, on their behalf.
Social networks have two characteristics that mimic the development of early networks: they respond directly as participants perceive value, growing in directions and at rates determined by that value; and they carry little overhead cost, typically riding on the Internet, where users pay for access and participating Web sites may be supported through advertising income. Some central management is needed to maintain the integrity of the social network. Illustrative of the informal yet resilient nature of such networks are Facebook rules to protect privacy, open source software, user-created wikis, and apps purchased from developers through commercial sites.
Commons Protection Union
Proposed here is what might be called a Commons Protection Union (CPU) or, perhaps, cyber “neighborhood watch,” to recognize attacks in real time and provide information to users or their service-provider proxies, enabling them to disconnect from parts of the commons to contain a “disturbance” until it can be analyzed for its origin and characteristics and systems restored to full connectivity. Since cybersecurity problems derive from connectivity, managing connectivity is likely part of the solution.
Such a function could operate more responsively than responses paced by the rate of adopting intergovernmental agreements and the implementation speed of national response agencies. A flexible, voluntary approach is required, free of contested mandates. Because the arrangement would be open and voluntary, governments could participate to whatever degree they choose, increasing its effectiveness. Real-time event information from users, from private security companies choosing to participate, and from such public information as governments choose to contribute could enable distributed examination of malware and attacks and provide information to participants for quick analysis.
The arrangement would make attack and ongoing probe information available for the common good, the essence of a commons. On the basis of such real-time information, participating users could take such defensive actions as they choose; for example, they could reduce load, route around congestion, disconnect from parts of the net, collect and preserve forensic information, and increase their hardness level, depending on their assessment of the real-time threat level and the criticality of their operations.
Carriers and Internet service providers do some of this. The new elements would be voluntary sharing, global real-time data provided to users or their proxies, and trusted third parties as consolidators. The high-level nature of the traffic monitoring can be designed to yield statistical measures for automated diagnostics and decision making while respecting the privacy constraints placed on the information by its contributors. Global traffic monitoring would include parameters to assess flow pathologies and detect anomalous patterns. What is proposed is not unlike a missile-launch-detection-and-tracking system but in which the defensive components are distributed and under user control.
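A minimal sketch of the kind of automated diagnostic this implies, assuming hypothetical per-minute flow counts as the shared “externals” and a simple rolling baseline (a real deployment would use far richer statistics):

```python
# Sketch of anomaly detection over shared traffic externals (flow counts
# only, no payloads). The window, threshold, and sample data are
# illustrative assumptions.
from collections import deque
import statistics

WINDOW = 60        # recent intervals retained for the baseline
THRESHOLD = 4.0    # z-score above which an alert is raised

history = deque(maxlen=WINDOW)

def observe(flows_per_minute: float) -> bool:
    """Return True if this interval looks anomalous against the baseline."""
    anomalous = False
    if len(history) >= 10:                    # require a minimal baseline
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0
        anomalous = (flows_per_minute - mean) / stdev > THRESHOLD  # surges only
    history.append(flows_per_minute)
    return anomalous

# A steady ~1,000 flows/min baseline, then a sudden 20x surge:
for sample in [1000, 980, 1020, 1010, 990, 1005, 995, 1015, 985, 1000, 20000]:
    if observe(sample):
        print(f"ALERT: anomalous traffic level, {sample} flows/min")
```

Because only aggregate counts cross organizational boundaries, contributors retain control of payloads and identities while the commons still gains a shared early-warning signal.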
How might such an addition to the computer- and network-security environment be brought about? The same way many activities on the Internet begin: someone creates something of value, and it spreads without prodding. Such an approach can potentially spread at the Internet speed of social networks rather than at government speed. As outlined in Figure 2, the upper-left oval represents the Internet, with legitimate users dealing with other legitimate users; malicious users inject themselves into it, masquerading as legitimate users. The CPU receives the externals of such traffic, as authorized by its voluntary participants; these data streams are analyzed, through the voluntary actions of those in the CPU social network, for anomalies that can indicate a cyberattack or preparation for one. Members of the CPU network send statistical information or alerts of varying degrees of urgency to contributors, who are then able to initiate defensive responses, depending on the nature of the information to be protected and the criticality of their operations.
The process is characterized by various operational and business models, several supported by distributed agents. Consolidation-and-analysis centers (CACs) would receive traffic externals from user sources, including infrastructure operators and other organized entities. They would also receive hierarchically processed flows (such as from EROs) for parts of the power infrastructure, nodes in upper levels of communication systems, feeds from CERTs and network-security companies, and, most important, private and small-business users. Governments are likely to have their own systems for their own needs but could participate with filtered flows should they choose. The CACs could provide near-real-time alerts and network-status reports to users, with lengthier analyses following as more data is analyzed.
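The alert itself could be a small structured message; the field names and urgency levels below are illustrative assumptions, not a defined standard:

```python
# Sketch of an alert a CAC might push to subscribers; fields and urgency
# levels are illustrative assumptions, not a defined standard.
from dataclasses import dataclass, field
from enum import Enum
import json
import time

class Urgency(Enum):
    ADVISORY = 1    # background anomaly; no action expected
    ELEVATED = 2    # consider hardening and collecting forensics
    CRITICAL = 3    # consider disconnecting affected segments

@dataclass
class CacAlert:
    source_cac: str                  # which consolidation center issued it
    urgency: Urgency
    summary: str                     # human-readable description
    indicators: list = field(default_factory=list)   # e.g., suspect CIDR blocks
    issued_at: float = field(default_factory=time.time)

    def to_json(self) -> str:
        return json.dumps({"source_cac": self.source_cac,
                           "urgency": self.urgency.name,
                           "summary": self.summary,
                           "indicators": self.indicators,
                           "issued_at": self.issued_at})

alert = CacAlert("cac-east.example.net", Urgency.ELEVATED,
                 "Coordinated scan surge against port 502 (Modbus)",
                 ["198.51.100.0/24"])
print(alert.to_json())   # each subscriber maps urgency to its own response
```

Crucially, the alert prescribes nothing; each recipient maps urgency to its own graded response, from collecting forensics to disconnecting, keeping defensive components distributed and under user control.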
CACs might be organized as a not-for-profit corporation supported by user consortia consisting of network-affinity groups, possibly as a subscription service with various levels of timeliness and depth of analysis. Amateurs perform similar services elsewhere, including ham-radio operators in emergencies, astronomers searching for asteroids, and gamers exploring approaches to protein folding. A CAC could be a research operation studying network dynamics while also providing a real-time product, an objective that would also provide useful guidance for research. Output data could be used as a basis for for-profit value-added services. There is even a civil-defense aspect governments might support.
The basic governance principle, as with the IETF, would be openness, rough consensus, and running code to be improved collectively over time.
Following any of the paths outlined here, a social-network-based CPU will develop in directions its users feel provide value. Existing social networks (such as Facebook, Twitter, blogs, and wikis) could provide marketing and distribution channels.
Further issues will also have to be addressed, as with any user-controlled network. Participants will have to weigh privacy against the degree to which the network demonstrably improves their protection. The CPU’s own protection is necessary to prevent its being manipulated by the abusers whose activities it seeks to mitigate. A CPU could also give network abusers feedback on the effectiveness of their attacks, though attackers already know the responses being taken by software providers and security vendors.
The voluntary technical contributions needed for its operation will have to be forthcoming from the participant community. The degree to which a CPU competes against the security products of its commercial participants will have to be balanced against the benefits they would receive.
It may be that the most capable and dedicated security innovators are found in the same research community that formed the basis for the ARPANET. Such an experiment would be worth trying.
Acknowledgments
I benefitted greatly from my discussions on improving cybersecurity with Seymour E. Goodman and Anthony M. Rutkowski. This study is based on a grant from Science Applications International Corporation to The Center for International Security, Technology, and Policy at the Georgia Institute of Technology.