BLOG@CACM
Computing Profession

On the Ethics of Cyberwar

Robin K. Hill, University of Wyoming

Ethics applied to warfare has always faced the issue that although some recoil at any formulation of what is just in war, most agree on what is unjust in war. International Humanitarian Law, or IHL [ICRC], though it obviously faces enforcement challenges, carries moral weight around the world. Commentators in this publication have addressed these most troubling of technical issues; see Wendell Wallach's recent Viewpoint on lethal autonomous weapons [Wallach]. The Ethics + Emerging Sciences Group at the California Polytechnic State University (Cal Poly) has taken on some of these questions, along with other aspects of new moral dilemmas.

Keith Abney, Senior Fellow of the Ethics + Emerging Sciences Group at Cal Poly, offers an overview of their work in this area.


Keith Abney on the Ethics of Cyberwar

Our team at Cal Poly, in conjunction with scholars from Western Michigan and the Naval Postgraduate School, is finishing up an NSF grant studying the ethics of cyberconflict. Below are a few of the major questions that we tried to formulate; think of it as setting up a FAQ on the threat of cyberwar.

The most basic questions about cyberwar begin with the definitional: If we define ‘cyber’, following Oxford, as “relating to or characteristic of the culture of computers, information technology, and virtual reality”, and note its common use as a prefix, then after land, sea, air, and space, is cyber really the fifth domain of war—is it really different, truly novel [Economist]? While cyberattacks—the use of computers and IT to attack other computers and connected systems, e.g., the Stuxnet virus—clearly exist, when do such attacks rise to the level of war? This is termed the “red line” question.

For policymakers, however we answer the “red line” question, it seems clear that discussions of policy and possible regulation will begin with extant international law. Must IHL be completely redone to accommodate cyberattacks, or can the existing framework be extended to deal with them? If brand-new IHL is needed, what would be examples of possible new cybernorms? The Tallinn Manual gives some idea of what may be in store, but Tallinn does not have the force of law. For example, the laws of war (as summarized in the LOAC blog) insist on a principle of distinction (or discrimination): legitimate attacks must attempt to distinguish combatants from noncombatants, and only intend to attack the former. In a just war, ‘collateral damage’ is supposed to be unintentional, and must also obey a principle of proportionality. So, if cyberwar can exist, does that mean that private engineers or IT employees become legal targets of attack, or liable to harm?

At the Ethics + Emerging Sciences Group, we have examined the ethical issues that cyberattacks raise for private companies (see http://ethics.calpoly.edu/hackingback.htm), especially the issue of impunity in the absence of settled international law for dealing with such attacks. Given this “Wild West” situation for private enterprise, could it be permissible for private enterprises to “hack back” at an adversary? Under what conditions? Do we need an international treaty, or a change to IHL, to determine this?

The overlap between cyberconflict and other types of conflict that involve IT (especially robots) also deserves scrutiny. We have previously released a major report on the ethics of autonomous weapons systems [Lin], an issue that has spawned a vast and rapidly growing literature over the past decade. One focus of that debate has been the requirement for meaningful human control, as groups like the Campaign to Stop Killer Robots have insisted on the prohibition of lethal robots not directly teleoperated by humans (i.e., robots with no “man in the loop”). But no similar concerns have arisen over cyberconflict. Why? If meaningful human control is a big deal in the killer-robot debate, why should it not also be a serious concern in the ethics of cyberattacks? Does the fact of a robot body make a crucial moral difference in demanding human control?

Of course, the issue of meaningful human control makes sense only if one can determine who is in control of the attack. That brings us to a vexed issue in cyberconflict—attribution, the identification of the attacker. If attribution is a morally crucial requirement for cyberwar, must we require (as a matter of policy) a technological fix that would enable reliable attribution? If that involves universal backdoors, does that mean attribution and privacy are necessarily at odds? Our fellow researcher on the NSF grant, Neil Rowe of the NPS [Rowe], has already suggested adding digital signatures to cyberattacks to enforce attribution, much as soldiers are expected to wear uniforms. Would that constitute a complete solution to this issue? If not, what more is needed? And even if it would solve the problem, is it actually feasible to implement?
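To make the mechanics of Rowe's suggestion concrete, here is a minimal sketch in Python, using the cryptography package; the payload and the key-registry premise are hypothetical illustrations, not a description of Rowe's actual proposal, and nothing here addresses the harder policy question of compelling attackers to sign at all.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical premise: each state actor registers a public key with
# some international body, much as soldiers are required to wear uniforms.
attacker_key = Ed25519PrivateKey.generate()
registered_public_key = attacker_key.public_key()

payload = b"hypothetical cyberattack payload"

# The attacker signs the payload, binding it to the registered identity.
signature = attacker_key.sign(payload)

# A defender or monitor holding the registered public key can check
# whether the attack really came from the claimed party.
try:
    registered_public_key.verify(signature, payload)
    print("Attribution verified: payload was signed by the registered key.")
except InvalidSignature:
    print("Attribution failed: signature does not match.")
```

Even granting the mechanism, the sketch makes the open questions visible: verification presupposes a trusted key registry, and an attacker who simply omits the signature regains anonymity, which is exactly where the policy debate over enforcement and backdoors resumes.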

Another definitional issue is classifying the various kinds of cyberattacks. A common approach is termed CIA: all cyberattacks attempt to undermine a target’s confidentiality, integrity, or availability. But what set of conditions could ethically support a declaration of war in response to a cyberattack, and so constitute a casus belli? The CIA question has immediate relevance: Would the “red line” that distinguishes a cyberwar from a mere cyberattack depend materially on the type of attack? That is, would the red line be the same for confidentiality attacks as for those compromising integrity or availability? Are there special issues involved in hacking robots or drones, particularly lethal autonomous weapons systems (LAWS), in order to turn cyberattacks into kinetic attacks? How would that change a red-line analysis of when cyberattacks turn into cyberwar?
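As a toy illustration of the CIA taxonomy (the triad labels are standard; the example incidents and names below are ours), one might tag attacks as follows:

```python
from enum import Enum

class CIATarget(Enum):
    """The security property a cyberattack undermines (the CIA triad)."""
    CONFIDENTIALITY = "confidentiality"  # e.g., espionage, data theft
    INTEGRITY = "integrity"              # e.g., Stuxnet-style sabotage
    AVAILABILITY = "availability"        # e.g., denial of service

# Illustrative incidents mapped onto the triad; a red-line analysis
# could then ask whether each category can ever justify war.
incidents = {
    "exfiltration of classified files": CIATarget.CONFIDENTIALITY,
    "corruption of industrial controllers": CIATarget.INTEGRITY,
    "botnet-driven denial of service": CIATarget.AVAILABILITY,
}

for name, target in incidents.items():
    print(f"{name}: undermines {target.value}")
```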

Will the rise of the ‘Internet of Things’ and/or bodily implants (cyborgization) make botnets even more pervasive and harder to stop? Will there be issues in stopping botnets that include devices on which our activities of daily living depend (refrigerators, freezers, thermostats, medical devices such as pacemakers) that do not arise in disabling a laptop? Could a just war ever involve hacking a civilian’s home, or a private civilian business, in such a way as to adversely affect activities of daily living? Will individuals in their homes, as opposed to states or non-state groups, be the most important concern for future cyberattacks? Will the same strategies and responses be appropriate for each, or will we need radically different tactics that depend not only on the nature of the attack, but also on the nature of the attacker?

New forms of cybertechnologies bring new ethical questions, but new problems will also arise when extant technologies are used in novel environments. So, what special concerns do cyberattacks in space raise? Do they further problematize the acceptance and use of spaceborne dual-use technologies, which could be used to launch cyberattacks that become space-originated kinetic attacks?

Back on Earth, the rise of social media and virtual identities raises novel concerns. Will personal and reputational cyberattacks become easier or harder to defend against and repair? Could they ever rise to the level of an act of war? What status will “virtual personas” have? Will hacking a virtual person ever be a crime analogous to an attack on a physically embodied person? And what new issues of liability, negligence, and the like arise from cyberattacks? Do they require further restrictions on freedom of speech, and a new understanding of libel and defamation? Relatedly, what new limitations or guarantees of privacy will cyberattacks require? Is pervasive government surveillance acceptable? Would it be if accompanied by substantial sousveillance, watching from below via miniaturized digital technologies [Bollier]? Should online anonymity be allowed?

Cyberconflict may also cause our entire understanding of the appropriate public-private divide to change, challenging traditional economic and social models. Perhaps cybersecurity problems are examples of “market failures” that require state solutions (with possible implications for war between states as a result); perhaps, on the other hand, the free market will produce the solution. And, given US dominance in Internet architecture and operation, should international treaties reinforce that dominance (like the UN Security Council’s permanent veto)? Or should we move to a more globally representative system?

Finally, could artificial general intelligence (AGI) learn the wrong lessons, with potentially calamitous consequences, like creating a super-villain? Just imagine Microsoft’s Tay [Perez] in control of the world! Even if cyberattacks do not pose a moral and even existential threat to humans via AGI, could accepting them debase our character and culture in immoral, unacceptable ways? Or does that question invite the sort of Luddite argument that goes nowhere?

These questions and many more need answers, as we launch into a brave new world in which our mutual connections and interdependence leave us vulnerable in ways unforeseen by previous legal scholars and policymakers. We will need to know which battles are worth fighting, and how we should fight them, in this new realm of cyberspace.


Thank you, Mr. Abney, for sharing your thoughts with us and also for helping to host the 2017 conference of the International Association for Computing and Philosophy.

Some issues of warfare cut across both the cyber and conventional realms, such as determining when interference or espionage becomes a casus belli (act of war). Some issues appear closely analogous, such as the circumstances under which software engineers and technology staff would properly be termed combatants in war, compared to the similar question about munitions-factory workers. These questions still seek answers in the realm of conventional warfare, let alone the realm of cyberwarfare. They are joined by the questions described here, conscientiously pursued by the Ethics + Emerging Sciences Group.

References

[Bollier] Bollier, David. 2013. Sousveillance as a Response to Surveillance. 24 November 2013. Online at http://www.bollier.org/blog/sousveillance-response-surveillance.

[Economist] The Economist. 2010. War in the Fifth Domain. The Economist Group Limited. 1 July 2010.

[ICRC] Advisory Service on International Humanitarian Law, International Committee of the Red Cross. 2004. What is International Humanitarian Law?

[Lin] Lin, Patrick, Bekey, George, and Abney, Keith. 2008. Autonomous Military Robotics: Risk, Ethics, and Design. Report prepared for the US Department of the Navy, Office of Naval Research.

[Perez] Perez, Sarah. 2016. Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism [Updated]. TechCrunch.

[Rowe] Rowe, Neil. 2009. The Ethics of Cyberweapons in Warfare. Report from the Center for Information Security.

[Wallach] Wallach, Wendell. 2017. Toward a Ban on Lethal Autonomous Weapons: Surmounting the Obstacles. Communications of the ACM 60, 5 (May 2017). DOI: 10.1145/2998579.

Robin K. Hill is adjunct professor in the Department of Philosophy, and in the Wyoming Institute for Humanities Research, of the University of Wyoming. She has been a member of ACM since 1978.
