Letters to the Editor

ACM Moral Imperatives vs. Lethal Autonomous Weapons


Moshe Y. Vardi’s editor’s letter “On Lethal Autonomous Weapons” (Dec. 2015) said artificial intelligence is already found in a wide variety of military applications, the concept of “autonomy” is vague, and it is nearly impossible to determine the cause of lethal actions on the battlefield. It described as “fundamentally vague” Stephen Goose’s ethical line in his Point side of the Point/Counterpoint debate “The Case for Banning Killer Robots” in the same issue. I concur with Vardi that the question of a ban on such technology is important for the computing research community but think the answer to his philosophical logjam is readily available in the “ACM Code of Ethics and Professional Conduct” (http://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct), particularly its first two “moral imperatives”—“Contribute to society and human well-being” and “Avoid harm to others.” I encourage all ACM members to read or re-read them and consider whether they themselves should be working on lethal autonomous weapons, or indeed on any kind of weapon.

Ronald Arkin’s Counterpoint was optimistic regarding robots’ ability to “… exceed human moral performance …,” writing that a ban on autonomous weapons “… ignores the moral imperative to use technology to reduce the atrocities and mistakes that human warfighters make.” This analysis suffers from two main problems. First, Arkin tacitly assumed autonomous weapons would be used only by benevolent forces and that the “moral performance” of such weapons would be incorruptible by those deploying them. The falsity of these assumptions is itself a strong argument for banning such weapons in the first place. Second, the reasons he cited in favor of weaponized autonomous robots are equally valid for a simpler and more sensible proposal—autonomous safeguards on human-controlled weapons systems.

What Arkin did not say was why the world even needs weaponized robots that are autonomous. To answer that question, I suggest he first conduct a survey among the core stakeholder group he identified—civilian victims of war crimes—to find out what they think.

Bjarte M. Østvold, Oslo, Norway

Author’s Response:

The desire to eliminate war is an old one, but war is unlikely to disappear in the near future. “Just War” theory postulates that war, while terrible, is not always the worst option. As much as we may wish it, information technology will not get an exemption from military applications.

Moshe Y. Vardi, Editor-in-Chief

Technological Superiority Lowers the Barrier to Waging War

I am writing to express dismay at the argument by Ronald Arkin in his Counterpoint in the Point/Counterpoint section “The Case for Banning Killer Robots” (Dec. 2015) on the proposed ban on lethal autonomous weapons systems. Arkin’s piece was replete with high-minded moral concern for the “… status quo with respect to innocent civilian casualties …” [italics in original], for the depressing history of human behavior on the battlefield, and, of course, for “… our young men and women in the battlespace … placed into situations where no human has ever been designed to function.” There was an incongruity in Arkin’s position only imperfectly disguised by these sentiments. While deploring the “… regular commission of atrocities …” in warfare, Arkin offered nowhere in his Counterpoint (nor, to my knowledge, anywhere in his extensive writings) any corresponding statement deploring the actions of the U.S. President and his advisors, who, in 2003, through reliance on the technological superiority they commanded, placed U.S. armed forces in the situations that gave us, helter-skelter, the images of tens of thousands of innocent civilian casualties, many thousands of men and women combatants returning home mutilated or psychologically damaged, and the horrors of Abu Ghraib military prison.

Is it surprising, then, that an enemy subject to the “magic” of advanced weapons technology resorts to the brutal minimalist measures of asymmetric warfare, and that combatants who see their comrades maimed and killed by these means sometimes resort to the behavior Arkin deplores?

In the face of clear evidence that technological superiority lowers the barrier to waging war, Arkin proposed the technologist’s dream—weapons systems engineered with an ethical governor to “… outperform humans with respect to international humanitarian law (IHL) in warfare (that is, be more humane) …” Perfect! Lower the barrier to war even further, reducing consideration of harm and loss to one’s own armed forces—at the same time representing it as a gentleman’s war, waged at the highest ethical level.

Above all, I reject Arkin’s use of the word “humane” in this context. My old dictionary in two volumes [1] gives this definition:

Humane—”Having or showing the feelings befitting a man, esp. with respect to other human beings or to the lower animals; characterized by tenderness and compassion for the suffering or distressed.”

Those, like Arkin, who speak of “ethical governors” implemented in software, or of robots behaving more “humanely” than humans, are engaging in a form of semantic sleight of hand whose ultimate consequence is to debase the deep meaning of words and reduce human feeling, compassion, and judgment to nothing more than the result of a computation. Far from fulfilling, as Arkin wrote, “… our responsibility as scientists to look for effective ways to reduce man’s inhumanity to man through technology …,” this is a mockery and a betrayal of our humanity.

William M. Fleischman, Villanova, PA

Author’s Response:

While Fleischman questions my motive, I contend it rests solely on the right to life that civilians are losing in current battlefield situations. His jus ad bellum argument, that technological superiority lowers the threshold of warfare, is common and deserves to be addressed. That lowering holds for the development of any asymmetric warfare technology that provides a one-sided advantage—robotics is just one—as one might see in, say, cyberwarfare. Yes, it could encourage adventurism. The solution then is to stop all research into advanced military technology; if Fleischman can make this happen, I would admire him for it. But in the meantime we must protect civilians better than we do, and technology can, must, and should be applied toward this end.

Ronald C. Arkin, Atlanta, GA

Braces Considered Loopy

The “naked braces” discussion, beginning with A. Frank Ackerman’s letter to the editor “Ban ‘Naked’ Braces!” (Oct. 2015), perhaps misses the forest for the trees: a major reason for deeply nested expressions is the inability of most programming languages to handle arrays without looping. This shortcoming compounds itself in the verbose boilerplate such looping (and multi-conditional) constructs require.

Jamie Hale’s proposed solution in his letter to the editor “Hold the Braces and Simplify Your Code” (Jan. 2016)—“… small and minimally nested blocks …”—to the issue first raised by Ackerman pointed in a good direction but may remain lost in the forest of intrinsically scalar languages. Small blocks of code are good, but in most languages writing them merely yields a plethora of small blocks, pushing the complexity to a higher level without necessarily reducing it.

A more functional, array-based way of looking at problems can, however, reduce that apparent complexity by treating collections of objects en masse at a higher level.
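
To make the contrast concrete, here is the kind of scalar, one-at-a-time code the letter has in mind; this is a hypothetical illustration of mine (in Python, with invented breakpoints and labels), not code quoted from any of the letters:

    # Hypothetical scalar-style version: each transaction is classified
    # individually, with the nested conditional boilerplate the letter
    # criticizes (in brace languages, each branch adds another block).
    def classify(transaction):
        if transaction > 10_000:
            return f"large({transaction})"
        else:
            if transaction > 1_000:
                return f"medium({transaction})"
            else:
                return f"small({transaction})"

    results = [classify(t) for t in [120, 4_800, 75, 20_000]]
    print(results)  # ['small(120)', 'medium(4800)', 'small(75)', 'large(20000)']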

Given most programmers’ lack of familiarity with array-oriented programming, it is difficult for anyone, including me, to provide a widely comprehensible pseudocode example of what I mean, but consider the following attempt, based on the problem of invoking different code at transaction-size breakpoints (where “transactions” is a vector of transaction sizes):

[The letter’s first code figure (ins01.gif) is not reproduced here.]
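
As a stand-in for the missing figure, here is a rough sketch of the same idea in Python with NumPy; the sample data, breakpoint values, and handler names are my assumptions, and the original was written in an array notation (J, linked below) rather than Python:

    # Sketch: classify every transaction against a vector of breakpoints
    # at once, then dispatch each one to a handler chosen by the result.
    import numpy as np

    transactions = np.array([120, 4_800, 75, 20_000])  # assumed sample data
    breakpoints = np.array([1_000, 10_000])            # assumed cutoffs

    # One index per transaction (0 = small, 1 = medium, 2 = large); the
    # Boolean comparison and row sum replace nested if/else blocks.
    indices = (transactions[:, None] > breakpoints).sum(axis=1)

    handlers = [
        lambda t: f"small({t})",   # placeholder actions; the point is the
        lambda t: f"medium({t})",  # selection mechanism, not the bodies
        lambda t: f"large({t})",
    ]
    results = [handlers[i](t) for i, t in zip(indices, transactions)]
    print(results)  # ['small(120)', 'medium(4800)', 'small(75)', 'large(20000)']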

Ignoring the length discrepancy between the number of functions provided and the ostensible shape of the Boolean condition on which their selection is based, such a construct could easily be extended to additional breakpoints with something like this:

[The letter’s second code figure (ins02.gif) is not reproduced here.]
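
Continuing the same hypothetical sketch, the extension amounts to lengthening the two vectors in step; the dispatch logic itself is untouched:

    # Add a breakpoint and a matching handler; indices and dispatch
    # are computed exactly as before.
    breakpoints = np.array([1_000, 10_000, 100_000])
    handlers.append(lambda t: f"huge({t})")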

For anyone interested in wrestling with a specific example of an array-based functional notation guiding my thoughts on this example, see http://code.jsoftware.com/wiki/Vocabulary/atdot.

Devon McCormick, New York, NY

Back to Top

Programming By Any Other Name

Thomas Haigh’s and Mark Priestley’s Viewpoint “Where Code Comes From: Architectures of Automatic Control from Babbage to Algol” (Jan. 2016) focused on the words “code” and “programming” and how they came to be defined as they are today. It also mentioned other types of programming from the days before those words took their current meaning, without acknowledging that they were exactly the same in the minds of the scientists and “card jockeys” who diagrammed analog computers or charted the progress of a job on the “data processing” floor and wired the plugboards of the unit record equipment there. If no scholar has in fact published a looking-backward article on the plugboard wiring of those machines from the modern programmer’s perspective, someone should. If you have never wired a plugboard, I urge you to try it. Teach yourself to sense a pulse and make something useful happen, or debug a problem when someone dislodges a cable. Once you understand the machine, you will find you step immediately into programming mode, whereby the cable is the code, the plugboard the subroutine, and the floor the program. Drawing flow diagrams was, once upon a time, what programming was about, no matter what the target environment happened to be.

The only programmer I ever met who coded a significant production program on a UNIVAC SSII 80 (circa 1963) computer and saw it run successfully on its first shot was an old plugboard master. He flowcharted the program the way he had learned to flowchart a machine-room job. The “concept” of programming was nothing new to him.

Ben Schwartz, Byram Township, NJ

Decidability Does Not Presuppose Gödel-Completeness

Contrary to what Philip Wadler suggested in his otherwise interesting and informative article “Propositions as Types” (Dec. 2015, page 76, middle column, third paragraph), the algorithmic decidability of an axiomatically defined theory T (such as Set Theory, as in Hilbert’s concern) does not presuppose the negation- or “Gödel-” completeness (not to be confused with the semantic completeness) of T. First, negation-completeness does not imply algorithmic decidability without further ado; second, negation-incompleteness does not imply algorithmic undecidability. Respectively: the negation-completeness of T does imply the algorithmic decidability of T if the set of axioms of T is algorithmically decidable and T is consistent (recall Recursion Theory), and there are theories that are negation-incomplete but algorithmically decidable (such as temporal-logical theories) [2].
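
To spell out the two directions in standard notation (my restatement, under Kramer’s stated assumptions, not a quotation from the letter):

    % Assumptions: T has an algorithmically decidable set of axioms
    % and T is consistent.
    % (1) Negation-completeness implies decidability: enumerate all
    %     proofs of T; for any sentence \varphi, one of \varphi,
    %     \lnot\varphi eventually appears, and consistency guarantees
    %     that only one does, so the search decides \varphi.
    \[
      \bigl(\forall \varphi :\; T \vdash \varphi \ \lor\ T \vdash \lnot\varphi\bigr)
      \;\Longrightarrow\; T \text{ is algorithmically decidable}.
    \]
    % (2) The converse fails: a decidable T may leave some \varphi
    %     undecided, as with the temporal-logical theories cited in [2].
    \[
      T \text{ algorithmically decidable} \;\not\Longrightarrow\;
      \forall \varphi :\; T \vdash \varphi \ \lor\ T \vdash \lnot\varphi .
    \]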

Simon Kramer, Lausanne, Switzerland

Author’s Response:

Thank you to Simon Kramer for clarifying the relation between completeness and decidability. The word “presupposes” has two meanings: “require as a precondition of possibility or coherence” and “tacitly assume at the beginning of a line of argument or course of action that something is the case.” Kramer presupposes I mean the former, when in fact I mean the latter; my apologies for any confusion. The logics in question are consistent and have algorithmically decidable axioms and inference rules, so completeness indeed implies decidability.

Philip Wadler, Edinburgh, Scotland

References

    1. Emery, H.G. and Brewster, H.K., Eds. The New Century Dictionary of the English Language. D. Appleton-Century Company, New York, 1927.

    2. Kramer, S. Logic of negation-complete interactive proofs (formal theory of epistemic deciders). Electronic Notes in Theoretical Computer Science 300, 21 (Jan. 2014), 47–70, section 1.1.1.
