AI Researchers Call For Ban on Autonomous Weapons

Artificial intelligence and robotics researchers are asking the United Nations to support a ban on lethal autonomous weapons systems.

More than 1,500 artificial intelligence (AI) and robotics researchers, along with visionaries such as Elon Musk and Stephen Hawking, have signed an open letter urging the United Nations (UN) to support a ban on lethal autonomous weapons systems.

The letter follows the April 2015 meeting of the Convention on Certain Conventional Weapons (CCW), which was held at the UN’s Palais des Nations in Geneva.

The letter notes that, although it will be possible in the near future to develop and deploy weapons that operate autonomously, without meaningful human control, it is in the best interest of the scientific community, and of the global community at large, to head off an arms race in such weapons. The letter was submitted today (July 27) to "help move the UN talks along," according to Toby Walsh, an AI expert at NICTA (National ICT Australia), Australia’s largest ICT research organization.

"We hope it will spur the UN to get a ban in place," Walsh says. "There is such a ban with blinding lasers, and no arms company around the world as a result sells such technology. A similar result with lethal autonomous weapons would be great."

The language of the ban being sought is quite deliberate, according to Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the primary authors of the letter. Russell explains that the letter urges a ban on lethal autonomous weapons, but does not bind the signatories beyond that. "Theoretically it could be morally consistent to work on developing AI weapons (for example, to defend one’s country against a numerically superior enemy’s unprovoked AI attacks) while arguing for a ban," Russell says. "There is no active discussion of a treaty banning smart anti-missile missiles, for example, as they don’t kill anyone."

However, what actually constitutes an "autonomous" weapon is difficult to define, according to Walsh; indeed, that is why the letter deliberately leaves the parameters of AI and autonomy undefined. "By not being precise now, we leave open room for the bargaining that inevitably takes place at the last minute in any diplomatic negotiations," Walsh explains. "At the end of the day, the diplomats will take a very pragmatic view as to what is meant by autonomous."

"There are many shades of grey," Walsh adds. "It could be argued that autonomous weapons are already here. For instance, "fire and forget" sounds a lot like autonomy. This makes it a hard problem to define autonomy."

AI researchers indicate there are numerous ways in which AI can make battlefields safer for humans without simply being used as killing technology. They highlight technologies in use today, such as robotic mine clearing, and automatic machine translation that allows soldiers to communicate with civilians caught on the battlefield and direct them to safety.

AI algorithms can also do a more accurate job of target identification, with humans still making the final call on firing any weapon. That could help prevent incidents such as the USS Vincennes mistakenly shooting down Iran Air Flight 655 in 1988, according to Walsh.

Beyond moral or ethical issues, the biggest reason to call for such a ban is to prevent a future AI arms race. The open letter notes that "if any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," referring to the series of automatic rifles that have become ubiquitous on the battlefield.

Efforts to prevent such an arms race must begin now, according to Mark Gubrud, assistant professor in the Peace, War & Defense Curriculum at the University of North Carolina and a signatory of the letter. "It’s not just because the robots might make a mistake," Gubrud says. "We’re getting into an arms race that is going to lead us into some very dangerous places." If the development of AI weaponry goes unchecked, he warns, it could launch the world into a new Cold War, in which the threat of annihilation remains real.

Even if the UN ultimately passes a ban on autonomous weapons, there is always the threat that rogue groups such as ISIS, or states such as North Korea that might flout such a treaty, could acquire or create AI weapons. Gubrud, however, dismisses the threat posed by small actors, saying the primary danger lies in world superpowers acquiring stockpiles of AI weapons.

"We’ve got to get away from these James Bond cartoon fantasies where somebody develops a superweapon, and takes over the world with it," Gubrud says. "That’s not realistic. Autonomy is going to be a force multiplier of order unity, but it’s not going to be able to enable North Korea to defeat the United States militarily."

Keith Kirkpatrick is principal of 4K Research & Consulting, LLC, based in Lynbrook, NY.
