News

Can We Trust Autonomous Weapons?

Nations consider using defense systems that can make their own lethal decisions.

Most reasonable people can see the benefits of using fully autonomous systems, particularly to help prevent injuries or death, as is the case with advanced driver assistance systems increasingly found in automobiles. When it comes to autonomous systems that are designed to take life rather than preserve it, there is significantly more debate.

Currently, the U.S. and other nations do not have any weapons systems that can operate fully autonomously, which in military parlance means selecting, aiming, and firing at a target without a human being “in the loop,” or somehow in control of the weapon system. However, a variety of military weapons systems operate semiautonomously, requiring human control or input to select targets while relying on pre-programmed algorithms to execute a strike.

A good example of this is the Lockheed Martin Long Range Anti-Ship Missile (LRASM), slated to enter service with the U.S. military within the next two years. The LRASM can be fired from a ship or plane and autonomously travel through the air, avoiding obstacles outside of the target area. Published reports indicate humans choose and program the algorithms used to seek out and identify potential targets, thus keeping a human in the loop. While the exact factors that make up the target selection algorithm are classified, it is likely a weighting of elements such as the target’s size, location, radar signature, and heat profile that together positively identify the target.
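
The exact algorithm is classified, but the kind of weighted scoring described above can be sketched in a few lines of Python. The feature names, weights, and threshold below are invented for illustration and bear no relation to the real system.

    # Hypothetical sketch of a weighted target-identification score.
    # Feature names, weights, and the threshold are invented for
    # illustration; the real LRASM algorithm is classified.

    # Each feature is assumed to be a sensor-derived score normalized
    # to the range [0, 1].
    WEIGHTS = {
        "size_match": 0.25,       # hull dimensions vs. expected profile
        "location_match": 0.20,   # inside the pre-programmed target area
        "radar_signature": 0.30,  # similarity to the expected radar return
        "heat_profile": 0.25,     # similarity to the expected infrared signature
    }

    IDENTIFICATION_THRESHOLD = 0.85  # minimum weighted score to declare a match

    def identification_score(features: dict) -> float:
        """Weighted sum of normalized sensor features."""
        return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

    def is_positive_identification(features: dict) -> bool:
        return identification_score(features) >= IDENTIFICATION_THRESHOLD

    # Example: a contact that matches strongly on radar and heat.
    contact = {"size_match": 0.9, "location_match": 1.0,
               "radar_signature": 0.95, "heat_profile": 0.8}
    print(f"{identification_score(contact):.2f}")   # 0.91
    print(is_positive_identification(contact))      # True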

Another example of a system with semiautonomous capabilities is Samsung’s SGR-A1, a military border sentry robot in development for deployment on the border between North and South Korea. Essentially an unmanned guard tower, the system is designed to assist border guards by scanning the area for those who might try to cross the border. The system is armed with a light machine gun and can dispense tear gas or rubber bullets, and is equipped with cameras, a laser range finder, and a pattern recognition algorithm designed to discern between people and animals. Currently, the system is designed to be operated under human control for target verification, though developers have given it the capability to use its sensors to detect, select, and shoot at targets autonomously.
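
The human-in-the-loop arrangement described here, in which the machine detects and classifies but a person must verify the target before any force is used, amounts to a simple control gate. The sketch below is a hypothetical illustration of such a gate; all of the names and logic are invented, not Samsung’s design.

    # Minimal sketch of a human-in-the-loop engagement gate for a
    # sentry system that detects and classifies contacts but must
    # wait for an operator's decision before using force. All names
    # and logic are hypothetical, not the SGR-A1's actual design.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Classification(Enum):
        HUMAN = auto()
        ANIMAL = auto()
        UNKNOWN = auto()

    @dataclass
    class Contact:
        track_id: int
        range_m: float            # e.g., from the laser range finder
        classification: Classification

    def operator_confirms(contact: Contact) -> bool:
        """Stand-in for the human step: present the contact to a guard
        and block until they approve or reject engagement."""
        answer = input(f"Engage track {contact.track_id} "
                       f"({contact.classification.name}, "
                       f"{contact.range_m} m)? [y/N] ")
        return answer.strip().lower() == "y"

    def handle_contact(contact: Contact) -> str:
        # Pattern recognition has already run; this gate only decides
        # whether the weapon may be used.
        if contact.classification is Classification.ANIMAL:
            return "ignore"
        # People (and unknowns) are never engaged without a human decision.
        return "engage" if operator_confirms(contact) else "track only"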

It is this last capability that has watchdogs worried. Systems such as LRASM and the SGR-A1 are currently approved for use only with a human approving the targets to be killed by the system, but there is considerable concern the U.S. and other world powers are on a fast track to developing machines able to kill people independently.

“I think it’s pretty clear that military mastery in the 21st century is going to depend heavily on the skillful blending of humans and intelligent machines,” says John Arquilla, professor and chair of the Department of Defense Analysis at the U.S. Naval Postgraduate School in Monterey, CA. “It is no surprise that many advanced militaries are investing substantially in this field.”

Indeed, in late 2014, then-U.S. Secretary of Defense Chuck Hagel unveiled the country’s so-called “Third Offset” strategy, essentially an attempt to offset the shrinking U.S. military force by incorporating technologies that improve the efficiency and effectiveness of weapons systems. While many of the specific aspects of the strategy are classified, industry observers agree a key tenet is increasing the level of autonomy in weapons systems, which will improve warfighting capability and reduce the number of humans required to operate them.

“For over a decade, we were the big guy on the block,” explains Major General (Ret.) Robert H. Latiff, an adjunct professor at the Reilly Center for Science, Technology, and Values at the University of Notre Dame and a research professor at George Mason University. “But with the emergence of China, and the reemergence of Russia, both of whom are very, very technically capable, and both of whom have invested fairly significantly in these same technologies, it goes without saying that the DoD (the U.S. Department of Defense) feels like they need to do this just to keep up.”

Military guidelines published by the U.S. Department of Defense in 2012 (Directive 3000.09) do not completely prohibit the development and use of autonomous weapons, but require Pentagon officials to oversee their use. That is why human rights groups such as the Campaign to Stop Killer Robots are actively lobbying the international community to impose a ban on the development and use of autonomous weapons systems.

“We see a lot of investment happening in weapons systems with various levels of autonomy in them, and that was the whole reason why we decided at Human Rights Watch back in 2012 to look at this,” explains Mary Wareham, advocacy director of the Arms Division of Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. “We’re seeking a preemptive ban on the development, production, and use of fully autonomous weapons systems in the United States and around the world.”

“We’re focusing quite narrowly on the point at which critical functions of the weapons system become autonomous,” Wareham says. “The critical functions that matter to us are the selection and identification of the target, and the use of force.” Wareham’s main concern is that the fully autonomous weapons systems of the future may rely solely on algorithms to select and kill targets, without a human in the loop to verify the system has made the right decision.
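
Wareham’s definition amounts to a simple test: a system is “fully autonomous” in the campaign’s sense only when both critical functions, selecting targets and applying force, operate without human control. A minimal sketch of that test, with invented field names:

    # Sketch of the campaign's definitional test as described above:
    # a weapons system counts as "fully autonomous" when its critical
    # functions (selecting/identifying targets and applying force)
    # operate without human control. Field names are invented.
    from dataclasses import dataclass

    @dataclass
    class WeaponSystem:
        name: str
        human_selects_target: bool     # a person picks/verifies targets
        human_authorizes_force: bool   # a person approves each strike

    def is_fully_autonomous(ws: WeaponSystem) -> bool:
        return not (ws.human_selects_target or ws.human_authorizes_force)

    # As currently fielded, LRASM keeps humans in both critical functions.
    lrasm = WeaponSystem("LRASM", human_selects_target=True,
                         human_authorizes_force=True)
    print(is_fully_autonomous(lrasm))  # False: a human stays in the loop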

Human Rights Watch is pushing to have a negotiated international treaty limiting, restricting, or prohibiting the use of autonomous weapons systems written and ratified within the next two or three years. “If this becomes [a decade-long process], then we’re in big trouble,” she admits, noting that at present, there are no such treaties in process within the international community. “At the moment, it is just talk,” Wareham acknowledges.

A key argument of groups such as Human Rights Watch is that these systems, driven by algorithms, may make mistakes in target identification, or may not be able to be recalled once deployed, even if the scenario changes. Others with military experience point out that focusing on the potential for mistakes when using fully autonomous weapons systems ignores the realities of warfighting.


“I think one of the problems in the discourse is the objection that a robot might accidentally kill the wrong person, or strike the wrong target,” Arquilla says. “The way to address this is to point out that in a war, there will always be accidents where innocents are killed. This has been true for millennia, it is true now, and it was true with all of the humans killed in the Médecins Sans Frontières hospital in Afghanistan.”

Arquilla adds that while the use of artificial intelligence in weapons will not eliminate mistakes, “Autonomous weapons systems will make fewer mistakes. They don’t get tired, they don’t get angry and look for payback, they don’t suffer from the motivated and cognitive psychological biases that often lead to error in complex military environments.”

Furthermore, military experts feel an outright ban would be impossible to enforce due to the secretive nature of most militaries, and likely would not be in the best interest of any military group or nation.

Indeed, even with today’s military technologies, getting the military or its contractors to discuss the exact algorithms used to acquire and select targets and to discharge a weapon is difficult, as disclosing this information would put them at a distinct tactical disadvantage. Therefore, even if a ban were put in place, devising an inspections regime similar to those used for chemical weapons and anti-personnel mines would be extremely complicated.

“A ban is typically only as good as the people who abide by it,” Latiff says, noting that those who will sign and fully abide by a ban make up “a pretty small fraction of the rest of the world.” In practice, he says, “When something becomes illegal, everything just goes underground. It’s almost a counterproductive thing.”

Work on autonomous weapons systems has been going on for years, and experts insist expecting militaries to stop developing new weapons systems that might provide an advantage is foolhardy and unrealistic. As such, “There is absolutely an arms race in autonomous systems underway,” Arquilla says. “We see this in both countries that are American allies, and also among potential adversaries. In particular, the Russians have made great progress. So have the British; they are putting together a fighter plane that can do everything a piloted fighter plane can, and can be built to higher performance characteristics, because you don’t have a human squeezed by G-forces in the cockpit.”

Others agree, even if they admit that at present, there are no significant advantages to using fully autonomous weapons, versus the semiautonomous systems already in use.

“I would imagine that [as autonomous weapons] become more capable, they will be seen to operate more effectively than systems with humans in the loop,” says Lieutenant Colonel Michael Saxon, an assistant professor teaching philosophy at the U.S. Military Academy at West Point. “Once you introduce swarms or you have to respond to adversary systems that are autonomous, humans in the loop will create real disadvantages. This, of course, is all predicated on advances in these machines’ capabilities.”

Still, observers suggest that while a ban on autonomous weapons may not be the right course of action, a deliberate approach to developing and incorporating them into the military arsenal is prudent.

“I think we’re probably closer to the kind of capabilities we’re talking about than most people think, and Russia and China are, too,” Latiff says. “These are dangerous systems. In the wrong hands, these things could really be awful.”

A ban on autonomous weapons would likely have little impact on the development of weapons systems in the near future, which still will be overseen by humans, even if the actual decision to select and strike a target is made by an autonomous system.

“An autonomous weapon operates without human control, but that does not mean that it is free from human input,” Arquilla says. “There are a lot of elements to the chain [of command]; the unfortunate term is the ‘kill chain.’ There will be people and machines intermixed within the chain.”
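
Arquilla’s picture of people and machines intermixed along the chain can be sketched as a pipeline whose stages are each tagged human or machine. The find-fix-track-target-engage-assess breakdown below is one common rendering of the kill chain; which stages are assigned to people here is a hypothetical policy choice for illustration, not doctrine.

    # Sketch of a mixed human/machine "kill chain". The stage names
    # follow one common rendering; the human/machine assignments are
    # a hypothetical policy choice, shown for illustration only.
    from typing import NamedTuple

    class Stage(NamedTuple):
        name: str
        actor: str  # "human" or "machine"

    KILL_CHAIN = [
        Stage("find", "machine"),    # sensors search autonomously
        Stage("fix", "machine"),     # localize the contact
        Stage("track", "machine"),   # maintain the track
        Stage("target", "human"),    # a person approves the target
        Stage("engage", "machine"),  # the weapon executes the strike
        Stage("assess", "human"),    # a person reviews the outcome
    ]

    def human_in_the_loop(chain) -> bool:
        """True if a person controls at least one critical stage."""
        critical = {"target", "engage"}
        return any(s.actor == "human" for s in chain if s.name in critical)

    print(human_in_the_loop(KILL_CHAIN))  # True: targeting stays with a person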

Whether or not a ban is put into place, the international community is likely to be faced with significant moral and legal questions surrounding the use of autonomous weapons, and whether they will be developed in ways that are consistent with accepted ideas about ethics and war, Saxon says.


“There are good arguments about increased moral hazard with autonomous weapons systems, that they make killing too easy,” Saxon says. “I think they also have an effect on traditional military virtues that need to be examined. What does it mean to be courageous, for instance, when your machines take the risks and do the killing?”

For Latiff’s part, while he does not support a ban on autonomous weapons, he would support a non-proliferation treaty allowing militaries to research and test these systems to ensure they can be made as reliable and safe as possible.

“At the end of the day, it’s kind of like nuclear weapons,” Latiff says. “Everybody’s going to get them, and the people that don’t get them are going to want them. The best we can hope for is that we slow it down.”

Further Reading

U.S. Department of Defense 2012 Directive on Autonomous Weapons: http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

Campaign to Stop Killer Robots: https://www.stopkillerrobots.org/

Video: Scary Future Military Weapons Of War-Full Documentary: https://www.youtube.com/watch?v=DDJHYEdKCBE
