
Ethical Robots: Let Their Artificial Consciences Be Their Guides

[Photo: Georgia Tech professor Ronald C. Arkin]

When lethal autonomous robots take their place on the field of battle and function independently of human control, what will govern the decisions of these next-gen weapons to either shoot or hold their fire?

If Ronald C. Arkin has anything to say about it, the software-based ethical architecture he and his Georgia Tech students have created will hold what he calls "humane-oids" to the same laws of war and rules of engagement prescribed for human warriors. But it’s not clear when the Department of Defense (DOD) intends to use his system–if ever.

In 2006, Arkin, who is the director of the Mobile Robot Lab and associate dean for research and space planning at Georgia Tech’s College of Computing, proposed a three-part project to the U.S. Army Research Office for which he received a three-year, $300,000 grant.

The first year involved a survey of military personnel, roboticists, and public policy makers to gauge their views on the potential use of lethal autonomous robots. The second year dealt with system design. The third called for implementing and testing the proof-of-concept architecture.

"But it was a hard sell," Arkin admits. "I was in a strategy meeting with the Army regarding future research initiatives and the topic of the military’s heavy investment in robotics was front and center. I brought up the subject of robot ethics and, to my surprise, I wasn’t laughed out of the room."

Arkin eventually got the go-ahead to proceed but received far less funding than for any of his previous DOD grants, which, he suggests, "may reflect the level of enthusiasm the Defense Department has for this subject."

While the military’s main interest is in winning wars, says Arkin, he believes it is also concerned about the serious secondary consequences of non-compliance with international humanitarian law: charges of war crimes, damage to soldiers’ morale, and increased hostility among local populations.

"For those reasons, the military trains its human troops to adhere to the laws of war," he says. "And for those reasons, the military’s robotic hardware must do the same."

Arkin describes the results of his project as an ethical architecture that includes, among other things, an ethical governor and a responsibility advisor.

The former performs a situation evaluation and, using a form of deontic logic, determines whether there exists a human-generated obligation to engage a target–and whether any active prohibitions prevent the robot from proceeding.
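
To make that obligation-versus-prohibition flow concrete, here is a minimal sketch in Python of how such a governor's permit/deny logic might be organized. The `Situation` fields, the prohibition list, and the function names are illustrative assumptions for this article, not Arkin's actual implementation; the one property it does take from his description is that an active prohibition always overrides an obligation to engage.

```python
# Hypothetical sketch of an "ethical governor" decision step.
# Names and structure are illustrative, not Arkin's implementation.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Situation:
    target_identified: bool    # positively identified military target
    engagement_ordered: bool   # human-generated obligation to engage
    near_protected_site: bool  # e.g., hospital, school, cultural site
    civilians_at_risk: bool    # noncombatants in harm's way

# Prohibitions: constraints derived from the laws of war and rules of
# engagement. If any active prohibition holds, engagement is forbidden.
PROHIBITIONS: List[Callable[[Situation], bool]] = [
    lambda s: s.near_protected_site,
    lambda s: s.civilians_at_risk,
    lambda s: not s.target_identified,
]

def governor_permits(situation: Situation) -> bool:
    """Permit lethal action only if a human-generated obligation to
    engage exists AND no prohibition is active; prohibitions win."""
    if not situation.engagement_ordered:
        return False  # no obligation to engage: hold fire
    return not any(forbidden(situation) for forbidden in PROHIBITIONS)

# Example: an obligation exists, but civilians are at risk -> hold fire.
s = Situation(target_identified=True, engagement_ordered=True,
              near_protected_site=False, civilians_at_risk=True)
print(governor_permits(s))  # False
```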

The responsibility advisor includes both a negative override–"like a big red button that says ‘don’t shoot’ on it"–and a positive override, "which would come into play if the robot says it can’t engage and the human operator insists it can," explains Arkin.

The responsibility advisor would inform the human operator why the robot is refusing to engage a target, and would then require two humans to sign off on the robot’s use of a weapon. A message would be sent back to military lawyers for after-action review, "which hopefully would discourage soldiers from making illegal engagements," says Arkin.
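
Below is a minimal, hypothetical sketch of that override workflow: a single operator can always veto, but overriding a refusal requires two distinct humans to accept responsibility, and every decision is logged for after-action review. The class name, method signatures, and log format are assumptions made for illustration, not the actual system.

```python
# Hypothetical sketch of a "responsibility advisor" override protocol.
# All names here are illustrative assumptions, not Arkin's code.

from datetime import datetime, timezone
from typing import List

class ResponsibilityAdvisor:
    def __init__(self) -> None:
        self.after_action_log: List[dict] = []  # sent for legal review

    def negative_override(self, operator: str) -> bool:
        """The 'big red button': any single operator can veto."""
        self._log("negative_override", [operator], engaged=False)
        return False

    def positive_override(self, reason_for_refusal: str,
                          signoffs: List[str]) -> bool:
        """If the robot refuses to engage, report why, and proceed
        only when two distinct humans accept responsibility."""
        print(f"Robot refuses to engage: {reason_for_refusal}")
        approved = len(set(signoffs)) >= 2
        self._log("positive_override", signoffs, engaged=approved)
        return approved

    def _log(self, action: str, operators: List[str],
             engaged: bool) -> None:
        self.after_action_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "operators": operators,
            "engaged": engaged,
        })

# Example: one sign-off is not enough; two distinct operators are.
advisor = ResponsibilityAdvisor()
print(advisor.positive_override("civilians at risk", ["Sgt. A"]))           # False
print(advisor.positive_override("civilians at risk", ["Sgt. A", "Lt. B"]))  # True
```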

But whether the DOD intends to utilize Arkin’s ethical architecture is anyone’s guess.

"You have to ask them," he says. "That’s the way it works; you never really know. You just finish the project, hand it in, and get little or no feedback. But the DOD keeps talking to me, so that may indicate they’re interested in continuing the research–as I am."

Here are four videos:

— operator interface for the ethical governor

— demo of the ethical responsibility advisor

— demo of the ethical governor

— incorporating guilt within an autonomous robot

Paul Hyman is a science and technology writer based in Great Neck, NY.   
