Communications of the ACM

ACM News

The Pentagon Inches Toward Letting AI Control Weapons


Part of a swarm of drones.

A report this month from the National Security Commission on Artificial Intelligence, an advisory group created by Congress, recommended, among other things, that the U.S. resist calls for an international ban on the development of autonomous weapons.

Credit: Getty Images

Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.

The drill was one of several conducted last summer to test how artificial intelligence could help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision. The demonstrations also reflect a subtle shift in the Pentagon's thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed.

From Wired
View Full Article
