
Communications of the ACM

ACM TechNews

Researcher: 'We Should Be Worried' This Computer Thought a Turtle Was a Gun


We see turtles; an artificial intelligence would see guns.

A research team at the Massachusetts Institute of Technology has demonstrated the first example of a real-world object becoming "adversarial" at any angle.

Credit: LabSix/MIT

The Massachusetts Institute of Technology's LabSix artificial intelligence (AI) research team has demonstrated the first example of a real-world three-dimensional (3D) object becoming "adversarial" at any angle.

An input subtly altered, sometimes by only a few pixels, to deceive an AI is known as an adversarial example, and the LabSix researchers successfully tricked an AI into classifying a real-world object, in this case a 3D-printed turtle, as a firearm.

The group spent six weeks developing an algorithm that fools a neural network from every viewpoint; with a few small changes to the turtle's coloring, they made the computer classify it as a rifle regardless of the viewing angle.
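The idea underlying such attacks can be sketched in a few lines. The toy example below, assuming only NumPy and a hypothetical two-class linear classifier (the names `W`, `predict`, and `eps` are illustrative, not from the article), shows the standard single-image "gradient sign" trick: nudge each input value in the direction that raises the wrong class's score. LabSix's actual algorithm goes further, optimizing the perturbation over many simulated viewpoints and lighting conditions so the printed object stays adversarial from every angle.

```python
import numpy as np

# Toy linear "classifier" over 2-D inputs: scores = W @ x.
# Row 0 scores class "turtle", row 1 scores class "rifle".
# (Illustrative sketch only; not LabSix's model or method.)
W = np.array([[ 1.0, 0.0],    # "turtle" weights
              [-1.0, 1.0]])   # "rifle" weights

def predict(x):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ x))

x = np.array([1.0, 0.0])          # the model labels this input "turtle" (class 0)

# For a linear model, the gradient of (rifle score - turtle score)
# with respect to x is simply W[1] - W[0].
grad = W[1] - W[0]
eps = 1.2                         # perturbation size, exaggerated for this 2-D toy
x_adv = x + eps * np.sign(grad)   # step each input value in the sign of the gradient

print(predict(x), predict(x_adv))  # prints "0 1": "turtle" becomes "rifle"
```

The perturbation changes each coordinate by a fixed small amount, which is why adversarial images can look essentially unchanged to a human while flipping the network's decision.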

"More and more real-world systems are going to start using these technologies," notes LabSix member Anish Athalye. "We need to understand what's going on with them, understand their failure modes, and make them robust against any kinds of attack."

From New Scientist

 

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
