Communications of the ACM

ACM TechNews

Sneaky Attacks Trick AIs Into Seeing or Hearing What's Not There


Information screens in an autonomous vehicle.

Facebook researchers have developed a new technique that could be used by hackers to trick autonomous cars.

Credit: Getty Images

Facebook researchers have developed Houdini, a new adversarial-example technique that hackers could use to trick autonomous cars into ignoring stop signs or to prevent surveillance cameras from spotting a suspect.

Houdini can fool both voice-recognition and machine-vision systems by adding small amounts of digital noise to images and sounds that humans would not notice. An attacker could craft such noise by probing what an algorithm sees or hears when presented with similar inputs.
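The core idea — a tiny, human-imperceptible perturbation that shifts a model's output — can be sketched with the standard fast-gradient-sign method (FGSM). This is a minimal illustration of the general class of attack, not Houdini's actual algorithm (which the article does not detail); the toy linear "model" and all names here are assumptions for demonstration.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.01):
    """Fast-gradient-sign-style perturbation: nudge each input value
    by a tiny fixed amount in the direction that increases the loss."""
    return x + epsilon * np.sign(grad)

# Toy "model": a linear scorer w @ x. For this model, the gradient of
# the score with respect to the input x is simply w.
w = np.array([0.5, -1.2, 0.8])
x = np.array([1.0, 2.0, -0.5])   # clean input (e.g. pixel or sample values)
x_adv = fgsm_perturb(x, grad=w, epsilon=0.01)

# Each element changes by at most epsilon (imperceptible at small scale)...
print(np.max(np.abs(x_adv - x)))
# ...yet the model's score moves in the attacker's chosen direction.
print(w @ x, w @ x_adv)
```

The same principle scales up: against a deep network, the gradient is computed by backpropagation, and the perturbed image or audio clip still looks or sounds unchanged to a human while the model's prediction flips.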

The Facebook researchers inserted a small amount of digital noise into a voice recording of a person speaking a phrase and played that recording to a speech-recognition app, which thought it was hearing a completely different sentence than the one that was actually spoken.

The University of Illinois at Urbana-Champaign's David Forsyth says the research adds to the ongoing mystery of why algorithms are so responsive to minute changes that humans would never notice.

From New Scientist

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
