
Communications of the ACM

ACM TechNews

Stupid AI: How Humans Can Stop Machines From Falling for Visual Tricks

Artificial intelligence can easily misread a sign if someone has tampered with the image.


Credit: Getty Images

Research from Johns Hopkins University (JHU) suggests new ways to prevent artificial intelligence (AI) from being visually deceived.

Experiments by JHU's Zhenglong Zhou and Chaz Firestone involved showing people a broad range of adversarial images and asking them to select, from a list of up to 48 options, the object an AI would wrongly identify. Across six different image types, 81% to 98% of participants chose the machine's (incorrect) answer at above-chance rates.

Auburn University's Anh Nguyen says this "suggests that humans are able to decipher these images in the same way as the poor victim machines do."

Nguyen believes people could help computers cope with adversarial images by training AIs to emulate human visual perception and using that model as a defense mechanism, filtering out anything that does not conform to what the human model sees.
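The defense Nguyen sketches amounts to an agreement check between two models: accept the classifier's label only when a human-perception-aligned model concurs. A minimal illustration, assuming hypothetical stand-in models (this is not the researchers' code, and the function and model names are invented for the example):

```python
# Sketch of the agreement-based defense: run the input through both the main
# classifier and a human-aligned model, and flag the input if they disagree.

def agreement_filter(classifier, human_model, image):
    """Return the classifier's label only if the human-aligned model agrees;
    otherwise return None to mark a suspected adversarial input."""
    machine_label = classifier(image)
    human_label = human_model(image)
    if machine_label == human_label:
        return machine_label
    return None  # disagreement: treat as suspected adversarial image

# Toy stand-ins for the two models (purely illustrative thresholds/labels):
classifier = lambda img: "guitar" if sum(img) > 10 else "penguin"
human_model = lambda img: "penguin"

print(agreement_filter(classifier, human_model, [0, 1, 2]))  # models agree
print(agreement_filter(classifier, human_model, [9, 9, 9]))  # models disagree
```

In practice the "human model" would be a network trained to match human judgments on adversarial stimuli, and disagreement would trigger rejection or human review rather than a silent None.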

From New Scientist


Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


