Researchers at New York University (NYU) have demonstrated a cyberattack against artificial intelligence (AI) that controls driverless cars and image-recognition systems by installing an invisible backdoor in the software.
The team says AI models supplied by cloud providers could be infected with these backdoors; an infected model functions normally until a predetermined trigger causes it to mistake one object for another.
The NYU method trains a neural network to recognize the trigger with higher confidence than the features it is actually supposed to detect, so the trigger's presence overrides the correct classification in favor of the attacker's chosen one.
The complexity of the network is such that there is currently no test for this form of tampering.
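The attack described above is a form of training-data poisoning: a small visual trigger is stamped onto a fraction of the training images, which are then relabeled as an attacker-chosen class, so the trained network learns to associate the trigger with that class. The sketch below is a minimal illustration of that idea; the function names, patch shape, and poisoning rate are hypothetical choices for demonstration, not the researchers' actual implementation.

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, size=3):
    """Overlay a small bright square (the 'trigger') in the image corner."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Stamp the trigger onto a fraction of the training images and
    relabel them as the attacker's chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Toy demo: 100 random 28x28 "images" with 10 classes.
imgs = np.random.rand(100, 28, 28)
labs = np.random.randint(0, 10, size=100)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7, rate=0.1)
```

A network trained on such a poisoned set behaves normally on clean inputs, which is why, as noted above, the tampering is so hard to detect by testing alone.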
The researchers note this hack could make cloud customers more suspicious of the training protocols on which their AIs rely.
"Outsourcing work to someone else can save time and money, but if that person isn't trustworthy it can introduce new security risks," says NYU professor Brendan Dolan-Gavitt.
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA