Researchers at Harvard University say they have developed noise-robust classifiers that are hardened against the worst case of noise, meaning added data that disrupts or skews information the algorithm has already learned.
The team notes these algorithms offer guaranteed performance across a range of noise cases and perform well in practice.
The researchers want to use this new technology to help protect deep neural networks, which are vital for computer vision, speech recognition, and robotics, from cyberattacks.
"Since people started to get really enthusiastic about the possibilities of deep learning, there has been a race to the bottom to find ways to fool the machine-learning algorithms," says Harvard professor Yaron Singer.
He notes the most effective way to fool a machine-learning algorithm is to introduce noise tailored specifically to the classifier in use, and this "adversarial noise" could wreak havoc on systems that rely on neural networks.
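The article does not describe the attack concretely. As an illustration of how noise tailored to a specific classifier can flip its prediction, here is a minimal sketch in the spirit of gradient-sign attacks against a toy linear classifier; the model, weights, and step size are hypothetical and not the Harvard team's method:

```python
import numpy as np

def predict(w, b, x):
    """Linear classifier: class 1 if w.x + b > 0, else class 0."""
    return int(np.dot(w, x) + b > 0)

def adversarial_noise(w, eps):
    """Step against the gradient of the score to lower it.
    For a linear model, the gradient of the score w.r.t. x is just w,
    so the worst-case bounded perturbation follows -sign(w)."""
    return -eps * np.sign(w)

# Illustrative weights and input (assumptions for this sketch).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])

clean_pred = predict(w, b, x)                 # score = 1.5 -> class 1
x_adv = x + adversarial_noise(w, eps=0.6)     # small, classifier-tailored noise
adv_pred = predict(w, b, x_adv)               # score = -0.3 -> class 0

print(clean_pred, adv_pred)
```

A perturbation of at most 0.6 per coordinate, chosen using knowledge of the classifier's weights, is enough to flip the decision; random noise of the same magnitude typically would not be.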
From Harvard University
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA