
Communications of the ACM

ACM TechNews

Researchers Develop 'Vaccine' Against Attacks on Machine Learning


[Image: Vaccination. Credit: Science Photo Library]

Researchers at the Commonwealth Scientific and Industrial Research Organization's (CSIRO) Data61 group in Australia have developed techniques to "vaccinate" algorithms against adversarial attacks.

Cyberattackers often try to fool machine learning models by adding a layer of noise to an image, deceiving the model into misclassifying it.

The CSIRO researchers implemented a weak version of such an adversary, such as small modifications or distortions of a collection of images, to create a more "difficult" training dataset; a model trained on it more easily withstands adversarial attacks.
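The idea resembles adversarial training: perturb training examples with a weak, bounded distortion and train on both clean and perturbed copies. Below is a minimal, hypothetical sketch of that data-augmentation step using NumPy; it is not CSIRO's actual method, and the random sign noise here merely stands in for a true adversarial perturbation.

```python
import numpy as np

def vaccinate_dataset(images, labels, epsilon=0.05, seed=None):
    """Augment a training set with weakly perturbed copies of each image.

    Training on these slightly distorted examples ("vaccination") makes
    the resulting model harder to fool with similar perturbations later.
    This is an illustrative sketch, not the CSIRO implementation.
    """
    rng = np.random.default_rng(seed)
    # Weak adversary: bounded sign noise, a stand-in for a gradient-based
    # perturbation such as a single FGSM step.
    noise = epsilon * np.sign(rng.standard_normal(images.shape))
    perturbed = np.clip(images + noise, 0.0, 1.0)
    # Keep the original labels: the distortion should not change the class.
    return (np.concatenate([images, perturbed]),
            np.concatenate([labels, labels]))

# Example: 10 grayscale 8x8 "images" with pixel values in [0, 1].
imgs = np.random.rand(10, 8, 8)
lbls = np.arange(10) % 2
aug_imgs, aug_lbls = vaccinate_dataset(imgs, lbls, epsilon=0.05, seed=0)
```

A model trained on `aug_imgs`/`aug_lbls` sees both clean and distorted versions of every example, which is the "more difficult training dataset" the article describes.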

Said Data61's Adrian Turner, "The new techniques … will spark a new line of machine learning research and ensure the positive use of transformative [artificial intelligence] technologies."

From CSIRO (Australia)


Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA
