Deep learning systems for analyzing medical images can be exploited by cyberattackers in ways that humans cannot detect, according to a new study.
Harvard Medical School's Samuel Finlayson warns this relatively simple adversarial attack method could be easily automated.
His team tested deep learning systems with adversarial examples on three common imaging tasks: classifying diabetic retinopathy from retinal images, identifying pneumothorax from chest x-rays, and finding melanoma in skin photos.
The exploits alter pixels in ways a person would dismiss as noise, yet the changes deceive the software into misclassifying the images, in some tests up to 100% of the time.
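The article does not name the specific attack, but perturbations of this kind are typically built with gradient-based methods such as the fast gradient sign method (FGSM). The sketch below is a minimal, hypothetical illustration: a single logistic unit stands in for a deep network (the weights, input values, and step size `eps` are invented for the example), and each "pixel" is nudged by a tiny amount in the direction that increases the classifier's loss, flipping its prediction.

```python
import math

# Hypothetical toy classifier: the study's actual models are deep networks;
# a single logistic unit is used here only to make the gradient arithmetic visible.
WEIGHTS = [2.0, -1.0, 1.5, -0.5]  # assumed, illustrative parameters
BIAS = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def prob(x):
    """P(class 1) for a 4-"pixel" input x."""
    return sigmoid(sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS)

def fgsm(x, y, eps):
    """Fast-gradient-sign step: move each pixel by eps in the direction
    that increases the loss -log P(y), i.e. along sign(d loss / d pixel)."""
    p = prob(x)
    grad = [(p - y) * w for w in WEIGHTS]  # gradient of the loss w.r.t. each pixel
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

x = [0.2, 0.3, 0.1, 0.4]       # original "image", correctly scored as class 1
x_adv = fgsm(x, y=1, eps=0.1)  # each pixel shifted by only 0.1
```

After the step, `prob(x_adv)` drops below 0.5 and the predicted class flips, even though no pixel moved by more than `eps`; on real images with small `eps`, such changes are visually indistinguishable from noise.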
"We feel that adversarial attacks are particularly pernicious and subtle, because it would be very difficult to detect that the attack has occurred," Finlayson notes.
From IEEE Spectrum
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA