Communications of the ACM

ACM TechNews

Improving Security as AI Moves to Smartphones


Researchers suggest the vulnerability of compressed, or quantized, artificial intelligence models to adversarial attack could be remedied by adding a mathematical constraint.

Credit: Ji Lin

Massachusetts Institute of Technology (MIT) and IBM researchers have demonstrated the vulnerability of compressed, or quantized, artificial intelligence (AI) models to adversarial attack.

They suggest this could be remedied by adding a mathematical constraint during quantization, to reduce the odds that an AI will be exploited by a slightly modified image and misclassify what it sees.

Deep learning models quantized from 32 bits down to 8 bits or fewer are more susceptible to adversarial attacks, which can slash their accuracy from 30–40% to less than 10%.
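The quantization step being described, and the idea of adding a constraint to it, can be sketched in NumPy. This is a minimal illustration, not the researchers' implementation: the fixed `clip` bound is a hypothetical stand-in for their actual mathematical constraint, which limits how perturbation errors amplify from layer to layer.

```python
import numpy as np

def quantize(x, bits=8):
    # Uniform symmetric quantization: snap values to 2**bits - 1 evenly
    # spaced levels, then map back to floats (dequantize).
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def quantize_constrained(x, bits=8, clip=1.0):
    # Clamp to a fixed range before quantizing. Because the range (and
    # hence the quantization scale) no longer depends on outlier values,
    # a small perturbation cannot grow beyond [-clip, clip] at this step.
    x = np.clip(x, -clip, clip)
    scale = clip / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=5).astype(np.float32)
w_q = quantize(w)                          # plain 8-bit quantization
w_qc = quantize_constrained(w, clip=1.0)   # constrained variant
```

In the unconstrained version, the quantization scale tracks the largest activation, so an adversarially enlarged value stretches the grid and amplifies error; fixing the range removes that lever.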

Adding the constraint improved performance under attack, with smaller models in certain conditions even outperforming the 32-bit model.

Said MIT's Song Han, "Our technique limits error amplification and can even make compressed deep learning models more robust than full-precision models."

From MIT News

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA