North Carolina State University (NC State) researchers have demonstrated the first countermeasure for shielding artificial intelligence from differential power analysis attacks.
Such attacks involve hackers exploiting neural networks' power signatures to reverse-engineer the inner mechanisms of computer chips running those networks.
The attack requires adversaries to physically access devices in order to measure their power signatures or analyze their electromagnetic emissions. Attackers can repeatedly run the neural network on specific computational tasks with known input data, eventually identifying power patterns associated with the secret weight values.
The countermeasure is adapted from a masking technique. Explains NC State's Aydin Aysu, "We use the secure multi-party computations and randomize all intermediate computations to mitigate the attack."
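The masking idea can be illustrated with a minimal sketch: each secret weight is split into two random additive shares, and the network's arithmetic is carried out on each share separately, so no single intermediate value (and thus no single power measurement) correlates directly with a secret weight. This is an illustrative example of additive masking in general, not the researchers' actual implementation; the modulus and fixed-point representation below are assumptions for the sketch.

```python
import random

MOD = 2**16  # assumed fixed-point modulus; illustrative only


def mask(secret):
    """Split a secret weight into two random additive shares mod MOD.

    Neither share alone carries information about the secret value."""
    r = random.randrange(MOD)
    return ((secret - r) % MOD, r)


def masked_dot(x, weight_shares):
    """Accumulate a dot product on each share independently, so every
    intermediate value is randomized by the fresh masks."""
    acc0 = acc1 = 0
    for xi, (w0, w1) in zip(x, weight_shares):
        acc0 = (acc0 + xi * w0) % MOD
        acc1 = (acc1 + xi * w1) % MOD
    return (acc0, acc1)


def unmask(shares):
    """Recombine the two partial results into the true output."""
    return sum(shares) % MOD


# Example: secret weights [3, 5], input [2, 4]
shares = [mask(w) for w in [3, 5]]
y = unmask(masked_dot([2, 4], shares))
assert y == 26  # 2*3 + 4*5, same result as the unmasked computation
```

Because the shares are re-randomized on each run, the power trace of any single intermediate value varies randomly across executions, which is what frustrates the statistical averaging at the heart of differential power analysis.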
From IEEE Spectrum
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA