
Communications of the ACM

ACM TechNews

Using Adversarial Attacks to Refine Molecular Energy Predictions


[Image: a molecular structure]

Adversarial attacks can improve the reliability of neural networks (NNs) in predicting molecular energies by quantifying their uncertainty, according to a new report by Massachusetts Institute of Technology (MIT) researchers. The team used adversarial attacks to sample molecular geometries on a potential energy surface (PES), training multiple NNs on the same data to forecast the PES and using their disagreement as an uncertainty estimate.
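The idea pairs an ensemble of models, whose disagreement serves as an uncertainty estimate, with an adversarial step that perturbs a geometry to maximize that disagreement. Below is a minimal one-dimensional sketch of this loop; the double-well potential, the radial-basis surrogates standing in for neural networks, and all names are illustrative assumptions, not the MIT team's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "potential energy surface": a double well standing in for a
# real molecular PES (illustrative only).
def true_pes(x):
    return x**4 - x**2

train_x = np.linspace(-1.2, 1.2, 25)   # geometries seen during training
centers = np.linspace(-2.0, 2.0, 15)   # RBF centers for the surrogates

def fit_surrogate(noise):
    """Least-squares radial-basis fit to noisy energies: one ensemble member."""
    y = true_pes(train_x) + noise
    phi = np.exp(-((train_x[:, None] - centers[None, :]) ** 2) / 0.1)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return lambda x: np.exp(-((x - centers) ** 2) / 0.1) @ w

# Five surrogates trained on the same data with different noise draws,
# playing the role of the NN ensemble.
ensemble = [fit_surrogate(rng.normal(0.0, 0.05, train_x.size))
            for _ in range(5)]

def uncertainty(x):
    """Ensemble disagreement (variance) -- the uncertainty estimate."""
    return float(np.var([m(x) for m in ensemble]))

def adversarial_attack(x0, steps=150, step_size=0.02, eps=1e-4):
    """Sign-gradient ascent on the ensemble variance: perturb the
    geometry toward where the models disagree most, keeping the most
    uncertain geometry seen along the way."""
    x = float(x0)
    best_x, best_u = x, uncertainty(x)
    for _ in range(steps):
        # Finite-difference gradient of the uncertainty w.r.t. the geometry.
        grad = (uncertainty(x + eps) - uncertainty(x - eps)) / (2 * eps)
        x += step_size * np.sign(grad)
        u = uncertainty(x)
        if u > best_u:
            best_x, best_u = x, u
    return best_x

x_adv = adversarial_attack(0.1)
print(f"uncertainty at x=0.1: {uncertainty(0.1):.2e}")
print(f"uncertainty at attacked geometry x={x_adv:.2f}: {uncertainty(x_adv):.2e}")
```

In an active-learning loop of this kind, each attacked geometry would then be labeled with a reference calculation (e.g., DFT) and added to the training set, shrinking the model's uncertainty precisely in the regions a simulation is likely to visit.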

"We aspire to have a model that is perfect in the regions we care about [i.e., the ones that the simulation will visit] without having had to run the full ML [machine learning] simulation, by making sure that we make it very good in high-likelihood regions where it isn't," said MIT's Rafael Gómez-Bombarelli.

From MIT News
View Full Article

 

Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA


 
