
Communications of the ACM

ACM TechNews

AI Could Make Cyberattacks More Dangerous, Harder to Detect


Tracing cyberattacks. (Credit: medium.com)

Scientists warn that hackers could weaponize artificial intelligence (AI) to conceal and accelerate cyberattacks and potentially escalate their damage.

IBM researchers last month demonstrated "DeepLocker," AI-powered malware designed to hide its damaging payload until it reaches a specific victim, identifying its target using indicators such as facial recognition, voice recognition, and geolocation. IBM's Marc Stoecklin said that with DeepLocker, "AI becomes the decision maker to determine when to unlock the malicious behavior."
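The concealment idea can be sketched harmlessly: if the payload is encrypted with a key derived from an attribute of the intended target, static analysis of the sample reveals nothing until that exact attribute is observed at runtime. The sketch below is illustrative only, not DeepLocker's actual implementation; the hostname, the XOR keystream, and the "payload" string are all invented for the demo, and the payload here is a harmless text string.

```python
import hashlib

def derive_key(attribute, length):
    """Stretch a SHA-256 digest of the target attribute into a keystream."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(f"{attribute}:{counter}".encode()).digest()
        counter += 1
    return stream[:length]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

TARGET = "workstation-42.example.org"       # hypothetical attribute the sample waits for
payload = b"harmless demo payload"
blob = xor(payload, derive_key(TARGET, len(payload)))  # this opaque blob is what ships
fingerprint = hashlib.sha256(payload).hexdigest()      # lets the sample verify an unlock

def try_unlock(observed_attribute):
    """Decrypt only succeeds when the observed environment matches the target."""
    candidate = xor(blob, derive_key(observed_attribute, len(blob)))
    if hashlib.sha256(candidate).hexdigest() == fingerprint:
        return candidate
    return None

print(try_unlock("laptop-07.example.org"))  # None: wrong environment, blob stays opaque
print(try_unlock(TARGET))                   # the payload, revealed only on the target
```

Because the key never appears in the sample, a defender analyzing the binary sees only the opaque blob, which is what makes this style of targeting hard to detect.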

Meanwhile, the Stevens Institute of Technology's Giuseppe Ateniese has investigated the use of generative adversarial networks (GANs), in which two neural networks compete, one generating candidates and the other judging them, to defeat safeguards such as passwords. He designed a GAN trained on leaked passwords found online, which learns their patterns and generates likely password guesses faster than brute-force attacks.
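A full GAN is beyond a short sketch, but the underlying advantage over brute force, learning the character patterns of real passwords and emitting high-probability candidates first, can be illustrated with a much simpler stand-in. The toy "leak" and the first-order Markov model below are assumptions for the demo, not Ateniese's method:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus of "leaked" passwords (invented for illustration).
leaked = ["password1", "password123", "passw0rd", "letmein1", "letmein123"]

# Learn first-order character transitions, with ^ and $ as start/end markers.
transitions = defaultdict(Counter)
for pw in leaked:
    for prev, nxt in zip("^" + pw, pw + "$"):
        transitions[prev][nxt] += 1

def likely_candidates(max_len=12, beam=50):
    """Beam search over learned transitions, most probable guesses first."""
    beams = [("", "^", 1.0)]          # (text so far, current state, probability)
    finished = []
    for _ in range(max_len):
        expanded = []
        for text, state, p in beams:
            total = sum(transitions[state].values()) or 1
            for ch, n in transitions[state].items():
                if ch == "$":
                    finished.append((text, p * n / total))
                else:
                    expanded.append((text + ch, ch, p * n / total))
        beams = sorted(expanded, key=lambda t: -t[2])[:beam]
    return [text for text, _ in sorted(finished, key=lambda f: -f[1])]

print(likely_candidates()[:5])  # leak-like strings, ordered by learned probability
```

Unlike brute force, which must enumerate the whole keyspace, this generator spends its guesses only on strings that resemble passwords people actually choose; a GAN pushes the same idea much further by learning far richer structure than single-character transitions.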

Said Ateniese, "We need to study how AI can be used in attacks, or we won't be ready for them."

From The Wall Street Journal

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
