
Communications of the ACM

ACM TechNews

The Dark Secret at the Heart of AI


Pondering the mysteries inherent in artificial intelligence.

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Credit: Keith Rankin

Some experts warn that deep-learning artificial intelligence (AI) technologies should not be adopted without reservations if their creators cannot understand how they reason or guarantee accountability to users.

"You don't want to just rely on a 'black box' method," says Massachusetts Institute of Technology (MIT) professor Tommi Jaakkola.

As AI technology progresses, users may increasingly have to take a leap of faith in systems whose reasoning they cannot inspect. Some researchers are attempting to build "explainability" into AI to instill trust.

MIT professor Regina Barzilay believes human-machine collaboration will go a long way toward making AI explainable; one project in this area seeks to develop a deep-learning algorithm that can detect early signs of breast cancer in mammograms. However, this strategy cannot avoid the fact that any explanation is a simplification, meaning some information could be lost along the way.
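One common family of explainability techniques treats the model as a black box and probes it with perturbed inputs, attributing the output to each feature by how much the prediction changes when that feature is removed. The sketch below illustrates the idea in miniature; the "model", feature names, and weights are invented stand-ins, not part of any project described in the article.

```python
# Minimal sketch of perturbation-based explanation (the intuition behind
# occlusion maps and surrogate methods such as LIME): query the model as
# a black box, zero out one feature at a time, and report the resulting
# change in output. All names and numbers here are hypothetical.

def black_box_model(features):
    # Stand-in for an opaque model: a fixed weighted sum the explainer
    # is not allowed to look inside.
    weights = {"mass_density": 0.7, "margin_irregularity": 0.25, "patient_age": 0.05}
    return sum(weights[name] * value for name, value in features.items())

def explain(model, features, baseline=0.0):
    """Attribute the prediction to each feature by the drop in output
    when that feature is replaced with a baseline value."""
    full_score = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        attributions[name] = full_score - model(perturbed)
    return attributions

example = {"mass_density": 0.9, "margin_irregularity": 0.4, "patient_age": 0.6}
print(explain(black_box_model, example))
```

The resulting attribution scores rank the features by influence on this one prediction, which is exactly the kind of simplified explanation the article cautions about: the ranking is faithful only locally, and interactions between features are flattened away.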

University of Wyoming professor Jeff Clune speculates some aspects of machine intelligence will always be instinctual or inscrutable.

From Technology Review

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA

