
The Hunt for Explainable AI

HAL 9000, the fictional sentient computer from the movie 2001: A Space Odyssey, could explain some of his reasoning.

As we face a future in which important decisions affecting the course of our lives may be made by artificial intelligence (AI), the idea that we should understand how AIs make decisions is gaining increasing currency.

Which hill to position a 20-year-old soldier on, who gets (or does not get) a home mortgage, which treatment a cancer patient receives … such decisions, and many more, are already being made based on an often unverifiable technology.

"The problem is that not all AI approaches are created equal," says Jeff Nicholson, a vice president at Pega Systems Inc., makers of AI-based Customer Relationship Management (CRM) software.  "Certain 'black box' approaches to AI are opaque and simply cannot be explained."

This appears to be especially true of deep learning AI, wherein researchers arm a computer with a set of algorithms, assign the system a goal, and essentially turn it loose to come up with a solution.

The results are often breathtaking, eye-opening—and impossible to confirm.

For people like Ben Shneiderman, a professor of computer science at the University of Maryland Institute for Advanced Computer Studies, placing blind trust in AI software that makes life-altering decisions is simply unacceptable. He and others have launched a drive to hold AI more accountable for its decisions, and to force the companies behind AI to pull back the curtain on their thinking and decision-making.

"The move to Explainable AI is important and very positive," Shneiderman says. "The goal is more than 'human-in-the-loop.'  I think it should be 'human-owns-the-loop', which emphasizes the human responsibility for algorithm/machine/AI actions."

For many months now, key proponents of Explainable AI have hung their hopes on what looked like a major victory: a European Union (EU) regulation under development that promises to rein in unverifiable AI. Essentially, the new law—the General Data Protection Regulation (GDPR), slated to be implemented May 25—requires EU organizations and businesses to provide clear explanations of how their AI made decisions.

According to Sandra Wachter, a research fellow at the University of Oxford who has studied the regulation's evolution, the GDPR is without fangs. "All our research led me to conclude that the GDPR is likely to only grant a 'right to be informed' about the existence of automated decision-making," Wachter says.

In practice, that means EU organizations and companies will only be required to tell people that AI played a part in the decision to deny parole to a prisoner, hire a promising staffer, approve surgery for a patient, and other life-changing decisions. As for actually understanding how those decisions are made, the EU's response to its citizenry is essentially, "Hey, good luck with that."

That is discouraging to Explainable AI proponents, but they are pressing on to find other ways to ensure AI makes sense to everyday people.

Pega's CRM software, for example, is designed to let companies forgo the inscrutable AI tools included in the package when a decision needs to be easily understood by the humans it affects; in such cases, the software relies only on conventional, transparent techniques to make decisions. "For example, the system can present a very visual 'spider graph' that depicts the top predictors of that model and factors leading to the decision in question," Nicholson says. "This provides a digestible means for humans to begin to understand the decision."
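The general idea can be sketched briefly. The following Python snippet is a hypothetical illustration, not Pega's implementation: it treats a logistic regression's per-feature contributions to a single score as the "top predictors" a spider graph would plot, one axis per factor.

    # Hypothetical sketch (not Pega's code): surface the top predictors behind
    # one model decision, using per-feature contributions as a stand-in.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    X = StandardScaler().fit_transform(data.data)
    model = LogisticRegression(max_iter=1000).fit(X, data.target)

    case = X[0]                                # one individual decision
    contributions = model.coef_[0] * case      # each feature's contribution to the score
    top = np.argsort(np.abs(contributions))[::-1][:5]

    print("Top factors behind this decision:")
    for i in top:
        print(f"  {data.feature_names[i]:<25} {contributions[i]:+.2f}")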

Meanwhile, Google says it is investing significant resources (but does not disclose what kind or how much) in its People + AI Research Initiative (PAIR), whose website explains its mission is to build trust in AI through "the research and design of people-centric AI systems. We're interested in the full spectrum of human interaction with machine intelligence, from supporting engineers to understanding everyday experiences with AI."

Oxford's Wachter is calling on organizations to explain how AI decisions are made, with the explanations framed to meet three goals:

  1. to inform and help the individual understand why a particular decision was reached,
  2. to provide grounds to contest the decision if the outcome is undesired, and
  3. to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model.

"The method allows us to identify changes to variables that would result in a different decision, which can be shared with the affected party," Wachter says. "So we're providing a way not just to know what some of the key variables are and what needs to change in order for a preferred decision to be made, but also to assess whether the underlying data was accurate and the key variables were reasonable/non-invasive."

The U.S. Defense Advanced Research Projects Agency (DARPA)—the agency overseeing development of AI that someday could send soldiers into battle—is also very serious about moving the ball forward.

Led by David Gunning, Program Manager of DARPA's Information Innovation Office, the Explainable Artificial Intelligence (XAI) initiative funds researchers at Oregon State University, Carnegie Mellon University, the Georgia Institute of Technology, Stanford University, and other academic institutions to come up with AI whose decisions will make sense to soldiers in combat.

"Ultimately, we want these explanations to be very natural, translating these deep network decisions into sentences and visualizations," says Alan Fern, a computer science professor at Oregon State who is leading DARPA-funded Explainable AI research there. "Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust."

Adds Oxford's Wachter:  "We need to stop only calling for transparency, fairness, and accountability and actually start defining these terms and focus on solutions."

Joe Dysart is an Internet speaker and business consultant based in Manhattan, NY, USA. 
