News

AI, Explain Yourself

It is increasingly important to understand how artificial intelligence comes to a decision.

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment. Often, however, the “reasoning” behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI “explainable” to humans: to designers, for example, so they can improve it, and to users, so they can better judge when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.

Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.

Nonetheless, the choices made by these and other AI systems sometimes defy common sense, showing our faith in them is often an unjustified projection of our own thinking. “The implicit notion that AI somehow is another form of consciousness is very disturbing to me,” said Ben Shneiderman, a Distinguished University Professor in the department of computer science and founding director of the Human-Computer Interaction Laboratory at the University of Maryland.

As AI is applied more broadly, it will be critical to understand how it reaches its conclusions, sometimes for specific cases and sometimes as a general principle. At the individual level, designers have both ethical and legal responsibilities to provide such justification for decisions that could result in death, financial loss, or denial of parole. A reform of European Union data protection rules that took effect in May highlighted these responsibilities, although they refer only indirectly to a “right to explanation.” Still, any required explanations will not help much if they resemble the unread fine print of software end-user agreements. “It must be explainable to people,” Shneiderman said, including people who are not expert in AI.

For designers, providing explanations of surprising decisions need not be just an extra headache, but it “is going to be a very virtuous thing for AI,” Shneiderman stressed. “If you have an explainable algorithm, you’re more likely to have an effective one,” he asserted.

Explainable methods have not always performed better, though. For example, early AI comprised large sets of rules modeled on human decision criteria, and was therefore easy to understand within a restricted domain, but its capability was often disappointing. In contrast, the recent dramatic performance improvements in AI are based on deep learning using huge neural networks with many hidden features that are “programmed” by exposure to vast numbers of examples. These systems apply enormous computing power to annotated training datasets to discern patterns that are often beyond what humans can recognize.

What Is an Explanation?

Considering this internal complexity of modern AI, it may seem unreasonable to hope for a human-scale explanation. For a deep learning system trained on thousands of pictures of cats and not-cats, “Maybe the best analogy is that it develops a gut instinct for what is a cat and what isn’t,” said Ernest Davis, a professor of computer science at New York University. People devise post-hoc rationalizations for such decisions, citing pointy ears, a tail, and so forth, but “that doesn’t actually explain why you recognized it as a cat,” he said. “Generating that kind of account is a different task.”

An important challenge is that such independently generated explanations could also be chosen for their intuitive plausibility, rather than their accuracy. Presenting favorable stories will be particularly tempting when legal liability is at stake—for example, when a self-driving car kills a pedestrian, or an AI system participates in a medical mistake.

Liability assessment requires a detailed audit trail, Shneiderman said, analogous to the flight-data recorders that allow the U.S. National Transportation Safety Board to retroactively study airplane crashes. This kind of “explanation” allows a regulatory oversight agency to analyze a failure, assign penalties, and require modifications to prevent a recurrence. “My legal friends tell me that the law is perfectly fine,” Shneiderman said. “We don’t need new laws to deal with AI.”
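
To make the analogy concrete, here is a minimal sketch of what such a decision “flight recorder” might look like: each automated decision is appended to a hash-chained log so later tampering is detectable during an audit. The field names, hashing scheme, and example record are illustrative assumptions, not a description of any deployed system.

# Illustrative sketch of a decision "flight recorder": every automated
# decision is appended to a hash-chained log for later audit.
# Field names, hashing scheme, and storage are assumptions for illustration.
import hashlib
import json
import time

def log_decision(log, model_version, inputs, output, explanation):
    """Append one decision record, chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else ""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,
        "prev_hash": prev_hash,
    }
    # Hash the record contents plus the previous hash, so any later
    # alteration of an earlier entry is detectable during an audit.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

audit_log = []
log_decision(audit_log, "credit-model-1.3",
             {"income": 42000, "debt": 7000},
             "deny",
             "debt-to-income ratio above learned threshold")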

Explaining individual incidents is hard enough, but in other cases problems may only emerge in a system’s aggregate performance. For example, AI programs used to assess borrowers’ creditworthiness or criminals’ recidivism based on socioeconomic attributes may end up discriminating against individuals whose racial cohort tends to have unfavorable characteristics. Similarly, systems analyzing medical records “might pick up something that looks like race as an important indicator for some outcome,” when actually patients of different races just end up in hospitals that use different procedures, said Finale Doshi-Velez, an assistant professor of computer science in the John A. Paulson School of Engineering and Applied Sciences at Harvard University.
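
Detecting that kind of aggregate problem is largely a mechanical exercise: compare the rate of favorable decisions the system produces for each group. The following sketch does exactly that on made-up data; the groups, the decisions, and the rough 0.8 “four-fifths” threshold in the comment are illustrative assumptions.

# Minimal sketch of an aggregate fairness check: compare a model's rate of
# favorable decisions across groups. Data and threshold are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])      # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / n for g, (a, n) in counts.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", round(ratio, 2))
# A ratio well below ~0.8 would flag the model for closer review.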

Many end users, however, seek less legalistic explanations that may not be provably connected to the underlying program. Like AI, “People are incredibly complicated in terms of how we think and make decisions,” said Doshi-Velez, but “we are able to explain things to each other.”


In medical use, for example, it can be enough to have an explanation that clarifies the diagnostic or therapeutic decision for a subset of patients with similar conditions, Doshi-Velez said. Such a “local” explanation need not address all the complexities and outliers covered by the full-blown deep learning system.
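
One common way to produce such a “local” explanation, sketched below, is to perturb the case of interest, observe how the complex model responds, and fit a simple weighted linear model to that neighborhood (the idea popularized by LIME-style methods). The black-box stand-in, the features, and the kernel width are assumptions for illustration, not a description of any particular medical system; the sketch uses scikit-learn.

# Sketch of a "local" explanation: approximate a complex model's behavior
# with a simple linear model fitted only near one case of interest.
# black_box_predict, the features, and the kernel width are all assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box_predict, x, n_samples=500, width=0.5):
    rng = np.random.default_rng(0)
    # Perturb the case of interest and see how the black box responds.
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = black_box_predict(X)
    # Weight perturbed cases by closeness to the original case.
    w = np.exp(-np.linalg.norm(X - x, axis=1) ** 2 / width ** 2)
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    return surrogate.coef_          # per-feature influence, locally

# Toy stand-in for a complex model: risk rises with feature 0, falls with 1.
toy_model = lambda X: 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))
print(local_explanation(toy_model, np.array([0.3, 1.2])))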

“Depending on your application, you might think of different formats of explanation,” agreed Regina Barzilay, Delta Electronics Professor in the department of electrical engineering and computer science at the Massachusetts Institute of Technology. At one level, for example, the system can explain by “identifying excerpts from the input which drove the decision,” as her group is doing for molecular modeling. Another technique is “to find which instances in the training set are the closest” to the target.
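
The second technique Barzilay mentions can be illustrated in a few lines: embed the input and the training examples in some representation and return the closest matches. The toy character-count embedding below is purely an assumption for illustration; a real system would use the model’s own learned representation.

# Sketch of explaining a prediction by retrieving the nearest training
# examples under some embedding. The embedding and similarity measure here
# are illustrative assumptions, not Barzilay's group's actual systems.
import numpy as np

def nearest_training_examples(embed, x, train_inputs, k=3):
    E = np.stack([embed(t) for t in train_inputs])
    q = embed(x)
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-12)
    top = np.argsort(-sims)[:k]
    return [(train_inputs[i], float(sims[i])) for i in top]

# Toy embedding: bag of character counts.
def toy_embed(s):
    v = np.zeros(26)
    for c in s.lower():
        if c.isalpha():
            v[ord(c) - ord("a")] += 1
    return v

train = ["benzene ring", "amide bond", "ring strain", "ionic bond"]
print(nearest_training_examples(toy_embed, "aromatic ring", train))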

Appropriate Trust

In view of AI’s growing military importance, the U.S. Defense Advanced Research Projects Agency (DARPA) in 2017 rolled out an ambitious program to explore XAI from many different perspectives and compare them. “One of the main goals or benefits of the explanation would be appropriate trust,” stressed David Gunning, the program’s manager. “What you really need is for people to have a more fine-tuned model of what the system is doing so they know the cases when they can trust it and when they shouldn’t trust it.”

Most of the dozen projects aim to incorporate explanation-friendly features into deep learning systems; for example, preprogramming the internal network structure to favor familiar concepts.

A critical issue is whether explainable features degrade the performance of the AI. “I think there is an inherent trade-off,” Gunning said, although he noted that some participants disagree. Barzilay, in contrast, said experiments so far indicate any performance hit from making an AI explainable is “really, really minimal.”

As an alternative to modifying deep learning, one of the DARPA projects replaces it with an approach inherently easier to interpret. The challenge in that case is to make its performance more competitive, Gunning said.

A third strategy is to use a separate system to describe the learning system, which is treated as a black box, essentially using one learning system to analyze another. For this scheme, one question is whether the explanation accurately describes the original system.
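
A common way to realize this third strategy, and to test the accuracy question it raises, is to train a simple, interpretable model to mimic the black box and then measure its fidelity: the fraction of inputs on which the two agree. The sketch below, using scikit-learn, takes a shallow decision tree as the explainer and a random forest as a stand-in black box; both models and the synthetic data are illustrative assumptions, not the design of any DARPA project.

# Sketch of a post-hoc "explainer": fit an interpretable decision tree to
# mimic a black-box model, then measure fidelity -- how often the tree's
# predictions agree with the black box. Models and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
bb_labels = black_box.predict(X)          # treat as the behavior to imitate

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_labels)

# Fidelity: if the tree disagrees often, its "explanation" may not describe
# what the original system is actually doing.
fidelity = (surrogate.predict(X) == bb_labels).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))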


As the results come in, beginning in fall 2018, “the program should produce a portfolio of techniques,” Gunning said. An important feature of the program is a formal evaluation comparing how useful the systems’ results are to human users with and without explanations. Some of this assessment will be based on subjective impressions, but users will also try to predict, for example, whether the system will correctly execute a new task.

The goal, Gunning said, is to determine whether “the explanation gives them a better idea of the system’s strengths and weaknesses.”
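
Scoring that part of the evaluation can be as simple as measuring how often users’ predictions about the system match what it actually does, with and without explanations. The sketch below computes that agreement on made-up responses; all of the numbers are illustrative assumptions, not results from the DARPA program.

# Sketch of scoring "appropriate trust": compare users' predictions of whether
# the system will succeed on each new task against what actually happened,
# with and without explanations. All values are made up for illustration.
def trust_accuracy(user_predictions, actual_outcomes):
    hits = sum(p == a for p, a in zip(user_predictions, actual_outcomes))
    return hits / len(actual_outcomes)

actual       = [True, False, True, True, False, False, True, False]
with_expl    = [True, False, True, False, False, False, True, False]
without_expl = [True, True,  True, True,  True,  False, True, True]

print("with explanations:   ", trust_accuracy(with_expl, actual))
print("without explanations:", trust_accuracy(without_expl, actual))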

Ensuring Human Control

Ultimately, explanations must be understandable by humans who are not AI experts. The challenges of doing this and measuring the results are familiar to educators worldwide, and successful approaches must include not only computer science, but also psychology.

“This human-computer interaction is becoming more and more important,” for example for medical AI systems, said Andreas Holzinger, lead of the Holzinger Group at the Institute for Medical Informatics/Statistics of the Medical University of Graz, Austria, as well as an associate professor of applied computer science at the Graz University of Technology. “The most pressing question is what is interesting and what is relevant” to make the explanation useful in diagnosis and treatment. “We want to augment human intelligence,” Holzinger said. “Let the human do what the human can do well, and so for the computer.”

For scientific systems, users “are thinking about mechanistic explanations,” Barzilay said. “The potential is to have a symbiosis between machines and humans. If these patterns are provided to humans, can they really do better science?” she said. “I think this will be the next frontier.”

Rather than teamwork between AI and humans, however, Shneiderman regards the more appropriate goal as leveraging human decision making, not outsourcing it. “The key word is responsibility,” he said. “When we’re doing medical, or legal, or parole, or loans, or hiring, or firing, and so on, these are consequential.

“It’s time for AI to move out of its adolescent, game-playing phase and take seriously the notions of quality and reliability,” Shneiderman said.

Further Reading

Statement on Algorithmic Transparency and Accountability, Association for Computing Machinery US Public Policy Council, Jan. 12, 2017, https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf.

European Commission 2018 reform of EU data protection rules https://ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en

Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Schieber, S., Waldo, J., Weinberger, D., and Wood, A.
Accountability of AI Under the Law: The Role of Explanation, https://arxiv.org/abs/1711.01134

Fairness, Accountability, and Transparency in Machine Learning Group https://www.fatml.org/

Explainable Artificial Intelligence (XAI) Program, U.S. Defense Advanced Research Projects Agency, https://www.darpa.mil/program/explainable-artificial-intelligence

DARPA Perspective on AI U.S. Defense Advanced Research Projects Agency https://www.darpa.mil/about-us/darpa-perspective-on-ai

Shneiderman, B.
Algorithmic Accountability: Designing for Safety, Radcliffe Institute for Advanced Study, Harvard University https://www.radcliffe.harvard.edu/video/algorithmic-accountability-designing-safety-ben-shneiderman
