
In Black Box Algorithms We Trust (or Do We?)


Machine learning algorithms are leaving academic laboratories and entering real-world applications in medicine, finance, robotics, self-driving cars, public services, and personal assistants. Unfortunately, many of these algorithms cannot explain their results even to their programmers, let alone to end-users. They operate like black boxes (devices that can be viewed in terms of their inputs and outputs, without any knowledge of their internal workings).

At least, they do until they run into problems that result in negative events like the 2010 Flash Crash, Google Photos' mislabeling of an image of a Black couple as gorillas, and Amazon bookseller bots that bid against each other until a book's price exceeded $23 million.

To what extent is the black-box character of machine learning a problem, and what can be done to make machine learning more transparent and understandable? That was the topic of a conference session at the annual meeting of the American Association for the Advancement of Science (AAAS) recently in Boston.

Rich Caruana, a senior researcher at Microsoft Research, Redmond, WA, develops machine learning algorithms for critical applications in health care. Based on his more than 20 years of experience in the field, he says, "Many people do not realize that the problem is often in the data, as opposed to what machine learning does with the data. It depends on what you are doing with the model whether the data are used in the right or in the wrong way."

Caruana gives the example of a pneumonia risk prediction model on which he had worked. The purpose of the model was to evaluate whether a patient with pneumonia was at high or low risk, to help decide whether or not the patient should be admitted to the hospital. "On the basis of the patient data," says Caruana, "the model had found that patients with a history of asthma have a lower risk of dying from pneumonia. In reality, everybody knows that asthma is a very high risk factor for pneumonia. What the model found is the result of the fact that asthma patients get healthcare faster, which lowers their chance of dying compared to the general population."
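The mechanism is easy to reproduce outside of medicine. The sketch below is purely hypothetical (invented numbers, not Caruana's actual model or data): it simulates patient records in which asthma patients receive aggressive care sooner, then fits a logistic regression on only the columns a hospital record would plausibly contain. Because the fast-care confounder is missing from the inputs, the model assigns asthma a "protective" negative coefficient.

```python
# Hypothetical sketch of a confounded dataset teaching a model that
# asthma "lowers" pneumonia mortality. All numbers are invented for
# illustration; this is not Caruana's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

asthma = rng.binomial(1, 0.15, n)       # 1 = history of asthma
severity = rng.normal(0.0, 1.0, n)      # pneumonia severity

# Confounder: asthma patients get aggressive care sooner (assumption).
fast_care = rng.binomial(1, np.where(asthma == 1, 0.9, 0.3))

# Ground truth: asthma raises risk, but fast care lowers it even more.
logit = -2.0 + 1.0 * asthma + 1.5 * severity - 2.5 * fast_care
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Train on what the record contains: no 'fast_care' column.
X = np.column_stack([asthma, severity])
model = LogisticRegression().fit(X, died)

print("learned asthma coefficient:", model.coef_[0][0])
# Prints a negative value: asthma looks protective because it is a
# proxy for receiving treatment faster.
```

An interpretable model makes this kind of pattern visible before deployment; a black box trained on the same data would simply reproduce it silently.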

Most datasets have such 'landmines' in them, as Caruana calls them. "We have to understand the model before we deploy it in the real world. Otherwise, the model is going to hurt patients or drivers, or discriminate on the basis of gender or race."

According to Hanna Wallach of Microsoft Research New York City and the University of Massachusetts, Amherst, we are just at the beginning of making machine learning algorithms transparent. "What bothers me the most is that we do not have a clear picture of what transparency means; transparent to whom, and in what fashion?"

She notes that when data points represent humans, error analysis takes on a greater level of importance because errors have real-world consequences that involve people's lives. "It's not enough for a model to be 95% accurate. We need to know who's affected when there's a mistake, and in what way. For example, there's a big difference between a model that's 95% accurate because of noise and one that's 95% accurate because it performs perfectly for white men, but achieves only 50% accuracy when making predictions about women and minorities. Statistical patterns that hold for the majority may be invalid for a minority group."
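Wallach's point can be made concrete with a few lines of disaggregated error analysis. The arrays below are fabricated to mirror her numbers: a hypothetical model that is perfect on a 90% majority group and a coin flip on a 10% minority still scores about 95% overall.

```python
# Illustrative disaggregated error analysis; 'group', 'y_true', and
# 'y_pred' are fabricated to mirror the 95%-overall / 50%-minority example.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])
y_true = rng.binomial(1, 0.5, n)

# Hypothetical model: perfect on the majority, random on the minority.
y_pred = np.where(group == "majority", y_true, rng.binomial(1, 0.5, n))

print("overall accuracy:", (y_pred == y_true).mean())   # ~0.95
for g in ("majority", "minority"):
    mask = group == g
    print(f"{g} accuracy:", (y_pred[mask] == y_true[mask]).mean())
```

Reporting accuracy per subgroup, rather than a single aggregate number, is what reveals who bears the cost of the errors.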

Anders Sandberg of the Future of Humanity Institute at the University of Oxford, U.K., stresses that there is no "one size fits all" transparency. "As the consumer of a gadget, transparency for me means that the gadget needs to work in my life and for my purposes. For the programmer of the software, it means understanding why certain inputs lead to certain outputs. And for a lawyer, it means understanding something about liability, responsibility, and criminality."

So, what must be done to make machine learning algorithms more transparent?

What about developing a set of guiding principles to evaluate whether a machine learning algorithm can be trusted?

"You can definitely come up with a sort of checklist, but at the moment we are still a bit away from it," says Caruana. "The principles will also be very domain-dependent. Most important right now is to make people aware of such problems in the first place."

Wallach agrees: "I certainly think it is worth trying to come up with a set of principles, but it will be very difficult to preemptively cover all possible situations encountered by an algorithm in the real world."

According to Caruana, one of these principles is to leave as many variables as possible in the model so you can study what the model does with them. "Afterwards you might remove them, but if you remove them too early the model can learn bad things like 'asthma patients have a lower risk of dying from pneumonia' from the correlations among variables, even though the asthma variable has been removed."
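Continuing the hypothetical pneumonia sketch from above, one can see why removing a variable too early backfires: drop the asthma column while keeping an invented correlated proxy (an 'inhaler_rx' flag here), and the proxy inherits the spurious protective effect, where it is harder to spot.

```python
# Hypothetical continuation: remove 'asthma' from the inputs but keep a
# correlated proxy. 'inhaler_rx' is an invented stand-in for any
# variable that tracks asthma in real records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000

asthma = rng.binomial(1, 0.15, n)
inhaler_rx = rng.binomial(1, np.where(asthma == 1, 0.95, 0.02))  # proxy
severity = rng.normal(0.0, 1.0, n)
fast_care = rng.binomial(1, np.where(asthma == 1, 0.9, 0.3))

logit = -2.0 + 1.0 * asthma + 1.5 * severity - 2.5 * fast_care
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Asthma removed "too early": only the proxy and severity are visible.
X = np.column_stack([inhaler_rx, severity])
model = LogisticRegression().fit(X, died)

print("learned inhaler_rx coefficient:", model.coef_[0][0])
# Still negative: the bad correlation survives, now attached to a
# variable nobody thought to scrutinize.
```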

Sandberg adds as a general principle that the dataset must be diverse and represent the population as a whole.

Despite the pitfalls of machine learning algorithms, Sandberg points out there are many positive stories showing that the industry reacts quickly to sometimes-embarrassing outcomes. "Take the example in which a Google algorithm categorized a photo of a Black couple as 'gorillas.' Google rapidly responded to fix the problem. Everybody in machine learning and data science has to learn such lessons."

Caruana's guess is that ultimately we will hold machine learning algorithms to a higher standard than humans. "Think about what automation has done to flying. It is so incredibly safe compared to everything else humans do."

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
