
The Dark Side of AI

Can we trust Artificial Intelligence?
Artificial intelligence has evolved to the point where machine learning technologies are being combined with natural language processing to imitate how a human interacts and thinks in a number of applications. But we still don't know exactly how these systems arrive at their conclusions.

As far back as 1950, Alan Turing posed the question "Can machines think?" Turing, a British computer scientist and mathematician, is believed to be one of the first to talk about machines having a cognitive aspect and an ability to make decisions.

Today, of course, artificial intelligence (AI) has evolved to the point where machine learning technologies are being combined with natural language processing to imitate how a human interacts and thinks in a number of applications. Companies have started jumping on the AI bandwagon; Nvidia, for one, announced recently that it will train 100,000 developers on AI this year, calling it "the defining technology of our generation."

As AI is used increasingly in a growing range of applications (consider the advent of self-driving cars, for one), industry experts say it has a dark side: the systems can, and will, fail. "We've never before built machines that operate in ways their creators don't understand," writes Will Knight in the April 2017 MIT Technology Review article, "The Dark Secret at the Heart of AI." "How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?"

The answer is we simply don't know, and experts believe it will be some time before we can rely solely on AI in mission-critical scenarios. "The only certainty we can control with AI is whether to use it or not," says Sue Feldman, president of consultancy Synthexis and managing director of the Cognitive Computing Consortium, an organization whose goal is to provide a forum for researchers, developers, and practitioners of cognitive computing and its allied technologies. "And of course, there is no way, really, to put the genie back in the bottle. Control is not a possibility."

Among the implications we need to be aware of is the fact that "software is buggy," says Feldman. Another is that algorithms are crafted by humans who have certain biases about how systems and the world in general work, which may not match someone else's biases. There is no unbiased data set, she notes, "and we use data sets with built-in biases or with holes in the data to train AI systems." Consequently, "These systems are by their very nature, then, biased or lacking in information. If we depend on these systems to be perfect, we are letting ourselves in for errors, mistakes, and even disasters."
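
To make that concrete, here is a minimal sketch (the data, groups, and labeling rules are hypothetical, and scikit-learn and NumPy are assumed to be available) of how a hole in the training data becomes a biased model: a toy classifier is trained almost entirely on one group, and its accuracy falls to roughly chance on the group the data mostly leaves out.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def group_a(n):
    # Group A: the label depends on feature 0.
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] > 0).astype(int)

def group_b(n):
    # Group B: the label depends on feature 1 instead.
    X = rng.normal(size=(n, 2))
    return X, (X[:, 1] > 0).astype(int)

# The "hole": the training set is dominated by group A.
Xa, ya = group_a(2000)
Xb, yb = group_b(20)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
Xa_test, ya_test = group_a(1000)
Xb_test, yb_test = group_b(1000)
print("accuracy on group A:", model.score(Xa_test, ya_test))  # typically ~0.99
print("accuracy on group B:", model.score(Xb_test, yb_test))  # typically ~0.5 (chance)

Nothing in this pipeline is "broken"; the model simply reflects the data it was given, which is the sense in which the bias is built in.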

AI systems will never be able to predict something that hasn't already happened, adds Anne Moxie, a senior analyst at Boston-based research and advisory firm Nucleus Research. "They're learning from past events but won't be able to predict something that's unexpected," she says. "They can't think the way humans can and speculate on something they don't have any data for."

AI systems are good for scanning X-ray films, for example, and identifying patterns and anomalies doctors should pay attention to, she says. However, when it comes to having AIs make real decisions, "I think it will be a really long time before anyone trusts a machine to do that," Moxie says. "It's not that it wouldn't be able to make the decision; it's whether humans would allow that. I'd personally prefer a doctor do that."

It's important to understand what AI systems are good for and how they are limited, so we don't place too much trust in what they can do, stresses Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory. Echoing Moxie, he says, "The systems today are well-suited for narrower tasks like medical image processing; we cannot weed through the same amount of data as the systems can, and pick out those possibly subtle correlations with outcomes across diverse sources of data. The systems can also reason many steps ahead, as in games, and this horizon is rapidly increasing, and expanding in scope."

Jaakkola adds that we are "nowhere near" having systems replace humans in various respects, or having them reach common-sense reasoning in realistic, open-ended scenarios.

For the time being, the human connection remains a key component of making AI systems effective. Such systems should be well-tested, and their use should strike a balance between reputable data and human decision-making, says Feldman, because "humans balance AI systems and [they] plug each other's blind spots."

AI systems can make valuable connections and discover patterns we would not think to look for, and humans can decide whether to use that information, she notes: "a perfect partnership in which one of the partners won't be insulted if their input is ignored."

Like anything, AI is a progression, says Jaakkola. Before we can truly trust these systems, "There needs to be transparency in what their capabilities are and their limitations, and [in] defining the confidence we can place on their predictions. And people are working to quantify that confidence."
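
One way to read "quantify that confidence" in practice is calibration: checking whether the probabilities a model reports match how often it actually turns out to be right. Below is a minimal sketch of such a check (hypothetical data; scikit-learn and NumPy assumed), binning a classifier's stated probabilities and comparing them with the observed outcome rates.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary task with label noise, so no prediction can be fully certain.
X = rng.normal(size=(5000, 2))
y = ((X[:, 0] + 0.5 * rng.normal(size=5000)) > 0).astype(int)
X_train, y_train, X_test, y_test = X[:4000], y[:4000], X[4000:], y[4000:]

model = LogisticRegression().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # the model's stated P(y = 1)

# Group predictions by stated confidence and compare with what actually happened.
bins = np.linspace(0.0, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (proba >= lo) & (proba < hi)
    if mask.any():
        print(f"stated P(y=1) ~ {proba[mask].mean():.2f}, "
              f"observed rate {y_test[mask].mean():.2f}, n = {mask.sum()}")

When the stated and observed numbers track each other, the model is at least honest about its own uncertainty; when they diverge, that is exactly the kind of gap the transparency Jaakkola describes is meant to expose.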

Esther Shein is a freelance technology and business writer based in the Boston area.
