
Making Empathy Artificial

Scientists strive to give artificial intelligence the ability to understand what others are feeling.

[Illustration: a heart held by a robotic hand. Credit: Getty Images]

It is difficult enough for humans to read each other’s emotions and display the appropriate level of empathy. Asking a machine to think, feel, and act like a person pushes the boundaries of today’s artificial intelligence (AI).

Yet, in order to design better service robots, chatbots, and generative AI systems, it is critical to imbue computing devices with a sense of empathy, the capacity to understand what another person is experiencing. While current artificial agents can simulate basic forms of compassion, they are simply responding to keywords or other basic cues and spitting out coded responses.
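
To see how shallow that keyword-driven approach is, consider a minimal sketch, entirely hypothetical and not drawn from any system mentioned in this article, of an agent that matches trigger words and replays canned sympathy:

```python
# Minimal sketch of keyword-triggered "empathy": the agent does not model the
# user's state at all; it only matches trigger words and replays canned text.
CANNED_RESPONSES = {
    ("sad", "upset", "down"): "I'm sorry you're feeling that way.",
    ("angry", "furious", "mad"): "That sounds frustrating.",
    ("worried", "anxious", "scared"): "That sounds stressful. I'm here to help.",
}
DEFAULT = "I understand. Tell me more."

def respond(message: str) -> str:
    words = message.lower().split()
    for triggers, reply in CANNED_RESPONSES.items():
        if any(t in words for t in triggers):
            return reply
    return DEFAULT

print(respond("I'm feeling really down today"))  # -> "I'm sorry you're feeling that way."
print(respond("My dog died"))                    # -> default reply; the cue is missed entirely
```

The second example shows the brittleness: a clearly painful message that contains no trigger word gets the same generic filler as small talk.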

According to Anthony G. Vaccaro, a postdoctoral research associate at the University of Southern California’s NeuroEndocrinology of Social Ties (NEST) Lab, “Today’s systems are lacking. There is a long way to go to reach the point where AI can detect feelings and function at a level that consistently delivers the appropriate empathy and understanding.”

For now, “Machine empathy remains a highly abstract problem,” says Joanna Karolina Malinowska, an associate philosophy professor at Adam Mickiewicz University in Poland and a member of the university’s Cognitive Neuroscience Center. “Developing systems that understand complex human behavioral cues, including ambivalence and uncertainty, is extraordinarily challenging.”

No Secondhand Emotions

Click into an online chatbot or interact with a service robot and odds are you will encounter a system that incorporates some form of empathy; it is fundamental to gaining trust with, and buy-in from, humans. Yet, while the words streaming from the device may at times sound comforting, these one-approach-fits-all systems can also seem contrived and insincere.

The problem is rooted in a basic issue: empathy is incredibly complex and intricate. “There are a lot of factors that go into empathy and how people perceive empathy, including what they prefer and how they respond in different situations,” Vaccaro points out. Alas, today’s AI cannot adapt and adjust to individual traits, such as one person preferring humor and another preferring affirmations in response to the same event.

Attempting to replicate the entire spectrum of human thinking and feelings is difficult because while all humans are different, robots must operate with a restricted set of data that is hard-coded into the system, Malinowska says. Recognizing and interpreting words in different contexts is tough enough, but these systems also must grapple with facial expressions and body language. “Even the best systems today frequently err. This isn’t surprising, because humans also frequently misinterpret others’ emotions,” she says.

Not surprisingly, the line between convincing and manipulative behavior in AI can be remarkably thin. “Some features arouse human sympathy and may strengthen a person’s tendency to empathize with robots, while others can accentuate the artificial nature of an interaction, potentially leading to irritation, fear, frustration, embarrassment, or even aggression in the user,” Malinowska says.

An underlying issue is that humans tend to anthropomorphize robotic systems, Malinowska notes. Sometimes that’s a good thing; other times it’s a problem. As a result, matching robot behavior to a specific situation or use case can prove remarkably difficult. For example, a support robot in a hospital and a bomb-squad robot interact with humans in very different ways, and users will likely have very different reactions to each.

These robots should also evoke various emotions in their users—care bots can, and even in some situations should, arouse empathy, while in the case of military robots this is typically undesirable. “In order to avoid design errors and achieve the desired outcomes, we must understand the underlying factors driving this process,” Malinowska explains.

The goal, then, is not only to develop better algorithms, but also multimodal systems that respond not just to words or text, but also incorporate machine vision, speech and tonal recognition, and other cues rooted in human biology. “Better artificial empathy revolves around more accurately mirroring the motivations behind human behavior,” Vaccaro says.
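
As a rough illustration of that multimodal idea, here is a minimal sketch; the channel scores, the fixed weight values, and the `Cues` and `fuse` names are hypothetical placeholders, and real systems typically learn the fusion rather than hand-tuning it:

```python
from dataclasses import dataclass

# Hypothetical per-channel distress estimates in [0, 1]. In a real system each
# would come from a separate model: a text emotion classifier, a machine-vision
# facial-expression recognizer, and a speech-prosody (tone) analyzer.
@dataclass
class Cues:
    text: float   # emotion inferred from the words themselves
    face: float   # estimate from facial expression
    voice: float  # estimate from tone, pitch, and prosody

def fuse(cues: Cues, weights=(0.4, 0.35, 0.25)) -> float:
    """Late fusion: a weighted average of channel-level estimates."""
    return weights[0] * cues.text + weights[1] * cues.face + weights[2] * cues.voice

# The words alone sound neutral, but face and voice signal distress:
score = fuse(Cues(text=0.2, face=0.8, voice=0.7))
print(f"estimated distress: {score:.2f}")  # ~0.54, flags what text alone would miss
```

The point of the example is the last line: a system reading only the text would see little distress, while the fused estimate catches what the face and voice reveal.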

Moving Beyond Reason

University researchers and companies are forging ahead with emotional support robots, service robots, and other tools for children, adults with dementia, and others. These include firms like Groove X, which developed a $2,825 mobile robot with 50 sensors that aims to be a home companion, and Embodied, whose Moxie social robot aids children with social, emotional, and developmental challenges.

Still, the path to more realistic empathic behavior in robots remains long and bumpy. Although much of the emphasis is on building better language models, there also is a growing focus on developing multimodal AI that can perceive the world more like the five senses humans possess. For now, filling in all the blanks, including the nuances of speech and body language across languages and cultures, remains somewhat of a “black box,” Malinowska says.

Vaccaro believes traditional methods of hard-coding behavior into computing systems have too many limitations, particularly in squishy areas like empathy. For instance, if a robot views a video of a person experiencing pain after falling and incorporates this into its learning, it might mimic that pained reaction in an attempt to connect with a person. Yet such a response will likely seem comical or absurd, because a human can deduce that there’s no real empathy involved.

Instead, Vaccaro’s research team has explored the idea of using machine learning to help systems get some “feeling” for what an actual person is going through, with an emphasis on vulnerability. While, for now, it’s impossible to actually have a robot feel the pain of a fall, the device can at least begin to decipher what is taking place and how a human interprets the feelings. “It’s a way to get AI systems to become objectively smarter and reduce the risk of a system behaving in an unacceptable or antisocial way,” Vaccaro notes.
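
One way to picture this direction, purely as a sketch and not a description of Vaccaro’s actual methods, is a small learned model that estimates a vulnerability score from multimodal features and uses it to modulate how the agent responds. The training data, thresholds, and the `choose_style` helper below are all hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are fused multimodal features
# [text_distress, face_distress, voice_distress]; labels mark whether a
# human annotator judged the person to be in a vulnerable state.
X = np.array([
    [0.1, 0.2, 0.1],
    [0.9, 0.8, 0.7],
    [0.3, 0.7, 0.8],
    [0.2, 0.1, 0.3],
    [0.8, 0.9, 0.9],
    [0.4, 0.3, 0.2],
])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def choose_style(features) -> str:
    """Gate the agent's behavior on estimated vulnerability rather than
    emitting one canned response for everyone."""
    p = model.predict_proba([features])[0, 1]
    if p > 0.7:
        return "gentle, supportive response; offer help"
    if p > 0.4:
        return "neutral, attentive response; ask a clarifying question"
    return "ordinary conversational response"

print(choose_style([0.7, 0.8, 0.6]))  # likely the gentle, supportive branch
```

The design choice here mirrors the quote above: the model does not “feel” anything, but grounding behavior in a learned estimate of the person’s state reduces the risk of a tone-deaf or antisocial response.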

Lacking Words

Yet another challenge is addressing an over-dependence on emotionally supportive systems. For instance, a robot that provides support for dementia patients could offer companionship as well as cognitive stimulation, says Laurel Riek, a professor of computer science and engineering at the University of California, San Diego, and director of the university’s Healthcare Robotics Lab.

However, “People can become very attached to robots, particularly when they provide social or therapeutic support,” Riek explains. The same robot could exacerbate a sense of social isolation and introduce safety and autonomy risks. Riek promotes the concept of offramps for humans. “All interactions with a robot will eventually end, so it’s important to ensure users are part of the process of creating an exit plan.”

Other concerns exist. Riek says these include bad actors tapping empathy-based approaches to deceive and manipulate humans, the displacement of healthcare workers and others, and data privacy concerns, particularly when robotic devices are used to forge human connections in healthcare applications.

Nevertheless, the concept of artificial empathy in machines undoubtedly will advance. Concludes Malinowska: “It will take a long time before robotic systems and AI reach parity with humans in regard to empathy. But we are continuing to gain understanding and make progress.”

Samuel Greengard is an author and journalist based in West Linn, OR, USA.
