On screen, the virtual character sits in a comfortable purple chair. She wears plain pants, a turquoise shirt, and a slim jacket with the sleeves rolled up past her elbows. Her short dark hair is swept to one side and her ethnicity is intentionally ambiguous, according to her developers, a team of researchers at the University of Southern California (USC) Institute for Creative Technologies. Some of the people who have interacted with her assume she is Asian; others perceive a completely different ethnicity. “People have come up and said that they’re so thankful we paired them with someone of their race because it helped them connect,” recalls Gale Lucas, a research assistant professor at USC.
The platform, SimSensei, is designed for one-on-one counseling sessions, and uses visual and audio feedback to tailor its responses. In one study, veterans who took part in counseling sessions with SimSensei shared personal and mental health concerns they would have withheld from actual human therapists. The system is designed to encourage this kind of open interaction, engaging in active listening by offering affirming or comforting responses, or noting when the subject pauses or hesitates and asking why. Human therapists carry out these techniques intuitively, yet Lucas and her colleagues found the participants were still more open with the virtual platform.
Northeastern University computer scientist Timothy Bickmore and his team have found similarly surprising connections between people and virtual agents across numerous studies. Typically, Bickmore and his team will simulate a counseling or information-sharing session between a patient and a healthcare professional, then measure the effectiveness of virtual agents against their human counterparts. “We try to simulate face-to-face counseling,” Bickmore explains. “We have found over the years that many disadvantaged groups prefer and do much better with agents and robots compared to those with high-level tech literacy. They can get the information better. The people don’t feel they’re being talked down to.”
As robots become an increasingly present and powerful force in our lives, from healthcare to home maintenance to the workplace, researchers are hard at work exploring different ways to strengthen the bonds between people and both virtual and physical agents. Some of the lessons learned in developing virtual platforms apply to embodied robots; in other cases, the rules appear to be different. What has become clear, researchers say, is that there is no simple recipe for developing likable robots.
The Cult of Personality
Before Bickmore and his team develop a new virtual agent for a specific experiment, they first study the nurses, general practitioners, or other professionals who typically conduct the session they are trying to simulate, then base their agents on these individuals. All of the agent’s responses are approved by health professionals in advance. Open-ended natural language exchanges would be too risky in a healthcare setting, according to Bickmore. Instead, participants in the studies typically interact with the virtual agent through an iPad or some other large touchscreen device, and the conversation follows a narrative tree. The virtual agent asks a question or delivers a spoken prompt. The participant selects a response from a multiple-choice list. The conversation moves on.
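As a rough illustration, such a scripted narrative tree can be modeled as a graph of pre-approved prompts, each with a fixed menu of responses that point to the next node. The Python sketch below is purely hypothetical; the node names and wording are invented here, not drawn from any actual Bickmore-lab system.

```python
# Hypothetical sketch of a scripted dialogue tree: every prompt and
# every answer is pre-approved, and the participant only ever picks
# from a fixed list of responses. Node names and wording are invented.

DIALOGUE_TREE = {
    "greeting": {
        "prompt": "Hello. How have you been feeling since our last session?",
        "choices": [
            ("Pretty good, actually.", "followup_good"),
            ("Not so great.", "followup_bad"),
        ],
    },
    "followup_good": {
        "prompt": "I'm glad to hear that. Have you kept up with your exercise plan?",
        "choices": [("Yes", "wrapup"), ("No", "wrapup")],
    },
    "followup_bad": {
        "prompt": "I'm sorry to hear that. Would you like to talk about it?",
        "choices": [("Yes", "wrapup"), ("Not right now", "wrapup")],
    },
    "wrapup": {"prompt": "Thank you for sharing. See you next time.", "choices": []},
}

def run_session(tree, start="greeting"):
    node = tree[start]
    while node["choices"]:
        print(node["prompt"])
        for i, (text, _) in enumerate(node["choices"], 1):
            print(f"  {i}. {text}")
        pick = int(input("> ")) - 1            # participant taps a choice
        node = tree[node["choices"][pick][1]]  # follow the edge they chose
    print(node["prompt"])
```

Because every path through the tree is authored and vetted in advance, the agent can never produce an unapproved response, which is the safety property the multiple-choice design buys in a healthcare setting.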
Figure. “And how did that make you feel?” is a question the SimSensei virtual therapist, shown above, might ask. Image courtesy of USC Institute for Creative Technologies.
SimSensei, by contrast, monitors pauses in conversation and tracks changes in tone, gaze aversion, and other social cues, then processes all of these indicators and autonomously determines the appropriate response. That autonomy makes the interaction risky, Lucas explains. “What if the system told you ‘that’s great’ when you told it your father was dying? That would completely ruin the conversation,” she notes. “So we are really careful. Only if the system is really certain that it understands correctly are those kinds of responses used. It does not use them if there is too much uncertainty in the algorithm.”
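In engineering terms, this caution amounts to gating affect-specific responses behind a confidence threshold. The following Python is a toy sketch of that idea; the labels, threshold value, and fallback phrases are assumptions made for illustration, not details of SimSensei itself.

```python
# Toy sketch of confidence-gated response selection, in the spirit of
# the gating Lucas describes. Labels, threshold, and phrasing are all
# invented for illustration; SimSensei's internals are not public here.

SAFE_BACKCHANNELS = ["I see.", "Please, go on."]

def choose_response(affect_label: str, confidence: float,
                    threshold: float = 0.9) -> str:
    """Return an affect-specific response only when the perception
    pipeline is highly confident; otherwise fall back to a neutral
    backchannel that cannot clash with the speaker's emotional state."""
    specific = {
        "distress": "I'm sorry to hear that. That sounds really hard.",
        "positive": "That's great to hear.",
    }
    if confidence >= threshold and affect_label in specific:
        return specific[affect_label]
    return SAFE_BACKCHANNELS[0]  # neutral, low-risk default

# choose_response("distress", 0.95) -> "I'm sorry to hear that. ..."
# choose_response("positive", 0.60) -> "I see."  (too uncertain to risk it)
```

The design choice is asymmetric: a bland backchannel costs little, while a mismatched emotional response can, as Lucas puts it, completely ruin the conversation.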
When the goal is to have someone interact with a virtual agent over a long period of time, researchers say it is helpful to build in additional layers of interaction, such as exchanging social pleasantries or remembering basic facts about the person, then recalling those in a future exchange.
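A minimal sketch of that remember-and-recall layer might look like the following Python; the storage format, field names, and greeting text are invented for illustration.

```python
# Illustrative sketch of a "remember and recall" layer: store small
# facts the person volunteers, then reuse one to open the next session.
# The file format, keys, and phrasing are assumptions, not a real API.

import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")

def _load() -> dict:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(user_id: str, key: str, value: str) -> None:
    memory = _load()
    memory.setdefault(user_id, {})[key] = value
    MEMORY_FILE.write_text(json.dumps(memory))

def greet(user_id: str) -> str:
    facts = _load().get(user_id, {})
    if "pet_name" in facts:
        return f"Welcome back! How is {facts['pet_name']} doing?"
    return "Welcome back!"

# remember("p01", "pet_name", "Biscuit")
# greet("p01")  -> "Welcome back! How is Biscuit doing?"
```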
Bickmore and his colleagues have even experimented with giving their agents biographical details to share, such as where the agent grew up or where it has traveled. “People want to know more about who these agents are,” he says. “Even though they know these stories are fake, the stories increase engagement. There’s nothing that helps people perceive an entity as robotic more than if it says the same thing over and over.”
Stories can also help people establish bonds with physical robots, not just virtual ones. Kate Darling, a research specialist at the Massachusetts Institute of Technology (MIT) Media Lab, and then-MIT student Palash Nandy set up an experiment in which participants were introduced to a simple toy robot, the insect-like Hexbug. In one case, individuals in the study were shown the robot, then given a mallet and told to strike the toy. Other participants were presented with a slightly different scenario: the robot was accompanied by a snippet of text that provided a story about the Hexbug and its recent behavior in the lab. Then these individuals, too, were asked to hit the toy. The results were clear: participants hesitated longer, and were less willing to harm the Hexbug, when it had a backstory.
Physical vs. Virtual
While there are some similarities between what makes a physical robot and a virtual one likable, the rules often vary. Embodied robots that appear too human-like can make people feel uneasy, an effect known as the uncanny valley. Yet this does not always apply to virtual agents: SimSensei’s avatar is distinctly human-like, and people remain open and willing to share with the platform.
Bilge Mutlu, a computer scientist at the University of Wisconsin-Madison, has been exploring these differences in what makes physical and virtual agents appealing.
When two people converse, certain cues, such as breaking eye contact, tend to promote intimacy. So Mutlu conducted an experiment to see whether this would hold true for virtual and embodied robots. In one case, it worked; people disclosed more information when the virtual human in his study broke eye contact during an exchange. But when the physical robot used the same technique, the effect was completely different: people were put off. “They thought the physical robot had intent,” Mutlu says. “They thought it was this intentional, thoughtful being, and they didn’t want to disclose as much.”
Mutlu suggests people might be hardwired to respond differently to virtual and physical entities. Interacting with a virtual robot, in his view, might be more like theater; people experience it almost as they would an interactive play. They assume there is a creator and that the interaction has been designed. Meeting with an embodied robot might be closer to an encounter with an unfamiliar animal: the creature has the ability to violate your space, and you don’t know initially whether it is friendly or dangerous.
“With a virtual application, you’re the one who initiates the interaction,” Mutlu says. “You might be willing to engage and even be vulnerable, but you can walk away and be done. With a physical robot, that’s not clear. There’s you and the robot, and it can cross the divide at any time. You think it’s more of an independent agent, so you are going to have less trust.”
Building Bonds
So how do designers make these platforms more appealing and less threatening?
The simplest route to establishing a bond might be functionality; people like robots when they are useful. Matthias Scheutz, director of the Human-Robot Interaction Laboratory at Tufts University, cites Roomba vacuum cleaners as evidence. These autonomous machines are not particularly cute. They don’t have large eyes or make appealing noises, yet owners become attached to them, because they work. “People are so grateful and think the robot works so hard that they actually clean for the robot,” Scheutz notes. “That’s a big danger. I don’t want people to feel gratitude toward the machine. You’re not thankful to your microwave for heating the meal: that’s what it’s there for. But a robot is perceived as something with goals, and intentions, and an inner life.”
The enormous variety of human-robot interactions being studied today, and the differing results they produce, have made it clear that what makes a robot likable varies from case to case. But another emerging theme is that the field is not just about understanding virtual or physical platforms.
“We put these systems in front of people and have people respond to them, and in those responses we see such interesting things,” says Mutlu. “We’re understanding more about how to build these systems, but we’re also learning so much more about people.”
Further Reading

Bickmore, T., and Picard, R.
Establishing and Maintaining Long-Term Human-Computer Relationships, ACM Transactions on Computer-Human Interaction (ToCHI) 12, 2 (2005), 293–327.

Deng, E., Mutlu, B., and Mataric, M.
Embodiment in Socially Interactive Robots, Foundations and Trends in Robotics 7, 4 (2019), 251–356.

Lucas, G.M., Gratch, J., King, A., and Morency, L.-P.
It’s Only a Computer: Virtual Humans Increase Willingness to Disclose, Computers in Human Behavior 37 (August 2014), 94–100.

Darling, K., Nandy, P., and Breazeal, C.
Empathic Concern and the Effect of Stories in Human-Robot Interaction, Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015.

Arnold, T., and Scheutz, M.
Observing Robot Touch in Context: How Does Touch and Attitude Affect Perception of a Robot’s Social Qualities?, Proceedings of the 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2018.