In conversation, Mitsuku admits she does not know whether her name has any meaning; it is simply what her father called her. Actually, she does not really have a father. She has a Mousebreaker, which is technically not a person either, but a team of programmers who like beer and curry and share a fear of Daleks (the evil alien robots from Doctor Who).
Mitsuku is quick-witted, occasionally confusing, and strangely engaging. She is also a chatbot, built from the A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) platform originally developed by Richard Wallace in 1995. She conducts hundreds of thousands of conversations daily, according to Lauren Kunze, principal of Pandorabots, the Oakland, CA-based company behind the technology. "She doesn't really do anything," Kunze says. "She's not designed to assist you. She can tell you the weather or perform an Internet search, but she's really just there to talk to you, and she's wildly popular with teens. People say, 'I love you' and 'you're my best friend.'"
The appeal is not accidental. The designers of chatbots like Mitsuku and the engineers of physical social robots have made significant advances in their understanding of how to build more engaging machines. Yet there are still many challenges, one of which is the unpredictability of humans. "We just don't understand how people are going to react to physical or software robots," says University of Southern California computer scientist Yolanda Gil, chair of SIGAI, ACM's Special Interest Group on Artificial Intelligence. "This is one kind of technology where people continue to surprise us."
While there are no absolute guidelines for building effective social robots or engaging chatbots, a few common themes have emerged.
One frequently cited theory in social robotics is the Uncanny Valley, first described by Japanese roboticist Masahiro Mori in 1970. The theory holds that there is a risk in building machines that look too human: instead of attracting people, realistic androids can repel them because of their "uncanny" resemblance to real humans. The reasons for the aversion are varied. Researchers have found evidence that highly capable androids bother people because they represent a threat to human uniqueness, or that on a subconscious level, they actually remind us of corpses.
Ultra-realistic humanoids also generate higher social expectations. "When you have a human-like appearance, people expect a matching level of behavioral realism," says roboticist Karl F. MacDorman of Indiana University. In 2005, for example, MacDorman was studying in the lab of Japanese roboticist Hiroshi Ishiguro when the group tested a socially interactive android at a conference. At first, background noise impeded the machine's speech recognition software, causing a delay in the robot's responses until the scientists added more processors. That initial, inhuman delay clashed with the lifelike appearance, and the effect unsettled attendees.
While robots that appear too human may not be ideal for social interaction, there is also a downside to being too much of a machine. Ilya Gelfenbeyn, founder of Api.ai, a platform that allows companies to build customized chatbots, says his software can process text and generate replies in just 50 milliseconds. Previously, Gelfenbeyn developed a Siri-like conversational app that was slower and occasionally more difficult to converse with because it needed more time to process speech. The speed and efficiency of his chatbots seemed to be a clear step forward.
To Gelfenbeyn's customers, though, the pace was problematic. When humans message each other, we pause between replies; his chatbots, on the other hand, were responding instantaneously. "You're not used to getting answers immediately," he says, "so one of the requests we've gotten is to add delays; otherwise it feels unnatural."
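The fix Gelfenbeyn describes is simple to sketch: pause for a length of time proportional to the reply before sending it, as a human typist would. The pacing constants below are illustrative assumptions, not values from api.ai or any other platform.

```python
import time

# Rough "typing speed" for a simulated human agent. These constants
# are illustrative assumptions, not values from any real chatbot platform.
CHARS_PER_SECOND = 30.0
MIN_DELAY = 0.5   # even short replies pause briefly
MAX_DELAY = 3.0   # cap so long replies do not stall the conversation

def humanlike_delay(reply: str) -> float:
    """Delay in seconds, roughly proportional to reply length."""
    return min(MAX_DELAY, max(MIN_DELAY, len(reply) / CHARS_PER_SECOND))

def send_with_delay(reply: str, send=print) -> None:
    """Wait a human-like interval, then deliver the reply."""
    time.sleep(humanlike_delay(reply))
    send(reply)
```

The clamp matters: without the cap, a long reply would feel like the bot had frozen; without the floor, short replies would arrive with the instantaneous snap users found unnatural.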
The particular choices that are made when designing an effective social robot also depend on its function, according to Cory Kidd, CEO of Catalia Health. Kidd spearheaded the creation of Mabu, Catalia's socially interactive robot, which will act as a kind of in-home healthcare coach, encouraging patients to follow their physician-prescribed medication plans. The first Mabu units will go into patient homes this year, and the robot's goal is to keep these individuals out of the hospital.
Mabu does not move or manipulate objects; the small yellow robot consists of a torso, a head with facial features that vaguely resemble Casper the Friendly Ghost, and an interactive tablet. When Kidd and his colleagues were designing Mabu, they knew they needed something that could engage people, build trust, and move into a home in a low-impact way. The eyes were among their first considerations. They tested variations with large eyes reminiscent of anime characters, and others that resembled the digital cast of a Pixar movie. In the end they settled on something in between: attractive, but not so cartoonish that Mabu looks like a doll. The eyes have to be right, Kidd explains, because establishing eye contact is critical. "Psychologically, that's really key to helping draw someone's attention and draw them into a conversation," he says.
Size was another consideration. Too large and the robot would be imposing and potentially threatening, but if it were too small, then people might not be willing to interact with it. Kidd says they also did not want to veer too far from the humanoid form and develop a pet-like machine. "If you build something that looks like a dog or a cat, there's a natural inclination to interact," he says. "But if you're doing what we're doing, building a healthcare coach, well, I'm not going to trust my dog for health advice."
Mabu and other social robots demonstrate an engaging machine does not necessarily need to be humanoid in its appearance or behavior, but an engaging personality is critical. Kidd and his colleagues hired a former Hollywood screenwriter to construct Mabu's backstory. They do not share the story publicly, so it was not some cute public relations play; instead, the purpose was to help them define the robot's responses and reactions, and thereby deepen the interaction with the patient. "If you're going to have this in a patient's home for years, it needs to be a consistent character," Kidd says. "It's part of building up trust with a person. You're giving them something that's believable and credible."
The same holds true for chatbots, according to Tim Delhaes of Inbound Labs, a marketing agency that creates bots for its customers using the api.ai platform. Often, when a company takes on a new client, staff have to enter the same basic information into multiple project management, sales, and marketing platforms. The Inbound Labs chatbot presents one timesaving interface for all that brainless data-entry work. Through a series of questions and answers, the chatbot gathers the information it needs, then updates the different platforms independently. Yet Delhaes says the interaction still has to be enjoyable. "The more you make the bot participate in a natural way with humans," says Delhaes, "the more likely people are going to use it or enjoy using it."
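The collect-once, update-everywhere pattern Delhaes describes can be sketched in a few lines. Everything here — the question slots and the stand-in platform callbacks — is a hypothetical illustration, not Inbound Labs' actual integration.

```python
# Hypothetical sketch of the collect-once, update-everywhere chatbot
# pattern. Slot names and platform callbacks are illustrative assumptions.

QUESTIONS = {
    "client_name": "What is the client's name?",
    "contact_email": "What email should we use?",
    "project": "What is the project called?",
}

def gather_answers(ask) -> dict:
    """Ask each question in turn and collect the user's answers."""
    return {slot: ask(prompt) for slot, prompt in QUESTIONS.items()}

def fan_out(answers: dict, platforms) -> None:
    """Push the same answers to every registered platform updater."""
    for update in platforms:
        update(answers)

# Usage with stand-in "platforms" that just record what they receive:
records = []
answers = gather_answers(lambda prompt: f"<answer to: {prompt}>")
fan_out(answers, [records.append, records.append])
```

In a real deployment each callback would be an API client for one platform (CRM, project tracker, and so on); the point is that the conversational front end asks each question only once.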
The api.ai platform allowed him to design his own chatbot, and Delhaes based his character on Marvin, the morose robot of The Hitchhiker's Guide to the Galaxy. Then he wrote responses appropriate to that character. So, when a user makes a request, Marvin might answer, "Oh, no, what do you want now?" or simply "Do it yourself." The robot does perform the task eventually, but Delhaes believes its human-like reluctance is part of its appeal.
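The effect is easy to approximate: complete the task, but wrap the confirmation in a persona-flavored grumble chosen at random. This is a toy sketch in the spirit of Delhaes's Marvin; the third grumble is invented, and none of this is his actual script.

```python
import random

# Grumbles in the spirit of Delhaes's Marvin bot. The first two are
# quoted in the article; the third is an invented example.
GRUMBLES = [
    "Oh, no, what do you want now?",
    "Do it yourself.",
    "Here I am, brain the size of a planet, filing your data.",
]

def marvin_reply(task_result: str, rng=random) -> str:
    """Complain first, then deliver the completed task's result."""
    return f"{rng.choice(GRUMBLES)} ... {task_result}"
```

The key design point from the article survives even in this sketch: the bot always performs the task; only the framing is reluctant.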
At the same time, he notes he is not trying to trick people into believing they are interacting with a human. "It's obviously fake," he says. "People know it's a robot."
That kind of clarity is important, according to Kunze of Pandorabots. She argues chatbots should be upfront about their machine status, as this helps build trust. When Mitsuku churns out a confusing response, the bot reminds the user that "she" is a piece of software, and suggests a few ways to help the software learn and improve its responses.
"All our data indicates that engagement is much, much higher with chatbots that are human-like," says Kunze. "If you're talking with something every day to get things done, you should enjoy who or what you're talking with."
Still, the behavior of the human in the equation is difficult to predict. Even the most carefully designed robots and chatbots will not be appealing to everybody, according to Maartje de Graaf, a social scientist at the University of Twente in the Netherlands. She recently released the results of a study measuring how people interacted with a rabbit-like, home-based social robot called Karotz. The majority of the 102 study participants talked to the robot, and some gave it a name, but others reported an uncomfortable, uncanny feeling when the robot initiated the conversation, and chose to reduce their social interactions.
Kidd says the wide range of patient personalities Mabu will encounter has been a major consideration from the start. The first conversation the robot has with a human will be extremely important in terms of initiating a bond, setting expectations, establishing trust, and more. "We've created a lot of initial conversations," he says, "but for us it depends a lot on the personality of the patient, and how we adapt." Mabu will analyze the person's tone, the content of their response, and even their facial expressions to gauge their reaction and generate an appropriate reply.
In some ways, the unpredictability of human responses to these machines should be expected, since they are so fundamentally new and unfamiliar, according to social psychologist Maria Paola Paladino of the University of Trento, in Italy. "They are not human, but they're not exactly machines," says Paladino. "They are a different entity."
Towards Artificial Empathy. International Journal of Social Robotics 6, 1 (February 2015).

MacDorman, K.F., and Entezari, S. Individual differences predict sensitivity to the uncanny valley. Interaction Studies 16, 2 (2015), 141–172.

Kidd, C.D., and Breazeal, C. Robots at Home: Understanding Long-Term Human-Robot Interaction. 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems.

Kanda, T., and Ishiguro, H. Human-Robot Interaction in Social Robotics. CRC Press, 2012.

Mabu: A Personal Healthcare Companion: https://vimeo.com/138783051
©2016 ACM 0001-0782/16/09