In 1970, robotics expert Masahiro Mori first described the effect of the “uncanny valley,” a concept that has had a massive impact on the field of robotics. The uncanny valley, or UV, effect, describes the positive and negative responses that human beings exhibit when they see human-like objects, specifically robots.
The UV effect theorizes that our empathy toward a robot increases the more it looks and moves like a human. At some point, however, the robot or avatar becomes too lifelike while remaining subtly unfamiliar. This confuses the brain’s visual processing systems, and our sentiment about the robot plummets into negative emotional territory.
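Mori’s account is often drawn as a curve of affinity against human likeness, rising steadily, then dipping sharply just short of full realism before recovering. A minimal sketch of that shape (the breakpoints and slopes here are notional, chosen only to illustrate the dip, not taken from Mori’s paper or any dataset):

```python
def affinity(likeness: float) -> float:
    """Notional uncanny-valley curve: affinity rises with human
    likeness, drops into negative territory near (but short of) full
    realism, then climbs back out. Purely illustrative numbers."""
    if likeness < 0.7:
        # Familiar-but-clearly-mechanical region: more likeness, more affinity.
        return likeness
    if likeness < 0.9:
        # The "valley": almost human, but perceptibly off.
        return 0.7 - 4.0 * (likeness - 0.7)
    # Climbing out toward full, convincing realism.
    return -0.1 + 9.0 * (likeness - 0.9)

# Affinity peaks before the valley, turns negative inside it,
# and recovers as likeness approaches 1.0.
print({x: round(affinity(x), 2) for x in (0.5, 0.7, 0.9, 1.0)})
# → {0.5: 0.5, 0.7: 0.7, 0.9: -0.1, 1.0: 0.8}
```

The non-monotonic dip is the whole phenomenon: a 90%-human robot can score worse than a 50%-human one.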
Yet where the uncanny valley really has an impact is on how humans engage with robots today, an effect that research suggests shapes how we perceive human-like automatons.
In a 2016 research paper in Cognition, Maya Mathur and David Reichling described their study of human reactions to robot faces and digitally composed faces. They found the uncanny valley effect present across those reactions, and that it even influenced whether humans found the robots and digital avatars trustworthy.
“How the uncanny valley has already impacted the design and direction of robots is clear; it has slowed progress,” says Karl MacDorman, a professor of human-computer interaction at Indiana University–Purdue University Indianapolis (IUPUI). “The uncanny valley has operated as a kind of dogma to keep robot designers from exploring a high degree of human likeness in human–robot interaction.”
To MacDorman and others, the uncanny valley must be dealt with in order to accelerate the adoption of robots in social settings.
More Human, More Problems
For clues as to why, consider a 2010 study in which researchers Christine Looser and Thalia Wheatley, then both of Dartmouth College, evaluated human responses to a range of simulated faces, spanning from fully doll-like to fully human-like. The researchers found participants stopped viewing a face as doll-like and began considering it human only when it was 65% or more human-like.
Companies that develop robots now consider findings like this and take active steps to stop the UV effect from impacting how the market receives their technology. One way they do that is by sidestepping the uncanny valley entirely, says Alex Diel, a researcher at Cardiff University’s School of Psychology who studies the uncanny valley effect.
“Many companies avoid the uncanny valley altogether by using a mechanical, rather than a human-like, appearance and motion,” says Diel. That means companies intentionally remove human-like features, like realistic faces or eyes, from robots—or engineer their movements to be clearly non-human.
One example of this approach is the Tesla Bot, a concept robot unveiled by the electric car manufacturer. While humanoid, the robot has been designed without a face, which ensures the human brain’s facial processing systems will not perceive it as a deviant version of a human face, says Diel.
Another way companies mitigate the effect of the uncanny valley is by designing robots to be cartoon-like, which helps them appear humanlike and appealing, without becoming too realistic. Diel points to Pepper, a congenial-looking robot manufactured by SoftBank Robotics, as a product that takes this route.
“Cuteness can’t be overrated,” says Sarah Weigelt, a neuropsychologist researching neural bases of visual perception at the Department of Rehabilitation Sciences of TU Dortmund University in Germany. “If something is cute, you do not fear it and want to interact with it.”
If companies can’t make a robot cute, they’ll often make it obvious in some other way that the robot is not human. Some companies do this by changing skin tones to non-human colors, or by leaving mechanical parts of a robot’s body intentionally and clearly exposed, Weigelt says. This averts any confusion that the strange object could be human, sidestepping the UV effect.
While companies work hard to avoid falling into the valley, sometimes they try to pass through the valley and climb out the other side by making robots indistinguishable from humans. However, this presents its own set of problems, says MacDorman.
“As a robot’s appearance becomes more human, people expect more from the robot,” he says. “They expect interaction closer to a human level of performance.” If the robot cannot deliver that level of performance, MacDorman says, it could be harmful to the company’s reputation and revenue.
The uncanny valley does not just hurt a company’s market performance; it can also impede funding for robotics companies just getting started. MacDorman has worked in robotics labs in Japan, where robots are widely accepted by society and the government. In fact, robots are integral to Japanese society, sometimes acting as caregivers for an aging and shrinking population.
Because Japanese society is so accepting of robots, the uncanny valley becomes even more dangerous. MacDorman says anything that harms the public’s perception of robots is anathema. Government agencies are reluctant to fund projects that stray too close to the uncanny valley.
Attempting to Cross the Valley
However, it’s not all doom and gloom. Some use cases for robots don’t need to worry about the uncanny valley.
“Companies that consider the uncanny valley are likely to be companies involved in the production of social robots,” says Conor McGinn, CEO of Akara Robotics, which makes cleaning robots for frontline workers at places like hospitals.
“Robotics companies that develop platforms for more utilitarian tasks, like autonomous food delivery or factory logistics, are much less likely to consider it.”
The data would seem to back up that idea. Robot orders in North America were up 67% in Q2 2021 versus Q2 2020, according to the Association for Advancing Automation (A3). More than half of those orders came from outside the typically robot-heavy automotive sector.
There’s good reason to believe more human-like features can actually make social robots more effective. “When people anthropomorphize a robot, they rationalize its behavior in human terms using social cues given off by it,” says McGinn. “This tendency can be leveraged by robot designers to make it easier for people to understand the robot’s behavior.”
He gives the example of a company that has developed a robot receptionist. Adjustable facial expressions, which might at times dip into the uncanny valley, could be useful to signal to people approaching the receptionist desk that they have been recognized.
According to Hadas Kress-Gazit and colleagues writing in the September 2021 issue of Communications, it is very important for social robots to follow social norms. However, the authors note, “One major challenge is how to encode social norms and other behavior limitations as formal constraints” within a robotic system.
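The challenge of encoding social norms as formal constraints can be pictured, in miniature, as a set of predicates that every proposed robot action must satisfy before execution. The sketch below is hypothetical; the norm names, thresholds, and action fields are invented for illustration and are not the formalism from the Kress-Gazit paper:

```python
from dataclasses import dataclass

@dataclass
class Action:
    speed: float               # proposed travel speed, m/s (illustrative field)
    distance_to_human: float   # distance to nearest person, m (illustrative field)
    interrupting: bool         # would this action interrupt a conversation?

# Each social norm is encoded as a hard constraint: a predicate the
# action must satisfy. Norms and numeric thresholds are invented.
NORMS = {
    "keep personal space": lambda a: a.distance_to_human >= 0.5,
    "move slowly near people": lambda a: a.distance_to_human >= 2.0 or a.speed <= 0.3,
    "do not interrupt": lambda a: not a.interrupting,
}

def violated(action: Action) -> list[str]:
    """Return the names of all norm constraints the action violates."""
    return [name for name, ok in NORMS.items() if not ok(action)]

# A fast approach close to a person violates only the speed norm.
print(violated(Action(speed=1.0, distance_to_human=1.0, interrupting=False)))
# → ['move slowly near people']
```

Real systems face the harder versions of this problem the authors describe, such as norms that conflict, depend on context, or resist being reduced to fixed thresholds at all.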
Social robots clearly require some level of humanity—but will the uncanny valley always be lurking as a threat to social robotics companies?
It is quite possible we’ll creep right up to the edge of the valley, says McGinn. There is an argument to be made for developing social robots that are just lifelike enough to communicate fluently with humans and follow social norms, yet retain enough of the artificial to avoid the uncanny valley effect.
This is already happening with popular commercially available robots like Sony’s Aibo and SoftBank’s Nao and Pepper, says MacDorman. “They have the basic features desired in a social robot—a torso, arms, and a head with eyes and a mouth and the ability to express emotion,” he says. “This could be enough for many applications.”
Moreover, the cost savings of not building the most lifelike robot possible are significant. That raises the question of economics, one humanlike robots run into often.
“Realistic robots are good for specific environments where the cost is justified, such as patient simulators in medical schools,” MacDorman says. “The realism of the robot helps students train so when they work on real people, they feel like they’ve already done it—and their performance improves.”
That means we’ll see humanlike robots in areas where the costs are justified, regardless of the uncanny valley effect. Even then, we might still find reasons to not accept those robots, MacDorman says. For instance, a robot might one day be indistinguishable from a human being, but the user might still not accept it because it resembles an ex-partner with whom they recently had a breakup.
“The uncanny valley isn’t just about human likeness or life-likeness and acceptance, but the experiential quality of uncanniness, coldness, and the cognitive processes that cause a loss of empathy,” says MacDorman.
In other words, we should worry less about the uncanny valley and more about moving forward with robots that actually connect with humans.
Ultimately, that may happen simply with the passage of time. We are likely to just get used to robots with very life-like human features, says Diel, in the same way their designers get used to them, although he concedes that may take some time. Robots would need to become much more prevalent in our everyday life first.
“They’ll stop seeming like they deviate from the norm once they are part of the norm,” Diel says.
Weigelt agrees. She fully expects our relationship with lifelike robots to change over time as they become more commonplace.
“The uncanny valley effect will change in the future. What makes us shudder these days might not in the future.”
Robot Orders Increase 67% in Q2 2021 Over Same Period in 2020, Showing Return to Pre-Pandemic Demand for Automation, Automation.com, Oct. 19, 2021, https://bit.ly/3I2VNW8
Diel, A. et al., A Meta-analysis of the Uncanny Valley’s Independent and Dependent Variables, ACM Digital Library, Oct. 18, 2021, https://dl.acm.org/doi/10.1145/3470742
Hsu, J., Why “Uncanny Valley” Human Look-Alikes Put Us on Edge, Scientific American, Apr. 3, 2012, https://www.scientificamerican.com/article/why-uncanny-valley-human-look-alikes-put-us-on-edge/
Kress-Gazit, H. et al., Formalizing and Guaranteeing Human-Robot Interaction, Communications, September 2021, https://cacm.acm.org/magazines/2021/9/255045-formalizing-and-guaranteeing-human-robot-interaction/fulltext
Looser, C. and Wheatley, T., The Tipping Point of Animacy: How, When, and Where We Perceive Life in a Face, Psychological Science, Nov. 19, 2010, https://pubmed.ncbi.nlm.nih.gov/21097720
Mathur, M. and Reichling, D., Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley, Cognition, January 2016, https://bit.ly/3uZnjjO