
A Psychological Approach to Human-Automation Interaction

"Humans don't like to relinquish control" to automated systems, says Assistant Professor Nathan Tenhundfeld of The University of Alabama in Huntsville.

Credit: Michael Mercier / UAH

It's called the uncanny valley. Fans of the HBO show "Westworld" or viewers of the movie "Ex Machina" may already be familiar with the phenomenon. For those who are not, it's essentially the notion that humans are comfortable with robots that have humanoid features, but less comfortable with robots that look almost, but not exactly, like a human.

For Nathan Tenhundfeld, however, the uncanny valley is just one of many factors he must take into account while researching human-automation interaction as an assistant professor in the Department of Psychology at The University of Alabama in Huntsville.

"We're at a unique point with the development of the technology where automated systems or platforms are no longer a tool but a teammate that is incorporated into our day-to-day experiences," Tenhundfeld says. "So we're looking at commercial platforms that offer the same systems but in different forms to see whether a certain appearance or characteristic affects the user and in what way."

Take, for example, the recent push by the U.S. Department of Defense to incorporate automation into warfighting. As a concept, it makes sense: the more robots fight wars, the lower the cost in human life. But in practice, it's more complex. What should a warfighting robot look like? A person? A machine?

To answer these questions, Tenhundfeld has partnered with a colleague at the U.S. Air Force Academy, where he conducted research as a postdoctoral fellow, to use "a massive database of robots" to determine how various components might affect the perception of a robot's capabilities. "We want to know things like, does a robot with wheels or a track fit better with our expectation of what we should be sending to war versus a humanoid robot?" Tenhundfeld says. "And, does having a face on the robot affect whether we want to put it in harm's way?"

Even if there were easy answers — which there aren't — there's another equally important factor to consider beyond the robot's user interface: trust. For a robot to be effective, the user must trust the information it provides. To explain, Tenhundfeld points to research he conducted on the Tesla Model X, described in "Calibrating Trust in Automation Through Familiarity With the Autoparking Feature of a Tesla Model X," published in the Journal of Cognitive Engineering and Decision Making. Looking specifically at the car's autoparking feature, he and his team wanted to determine users' willingness to let the car complete its task as a function of their risk-taking preferences or confidence in their own abilities.

"The data suggest automated vehicles tend to be safer than humans, but humans don't like to relinquish control," he says with a laugh. "So we had this pattern where there were high intervention rates at first, but as they developed trust in the system — after it wasn't so novel and it started to meet their expectations — they began to trust it more and the intervention rates went down."

The flip side of that coin, however, is the potential for empathy in, or attachment to, a particular automated system in which users have developed trust. To illustrate this concept, Tenhundfeld recounts a case study of explosive-ordnance disposal teams that employ robots to safely blow up bombs. "When they have to send the robots back to get repaired, they have an issue when they're given a different robot," he says. "So they've placed this trust in a specific robot even though the intelligence/capability is the same across all of the robots."

And lest it start to sound like there is already more than enough for Tenhundfeld to factor in, there is also situational trust, which sits somewhere between trust and overtrust. In this scenario, users may develop a certain level of trust as a whole over time, but then realize they don't trust some aspects as much as others. "Say I have an automated system, or robot, providing intelligence in a mission-planning environment, and it screws that up," he says. "I might not trust it in a different environment, such as on the battlefield, even though it has a different physical embodiment for use in that environment, and may be distinctly capable on the battlefield."

In short, the increasingly digital nature of the world introduces a seemingly endless list of considerations for ensuring automated systems successfully meet human needs — all of which Tenhundfeld must take into account in the research he does in his Advanced Teaming, Technology, Automation, and Computing (ATTAC) Lab. It's a challenge that he and his fellow researchers have embraced. "Businesses are focused on being first to market with a product," he says. "We help them improve the product so that it works well for the user."

