Communications of the ACM

ACM TechNews

When It Comes to Robots, Reliability May Matter More Than Reasoning

[Image: human-robot handshake, illustration]

A study by the U.S. Army Research Laboratory and the University of Central Florida found that human confidence in a robot decreases after the robot makes a mistake, even when the robot's reasoning process is transparent.

The researchers explored human-agent teaming to determine how the transparency of agents such as robots, unmanned vehicles, or software agents affects human trust, task performance, workload, and perceptions of the agent. Subjects who observed a robot make a mistake downgraded its reliability, even when it made no subsequent mistakes.

Boosting agent transparency improved participants' trust in the robot, but only when the robot was collecting or filtering information.

"Understanding how the robot's behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members," says Julia Wright of the Army Research Laboratory.

From U.S. Army
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA
