Communications of the ACM

ACM TechNews

How to Make Robots That We Can Trust

Can we trust a robot that makes decisions with real-world consequences?

Having trustworthy, well-working systems is not enough: to enable trust, the design of autonomous systems must also meet other requirements, including the capacity to explain decisions and to offer recourse when things go wrong.


The design of trustworthy autonomous systems must entail consideration of the ability to explain decisions and recourse options when things go awry, writes University of Otago professor Michael Winikoff.

"To make a system trustable we need to identify the key prerequisites to trust," Winikoff says. "Then, we need to ensure that the system is designed to incorporate these features."

He says experimentation would be an ideal means of determining why people would or would not trust autonomous systems.

Winikoff notes that the first prerequisite for trustable systems is that they can explain, in comprehensible terms, how they arrived at their decisions, enabling people to understand the systems and to trust them even when their behavior is unpredictable or their decisions are unexpected. The second prerequisite is recourse: having ways to compensate for adverse decisions, so that imperfect systems can still be trusted.

Winikoff also says relevant human values, such as privacy, safety, and human autonomy, should be incorporated within the system's decision-making process.

From The Conversation


Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
