Communications of the ACM

ACM TechNews

Testing Trust in Autonomous Vehicles By Fooling Human Passengers


[Image: GPS navigator. Credit: Ziff-Davis]

Researchers have few options for studying how people react to being driven by autonomous cars. They can use simulations, but subjects know they are in no real danger, which makes their reactions unreliable. They can put subjects in real autonomous cars on public roads, but staging the specific situations needed to assess reactions and trust raises legal and safety problems. A team at Stanford University's Center for Design Research has devised a third option, the Real Road Autonomous Driving Simulation (RRADS), which lets participants believe they are riding in an autonomous car when they are not, without ever lying to them.

RRADS is based on a regular car, driven by a human researcher, with a partition that prevents the passenger or subject from seeing the driver. Although the subjects signed a consent form stating the vehicle would be operated by a licensed driver at all times, many assumed it was driving itself. "This provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways that a protocol that deliberately used misdirection could gain ecologically valid reactions from study participants," the researchers say.

From IEEE Spectrum
View Full Article

 

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
