The fatal collision of two trains on Washington, D.C., Metro's Red Line on June 22 may come to symbolize the core problem of automation: the relationship between humans and their automated control systems. "The better you make the automation, the more difficult it is to guard against these catastrophic failures in the future, because the automation becomes more and more powerful, and you rely on it more and more," says University of Wisconsin at Madison professor John D. Lee. The more reliable such systems become, the greater the likelihood that supervising humans will lose focus, making it increasingly probable that unanticipated variables will tangle up the algorithm and lead to disaster.
The University of Toronto's Greg Jamieson notes that many automated systems explicitly instruct human operators to disengage, as they are designed to remove human "interference." "The problem is when individuals start to overtrust, over-rely, or become complacent and put too much emphasis on the automation," he says.
Lee, Jamieson, and George Mason University psychologist Raja Parasuraman say there is growing agreement among experts that automated systems should be designed to augment the accuracy and performance of human operators rather than to replace them or encourage complacency. A number of studies show that operators can retain their alertness and skills through regular training exercises in which they switch from automated to manual control. Parasuraman has determined that "polite" feedback from a machine can enhance the machine-operator relationship and yield measurable safety improvements.
From The Washington Post
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA