Communications of the ACM

ACM Opinion

Should Oversight of Biased AI Be Left Up to People?


[Photo: headshot of researcher Ben Green]

"Assumptions about human oversight are playing a really critical role in justifying the use of these tools. If it doesn't work, then we're failing to get any of the protections that are seen as essential."

Ben Green is a postdoctoral scholar at the University of Michigan and an assistant professor at the Gerald R. Ford School of Public Policy.

Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own.

The problem, says Ben Green in an interview, is that these laws almost never stop to ask whether human beings are actually up to the job.

From Protocol
