
Communications of the ACM

ACM News

Using A.I. to Find Bias in A.I.


Parity CEO Liz O'Sullivan.

Liz O'Sullivan, chief executive of start-up Parity, said it had been a challenge to persuade some in the industry to be more concerned about bias in artificial intelligence.

Credit: Nathan Bajar/The New York Times

In 2018, Liz O'Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the Internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O'Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O'Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a "cruel game of Whac-a-Mole," she said.

From The New York Times
