
Communications of the ACM

ACM TechNews

How to Make AI Less Biased



The artificial intelligence world is making a strong push to root out bias in AI systems, but it faces some significant obstacles.

Credit: Keith A. Webb/iStock

Academic researchers and the technology industry are working to eliminate bias from artificial intelligence (AI) systems.

IBM's AI Fairness 360 and Google's What-If Tool are among the open source packages that can be used to audit models for biased results.

Most researchers say bigger, more representative training sets are the best way to address bias.

For instance, a spokeswoman for Apple said the company used a dataset of more than 2 billion faces to develop a more accurate facial recognition system to unlock its iPhones.

Another option is to rework machine learning algorithms to generate fairer results.

IBM's Watson OpenScale, for instance, allows lenders to flip gender while leaving all other variables unchanged, to determine whether that changes the prediction from "risk" to "no risk."

Said IBM's Seth Dobrin, "You're debiasing the model by changing its perspective on the data."
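The flip-the-attribute audit described above can be sketched in a few lines. This is a minimal illustration, not Watson OpenScale's actual implementation: the feature layout, weights, and helper names below are all hypothetical, standing in for whatever trained model a lender would actually audit.

```python
import math

# Hypothetical feature order: [income, debt_ratio, gender].
# A nonzero weight on the gender feature encodes the kind of
# bias this audit is designed to surface.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = -0.3

def predict_risk(features):
    """Return 'risk' or 'no risk' from a toy logistic scorer."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    p = 1.0 / (1.0 + math.exp(-z))
    return "no risk" if p >= 0.5 else "risk"

def flip_gender(features, gender_index=2):
    """Flip the binary gender field, leaving all else unchanged."""
    flipped = list(features)
    flipped[gender_index] = 1 - flipped[gender_index]
    return flipped

applicant = [0.9, 0.4, 0]
original = predict_risk(applicant)
counterfactual = predict_risk(flip_gender(applicant))
if original != counterfactual:
    print("gender flip changed the prediction: possible bias")
```

If the prediction changes when only the protected attribute changes, gender is influencing the outcome, which is exactly the signal such an audit looks for.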

From The Wall Street Journal
View Full Article - May Require Paid Subscription

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA

