
Communications of the ACM

ACM TechNews

Bias Test to Prevent Algorithms Discriminating Unfairly


Algorithms discriminate, too.

Researchers at the Alan Turing Institute in the U.K. are developing a framework to identify and eliminate algorithmic bias.

Credit: Frances Roberts/Alamy

A fair algorithm is one that makes the same decision about an individual regardless of demographic background.
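This criterion can be made concrete as a simple invariance check: vary only the demographic attribute of a record and see whether the decision changes. The sketch below is illustrative only; the function and field names are assumptions, not part of the Turing Institute's framework.

```python
# Illustrative check of the fairness criterion above: a decision rule is
# "fair" in this sense only if flipping an individual's demographic group,
# with everything else held fixed, never changes the outcome.
# The decision rule and field names here are hypothetical.

def decide(record):
    # A deliberately biased example rule: it consults the demographic field.
    return record["prior_incidents"] > 2 or record["group"] == "B"

def is_demographically_invariant(decide, record, groups):
    """Return True if the decision is identical across all demographic groups."""
    outcomes = set()
    for g in groups:
        variant = dict(record, group=g)  # same record, demographic field swapped
        outcomes.add(decide(variant))
    return len(outcomes) == 1

person = {"prior_incidents": 1, "group": "A"}
print(is_demographically_invariant(decide, person, ["A", "B"]))  # False: the rule is biased
```

A rule that ignores the demographic field entirely would pass this check for every record; a rule that consults it, as above, fails for at least some individuals.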

The researchers mapped out the variables in a dataset and tested how each might skew a decision-making process. They applied this method to 2014 stop-and-frisk data from the New York City Police Department, modeling the variables that influenced officers' decisions to stop someone.

The team analyzed the skin color and appearance of the people detained and found that police generally perceived African-American and Hispanic men as more criminal than white men, a pattern that could lead a machine-learning system to deduce that criminality is correlated with skin color.

The researchers think this method could be applied to other organizations that are required to keep their processes free from discrimination.

From New Scientist

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA