Communications of the ACM

ACM TechNews

AI Researchers Condemn Predictive Crime Software, Citing Racial Bias, Flawed Methods


Militarized police officers in action. Credit: Getty Images

A coalition of more than 1,000 researchers, academics, and experts in artificial intelligence has condemned soon-to-be-published research claiming that software can predict whether a person will become a criminal.

The opponents sent an open letter to the publisher Springer, asking that it reconsider publishing the controversial research.

Harrisburg University researchers Roozbeh Sadeghian and Jonathan W. Korn claim their facial recognition software can forecast whether a person will become a criminal, but the coalition expressed doubts about their findings, citing "unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years."

The letter from the coalition said, "The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups."

From TechCrunch

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA
