
Communications of the ACM

ACM Opinion

Reducing Racial Bias in AI Models for Clinical Use



Regulatory bodies and publishing standards must move away from relying solely on agreement with a clinical dataset for accuracy, and also require evidence of objective accuracy, even if it deviates from clinical data.

Supriya Kapur is affiliated with Mailman School of Public Health, Department of Sociomedical Sciences, Columbia University, New York, NY, USA.

Artificial intelligence (AI) models built for clinical tasks show clear patterns of discrimination against patients of color. This discrimination, however, may not surface as error during training or testing, because the data themselves are likely to contain errors, in the form of misdiagnoses or incorrect prognoses, for minority patients.

A Letter to the Editor contends that many calls to correct racial bias focus on traditional remedies: diversifying datasets to include more minority patients, or consulting social scientists when creating and evaluating datasets and model performance. While these actions may help to some extent, they fail to address the root cause, which is the incorrect clinical data itself. Researchers are expected to build models that closely emulate clinical data, and those models inadvertently inherit the racial bias embedded in that data.

From Nature Machine Intelligence


 
