Supriya Kapur is affiliated with Mailman School of Public Health, Department of Sociomedical Sciences, Columbia University, New York, NY, USA.
Artificial intelligence (AI) models built for clinical tasks show clear patterns of discrimination against patients of color. This discrimination, however, may not surface as an error during training or testing, because the data themselves are often error-ridden for minority patients, containing misdiagnoses and inaccurate prognoses.
A Letter to the Editor contends that many calls to correct racial bias focus on traditional remedies, such as diversifying datasets to include more minority patients or consulting social scientists when creating datasets and evaluating model performance. While these measures may help to some extent, they fail to address the root cause: the clinical data themselves are incorrect. Because researchers are expected to build models that closely emulate clinical data, those models inadvertently reproduce the racial bias embedded in them.
From Nature Machine Intelligence