Ethnic and other biases are increasingly recognized as a problem that plagues software algorithms and datasets.9,12 This is important because algorithms and digital platforms organize ever-greater areas of social, political, and economic life. Algorithms already sift through expanding datasets to provide credit ratings, serve personalized advertisements, match individuals on dating sites, flag unusual credit-card transactions, recommend news articles, determine mortgage qualification, predict the locations and perpetrators of future crimes, parse résumés, rank job candidates, assist in bail or probation proceedings, and perform a wide variety of other tasks; digital platforms, in turn, are built from algorithms executed in software. In performing these functions, as Lawrence Lessig observed, "code" functions like law in structuring human activity. Algorithms and online platforms are not neutral; they are built to frame and drive actions.8
Without proper mitigation, preexisting societal bias will be embedded in the algorithms that make or structure real-world decisions.
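To make this concrete, consider the following minimal sketch. The data and feature names are hypothetical and not drawn from this article's model; the point is only that a standard classifier trained on historically biased decisions will reproduce that bias for otherwise-identical individuals.

```python
# Minimal illustrative sketch (synthetic data, hypothetical feature names):
# a model trained on biased historical decisions absorbs and reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(50, 15, n)   # legitimate predictive feature
group = rng.integers(0, 2, n)    # protected attribute, irrelevant to actual risk

# Historical approvals were driven by income, but group 1 was penalized by past
# reviewers; this is the "preexisting societal bias" present in the training data.
past_approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, past_approved)

# Two applicants identical in every respect except group membership:
# the learned model assigns the group-1 applicant a lower approval probability.
print(model.predict_proba([[55, 0], [55, 1]])[:, 1])
```

The model never "decides" to discriminate; it simply learns the pattern encoded in its training labels, which is why it matters to locate where such bias enters the process.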
Algorithmic "machines" are built with specific hypotheses about the relationship between persons and things. As techniques such as machine learning are more generally deployed, concerns are becoming more acute. For engineers and policymakers alike, understanding how and where bias can occur in algorithmic processes can help address it. Our contribution is the introduction of a visual model (see the accompanying figure) that extends previous research to locate where bias may occur in an algorithmic process.6