We now trust many computer systems—hardware running deterministic, hard-coded software programs—to be much more reliable than they once were. And societally, we now have a better sense of how to hold those who develop and operate software accountable when their products fail. Nevertheless, we are currently in the midst of a new set of concerns about the trust we grant to algorithmic systems—computational routines that "learn" from real-world data—to accomplish important tasks in our daily lives. While developers of algorithmic systems rightfully strive to make their products more trustworthy, and there remains considerable room for improvement,1,12 "trust" is an insufficient frame for the relationship between these technologies and the impacts they produce. Given the properties of machine learning and the social, political, and legal structures of accountability in which such systems are enmeshed, there are currently unresolvable uncertainties about how these systems produce their results, and about who should be held accountable when those results cause harm to individuals, communities, or society as a whole.
This uncertainty about how, whether, and when algorithmic harms come to pass is never going away, at least not completely. So, we remain in need of mechanisms that address both whom to hold accountable and how to hold them accountable, rather than relying on developers to make their systems more trustworthy.