A study by the University of Michigan's Ben Green and Harvard University's Yiling Chen suggests pretrial risk assessment algorithms can shift the priorities judges apply when making pretrial decisions.
The algorithms use data on previous defendants' outcomes to forecast outcomes for a given arrestee, presented either as a numerical score or as a high-, medium-, or low-risk designation for failing to appear in court or being arrested again.
The researchers found that viewing the algorithms' predictions caused participants to weigh factors differently and to place greater priority on the risk of defendants failing to appear or being re-arrested, resulting in more inequitable decisions, since Black defendants were more likely to be rated high-risk and to receive harsher decisions than White defendants.
From Government Technology
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA