We live in a world where machines increasingly make decisions that impact people's lives. Nowhere is the intersection of algorithms and human thinking more consequential than in the criminal justice system, where software predicts the likelihood defendants will skip bail or commit other offenses.
These scores help determine who is set free and who goes to jail, and for how long. "There's a growing belief that we can help humans with difficult predictive tasks through data-driven algorithms," says Jens Ludwig, Edwin A. and Betty L. Bergman Distinguished Service Professor and director of the University of Chicago's Crime Lab.
The use of such tools raises vexing questions, however. There's ample evidence that judicial risk assessment software incorporates various biases, including racial prejudice. On the other hand, there's also evidence that the software can outperform humans in making crucial decisions about risk.
Says Joshua Simons, a graduate fellow at Harvard University, "Part of the problem is that both proprietary software and judges are black boxes."
Trials and Tribulations
Risk assessment software is supposed to strip away the human biases that lead to unequal application of laws. Nearly every U.S. state now uses some type of predictive system to handle bail and sentencing decisions. Yet the makers of many tools in this arena, including a widely used proprietary software program called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), refuse to reveal how they generate risk scores.
This is far more than an abstract concern. A 2016 ProPublica study of more than 10,000 individuals in Broward County, FL, found that COMPAS labeled black defendants risky at nearly twice the rate of white defendants. Even after ProPublica controlled for criminal history and recidivism, along with age and gender, black defendants were still 45% more likely to be flagged as likely to commit a future crime. ProPublica also pointed to specific cases in which arrests for essentially the same petty crime resulted in drastically different risk scores based on the alleged perpetrator's skin color.
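ProPublica released both its Broward County data and its methodology (its own analysis was done in R). A minimal Python sketch conveys the shape of that kind of check; the file name and column names below (race, sex, age, priors_count, two_year_recid, score_text) are assumptions based on the publicly released dataset, not code from the study itself.

```python
# Rough sketch of a ProPublica-style check: does race still predict a
# medium/high risk label after controlling for sex, age, prior record,
# and actual reoffending? Dataset and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compas-scores-two-years.csv")

# Binary outcome: defendant labeled anything other than "Low" risk.
df["high_risk"] = (df["score_text"] != "Low").astype(int)

# Logistic regression isolating race while holding the other factors fixed.
model = smf.logit(
    "high_risk ~ C(race) + C(sex) + age + priors_count + two_year_recid",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; a value well above 1 for a
# racial group means higher odds of being flagged, all else being equal.
print(np.exp(model.params).round(2))
```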
Not surprisingly, this has fueled a backlash. Says Jamie Lee Williams, staff attorney at the Electronic Frontier Foundation (EFF), "Pretrial risk assessment tools can replicate the same sort of outcomes as existing systems that rely on human judgment and make new, unexpected errors. If these tools are to be used, real limitations are needed to protect against due process violations and unintentional, unfair, biased, or discriminatory outcomes." In line with this, EFF has proposed restrictions on the use of scoring systems.
A 2019 Electronic Privacy Information Center (EPIC) position paper took things a step further. "Many criminal justice experts have denounced 'risk assessment' tools as opaque, unreliable, and unconstitutional," the paper noted.
Judgment Day
At the heart of the issue is a straightforward yet complex concept: while laws forbid discrimination, determining whether a judge or a machine has discriminated is a daunting task. Ludwig argues that machines merely expose a problem that originates with humans. "When algorithms are involved, proving discrimination will be easier—or at least it should be, and can be made to be," Ludwig and a group of researchers noted in a 2018 academic paper.
To better understand the relationship between humans and computers, Ludwig's group ran a policy simulation comparing how judges and machines performed on more than 750,000 cases in New York City from 2008 to 2013. When the computer decided a case, defendants were 25% less likely to run afoul of the law while awaiting trial. "It showed that the algorithm does a better job identifying who is truly high-risk," he says.
But the experiment pointed to other benefits. "The implication," Ludwig says, "is that by using machine predictions we could reduce jailing rates by more than 40% without any increase in skipped court appearances or other adverse outcomes like re-arrest. This would have the biggest beneficial impact on African Americans and Hispanics, who account for 90% of the jail population."
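The logic of such a simulation can be illustrated with a short, heavily simplified sketch: train a model on the outcomes of defendants whom judges actually released, then ask what would happen if the same share of defendants were detained but chosen by predicted risk. The file name, feature columns, and outcome column below are hypothetical, and the sketch ignores the selective-labels problem (outcomes are only observed for released defendants) that the actual research takes pains to address.

```python
# Heavily simplified judge-vs-algorithm policy simulation on hypothetical data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

cases = pd.read_csv("pretrial_cases.csv")            # hypothetical file
released = cases[cases["judge_released"] == 1]       # outcomes observed only here

features = ["age", "priors_count", "charge_severity", "prior_fta_count"]
X_train, X_test, y_train, y_test = train_test_split(
    released[features], released["failed_pretrial"],
    test_size=0.3, random_state=0,
)

# Predict each defendant's probability of failing to appear or being re-arrested.
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

# Detain the same share of defendants the judges detained, but pick them
# by predicted risk, and compare failure rates among those released.
detain_rate = 1 - cases["judge_released"].mean()
threshold = pd.Series(risk).quantile(1 - detain_rate)
simulated = y_test[risk <= threshold].mean()

print(f"Failure rate among defendants judges released: {y_test.mean():.3f}")
print(f"Simulated failure rate at the same detention rate: {simulated:.3f}")
```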
A Winning Score
The goal, Ludwig says, should be to design software that reduces societal risk while meting out justice with an even hand. He argues that the starting point is to make changes to the legal framework. "Current regulations were designed to deal with human bias. The way we'd detect and remediate algorithmic bias is different from humans. This means that in addition to developing new laws and regulations properly suited to the technology, algorithms must also be open to inspection and investigation." His group is currently partnering with New York City to produce open source risk assessment software.
The deeper issue, Simons concludes, is a need for society to examine why data constantly validates enormous inequality in the judicial system. "We need to ask, what are the social inequalities, the problems, the injustices, the civil rights concerns that cause machine learning to pick up these concerns?"
Samuel Greengard is an author and journalist based in West Linn, OR, USA.