Science and technology | Computers and criminal justice

Are programs better than people at predicting reoffending?

The short answer is that the two are about the same


IN AMERICA, computers have been used to assist bail and sentencing decisions for many years. Their proponents argue that the rigorous logic of an algorithm, trained with a vast amount of data, can make judgments about whether a convict will reoffend that are unclouded by human bias. Two researchers have now put one such program, COMPAS, to the test. According to their study, published in Science Advances, COMPAS did neither better nor worse than people with no special expertise.

Julia Dressel and Hany Farid of Dartmouth College in New Hampshire selected 1,000 defendants at random from a database of 7,214 people who were arrested in Broward County, Florida, between 2013 and 2014 and who had been subjected to COMPAS analysis. They split their sample into 20 groups of 50. For each defendant they created a short description that included sex, age and prior convictions, as well as the criminal charge faced.
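The sampling step is simple enough to sketch in a few lines of Python. Everything below is illustrative rather than the authors' own code: the records are synthetic stand-ins, and the column names and the wording of the descriptions are guesses at the paper's actual materials.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Broward County database; the real study used
# 7,214 actual arrest records. Column names here are illustrative guesses.
rng = np.random.default_rng(0)
broward = pd.DataFrame({
    "sex": rng.choice(["male", "female"], size=7214),
    "age": rng.integers(18, 70, size=7214),
    "priors": rng.poisson(2, size=7214),
    "charge": rng.choice(["burglary", "drug possession", "assault"], size=7214),
})

# Draw 1,000 defendants at random and split them into 20 groups of 50.
sample = broward.sample(n=1000, random_state=0).reset_index(drop=True)
groups = [sample.iloc[i * 50:(i + 1) * 50] for i in range(20)]

# One short description per defendant, along the lines the paper describes.
def describe(row):
    return (f"A {row.age}-year-old {row.sex} with {row.priors} prior "
            f"convictions, charged with {row.charge}.")

print(describe(groups[0].iloc[0]))
```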

They then turned to Amazon Mechanical Turk, a website which recruits volunteers to carry out small tasks in exchange for cash. They asked 400 such volunteers to predict, on the basis of the descriptions, whether a particular defendant would be arrested for another crime within two years of his arraignment (excluding any jail time he might have served)—a fact now known because of the passage of time. Each volunteer saw only one group of 50 people, and each group was seen by 20 volunteers. When Ms Dressel and Dr Farid crunched the numbers, they found that the volunteers correctly predicted whether someone had been rearrested 62.1% of the time. When the judgments of the 20 who examined a particular defendant’s case were pooled, this rose to 67%. COMPAS had scored 65.2%—essentially the same as the human volunteers.
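The pooling step is plain majority voting: a defendant is predicted to be rearrested if more than half of the 20 volunteers who saw his description said so. A minimal sketch of that arithmetic, with simulated judgments standing in for the real ones:

```python
import numpy as np

# Placeholder data for one group: a 20 x 50 array of yes/no predictions
# (20 volunteers, 50 defendants) plus the true rearrest outcomes.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=50)
# Simulate volunteers who are right about 62% of the time, as in the study.
correct = rng.random((20, 50)) < 0.62
votes = np.where(correct, truth, 1 - truth)

individual = (votes == truth).mean()           # average single-volunteer accuracy

# Pool the 20 judgments on each defendant by strict majority vote.
pooled = (votes.sum(axis=0) > 10).astype(int)
pooled_accuracy = (pooled == truth).mean()

print(f"individual: {individual:.3f}, pooled: {pooled_accuracy:.3f}")
```

If the volunteers' errors were truly independent, as in this simulation, pooling 20 judgments would lift accuracy well beyond the roughly five-point gain the study observed; real volunteers presumably make correlated mistakes, since they all read the same descriptions.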

To see whether mention of a person’s race (a thorny issue in the American criminal-justice system) would affect such judgments, Ms Dressel and Dr Farid recruited 400 more volunteers and repeated their experiment, this time adding each defendant’s race to the description. It made no difference. Participants identified those rearrested with 66.5% accuracy.

All this suggests that COMPAS, though not perfect, is indeed as good as human common sense at parsing pertinent facts to predict who will and will not come to the law’s attention again. That is encouraging. Whether it is good value, though, is a different question, for Ms Dressel and Dr Farid have devised an algorithm of their own that is as accurate as COMPAS in predicting rearrest when fed the Broward County data, but which uses only two inputs: the defendant’s age and number of prior convictions.
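A two-input model of that sort is easy to picture. Here is a minimal sketch, assuming logistic regression as the classifier (a natural choice, though the authors' exact model may differ) and fabricated data in place of the Broward County records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Fabricated data for illustration; the real exercise used Broward County
# records. The outcome is loosely tied to the two features on purpose.
rng = np.random.default_rng(2)
n = 1000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)
logit = 0.25 * priors - 0.03 * (age - 18)
rearrested = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train and score a classifier on just age and prior convictions.
X = np.column_stack([age, priors])
clf = LogisticRegression()
scores = cross_val_score(clf, X, rearrested, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

That a model this small can keep pace with a commercial tool suggests that, in these records at least, most of the predictive signal lies in age and prior record.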

As Tim Brennan, chief scientist at Equivant, which makes COMPAS, points out, the researchers’ algorithm, having been trained and tested on data from one and the same place, might prove less accurate if faced with records from elsewhere. But so long as the algorithm behind COMPAS itself remains proprietary, a detailed comparison of the virtues of the two is not possible.

This article appeared in the Science & technology section of the print edition under the headline "Algorithm’s dilemma"
