Algorithms that predict crime need more public scrutiny

By Yves Faguy, August 8, 2017

Photo by Lin Zhizhao on Unsplash


The predictive value of algorithms in criminal matters is obviously a controversial subject. Last year, the not-for-profit ProPublica newsroom published an investigative piece arguing that there is racial bias in a tool called COMPAS, used by courts in bail and sentencing decisions to predict the likelihood of defendants reoffending.

The case study found that black defendants are more likely to be incorrectly labeled high-risk and white defendants low-risk, in large part because the algorithm tends to reflect existing social inequality and thereby reinforces the bias. Ultimately, the study found that risk scores were unreliable in forecasting violent crime. The Chicago Police Department's Strategic Subject List, commonly called the Heat List, has also come under attack for its reliance on an algorithm that critics charge assigns risk scores in an overly simplistic manner and without proper transparency (often because the owner of the predictive software will cite proprietary technology as a reason not to share details of its inner workings).

There has been some pushback against the ProPublica story from Northpointe, the company that developed COMPAS, and from others. There is also a view, seemingly backed by some evidence, that relying on predictive software can reduce racial bias, a point raised by Benjamin Alarie in a recent interview with CBA National.

David Colarusso, a data scientist, has an interesting post up on what a Portland experiment on predicting criminal activity can teach us about the limits of algorithms, namely that the accuracy we fixate on in predictive algorithms is probably the “wrong metric for success”:

Why the concern? Because the distribution of winners and losers is probably unbalanced. Consider SCOTUS cases. A set of legal scholars polled for their predictions on SCOTUS outcomes came in with an accuracy of 59%, but it is well known that in the modern Court most cases are reversed. About 63% of cases are decided in favor of the petitioner. So a simple algorithm that always predicts the petitioner as the winner is accurate 63% of the time, beating our legal scholars. You can be accurate without being insightful.

Medicine has known this for quite some time. If one in twenty people have a disease and you have a “test” for this disease that always comes back negative, it will be 95% accurate and 100% worthless. So when discussing the efficacy of such tests, medical professionals tend to speak of sensitivity and specificity, measures that focus on the number of true positives and true negatives, the logical siblings of false positives and false negatives.

Data scientists often use a similar pairing, precision and recall.

The takeaway? You can be right for the wrong reasons, and accuracy isn’t always “accurate” enough. In fact, there is a whole constellation of measures used to assess the quality of a prediction/classification. The trick is choosing the right subset for the case at hand.
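To make the arithmetic in the quoted examples concrete, here is a minimal Python sketch (the data and function names are illustrative, not Colarusso's code). It scores a hypothetical "test" that always comes back negative against a population with 5% disease prevalence, and reports accuracy alongside sensitivity, specificity and precision:

```python
# Minimal sketch: why raw accuracy can flatter a useless classifier.
# Numbers mirror the hypothetical above (5% prevalence, a "test" that
# always says negative); names and data are illustrative only.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def report(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # a.k.a. recall
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, sensitivity, specificity, precision

# 1,000 people, 50 of whom (5%) actually have the disease.
y_true = [1] * 50 + [0] * 950
always_negative = [0] * 1000          # the "test" that never flags anyone

acc, sens, spec, prec = report(y_true, always_negative)
print(f"accuracy={acc:.0%} sensitivity={sens:.0%} "
      f"specificity={spec:.0%} precision={prec:.0%}")
# -> accuracy=95% sensitivity=0% specificity=100% precision=0%
```

A test that flags no one scores 95% on accuracy yet never identifies a single sick patient, which is exactly the gap between being "accurate" and being insightful that Colarusso is pointing to.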

Colarusso’s broader point is that predictive algorithms in criminal cases (and civil cases too, for that matter) may yield some benefits, but only within a system that is demonstrably fair. That is why predictive algorithms used in our justice system must be made public and open to scrutiny.

 
