
Before we automate criminal justice, we need to understand it

The problem is not just that AI might make bad decisions; it’s that we don’t agree on what a good decision looks like


Artificial intelligence is coming for the Canadian criminal justice system. 

Much of the conversation around the anticipated change currently focuses on a familiar list of concerns, including algorithmic bias, lack of transparency, and questionable reliability. While these challenges are important, they seem resolvable, at least in principle, through better technology and regulation. 

However, beyond these concerns lies a more intractable problem that goes to the heart of our criminal justice system. The fundamental question we must grapple with before AI can be ethically implemented is: What are we trying to achieve with the Canadian criminal justice system?

AI is supposed to help us better achieve our goals. Yet, without anything close to a consensus on what “better” means in the context of criminal justice, it will be difficult to know whether AI is helping.

Fortunately, Canada has been slower than other jurisdictions to adopt AI in the criminal justice system. The United States has been experimenting with AI at virtually every stage of the criminal process, including in policing, bail, sentencing, and parole decisions. While some Canadian police agencies are beginning to use AI tools for investigations, the technology has not yet been implemented in our most consequential decision-making stages. As University of British Columbia Professor Benjamin Perrin has noted, Canada’s delay means we can hopefully learn from the pitfalls and problems other jurisdictions have experienced.

Whether it comes sooner or later, AI will likely have transformative effects on our legal system. Recognizing this, the Law Commission of Ontario recently released a major report on AI in Canada’s criminal justice system. It highlights risks related to bias, privacy, disclosure, transparency, accuracy, and oversight. The report also echoes Perrin’s concern that Canada has no legal framework governing AI in the context of criminal justice.


In general, integrating AI in this realm raises two different kinds of ethical challenges.

The first arises in parts of the system with relatively broad agreement over their purpose. 

Here, the question is whether AI can reliably help achieve that purpose without undermining other legal values. These applications are easier to evaluate. For example, AI could help automate routine procedural steps or manage backlogged cases. It could also help analyze past cases and evidence to flag possible wrongful convictions. 

While the Law Commission has raised important concerns in this area, such as transparency and reliability, it’s at least possible to determine whether the technology is helping. As University of Toronto Professor Vincent Chiao suggests, AI can help streamline “high volume, low severity” cases that have clogged our courts. These tools would provide a measure of certainty, predictability, and even-handedness by expediting uncontroversial but inefficient aspects of our system.

A more controversial use of AI would be to improve the accuracy of risk assessments, which figure prominently in bail decisions. These tools raise serious ethical issues, especially given their track record of racial bias in the U.S. However, it’s at least possible for us to assess whether AI is better or worse than human judges at predicting the likelihood the accused will commit an offence if released.

The second kind of ethical challenge is more fundamental. It concerns parts of the process where there isn’t agreement over what we’re trying to achieve. As Chiao has noted, the biggest issue is that at some level, we don’t know what we want from the justice system. Or, perhaps, we want a lot of different, partially conflicting things, and are unwilling or unable to set priorities in any reasonably clear and systematic way. Without a clear sense of the purpose of some parts of the system, it’s difficult to know how we should regulate or evaluate proposed AI interventions.

Sentencing is a prime example. The Criminal Code lists a half-dozen different purposes of sentencing, many of which are in tension or point in opposite directions. Are we aiming for deterrence, accountability, denunciation, rehabilitation, incapacitation, or justice for the victim? Without consensus on this question, it will be hard to know what outcomes AI sentencing tools should be optimized for or how to evaluate whether a given tool is helping.

The same problem applies to other official decisions that involve significant discretion, including whether to proceed with a prosecution. An AI tool could conceivably predict whether there’s a reasonable prospect of conviction. However, it’s difficult to imagine how it could reliably determine whether a prosecution is in the public interest, because the idea of the public interest in this context is not clearly understood or agreed upon. Without answers to this and other fundamental questions, we risk introducing AI that we have no principled way of evaluating. Or, worse still, we risk importing AI that reflects the values and goals of other jurisdictions, like the U.S.

Perhaps this uncertainty suggests that Canada should follow the EU’s AI Act by introducing “no-go” zones that prohibit AI in high-stakes contexts in the criminal process. As Perrin has noted, this could prevent some of the technology’s most unacceptable risks, but it wouldn’t resolve the underlying problem.

The problem is not just that AI might make bad decisions. It’s that we don’t agree on what a good decision looks like. Until we clarify what we want from our criminal justice system, we won’t be able to design or evaluate AI tools effectively. 

AI can only take us forward if we know where we’re trying to go.