
Trustworthy criminal AI

A new series of reports analyzes the risks and opportunities of the fast-evolving technology across the Canadian criminal justice system


In the ongoing conversation about using artificial intelligence, criminal law is usually forgotten.

The Law Commission of Ontario (LCO) has been working to change that and recently released a series of reports outlining the policy concerns surrounding the use of AI in the criminal justice system.

The five issue papers follow a criminal file’s life cycle, ranging from AI in law enforcement to AI in sentencing hearings. The series is the culmination of six years of work with government agencies, academics, and the courts to create the most comprehensive summary of AI’s influence on the Canadian criminal justice system. Each issue paper includes a list of questions for discussion, meant to serve as a launchpad for determining what laws and regulations are needed to manage this fast-evolving technology.

The project grew from a roundtable discussion held by the LCO and the Law Society of Ontario in 2018. Seventy-five people from the courts, legal aid, and private practice came together to discuss future trends in the criminal justice system. The one that stood out was the use of AI.


“No one was pulling the threads together about what this means when it comes to risks and how to create a framework to use AI,” says Ryan Fritsch, the LCO’s policy counsel and project lead of the AI criminal justice project.

“We found reports that look at one particular institution, like the courts, or one kind of technology, like drones used by the police. We wanted to create a platform that looks at the entire Canadian criminal justice system.”

The use of technology in the system goes back decades, starting with breathalyzer tests in the 1960s. However, AI has made things much more complex. A 2016 investigation by ProPublica in the U.S. found that a widely used software-based risk assessment tool falsely flagged Black defendants as likely to re-offend at nearly twice the rate of white defendants.

In 2021, the Privacy Commissioner of Canada found that the RCMP violated the Privacy Act when using Clearview AI, a U.S. facial recognition program, to assist in criminal investigations.

Fritsch says lawyers have a critical role to play in AI implementation, particularly because procedural fairness concerns, from bias to a lack of transparency, are among the most common criticisms of the technology.

“Some AI systems don’t give reasons for how they make decisions,” he says.

“That goes against procedural fairness principles that say you need to have logical reasons for decisions. The best experts for procedural fairness are humans, and lawyers are trained to do this.”

Fritsch wrote the paper “Use of AI by Law Enforcement,” which examines popular tools used by law enforcement, including facial recognition software and predictive policing. Several companies have created algorithms that analyze large datasets of criminal records, location data, and other factors to identify patterns of criminal behaviour. The Calgary and Vancouver police departments, along with agencies in several U.S. states, use these programs.

A few police departments, like the Toronto Police Service, have created AI policies, but it’s unclear whether they are effective. Fritsch isn’t sure whether police oversight boards are equipped to understand and monitor AI use. 

Then there’s the issue of how to use AI data in prosecutions. Crown prosecutors have to make decisions about the admissibility of data, disclosure, and privacy concerns, and there’s little guidance on how to tackle these issues.

“We can’t sit back and let AI happen to us,” says Fritsch.

“Leaving it to case law is not a good regulatory model. We have to be proactive and identify the risks.”

Another report in the series, “AI and the Assessment of Risk in Bail, Sentencing, and Recidivism,” covers the growing trend of using AI risk assessments. Several investigations in the U.S. have reported that these programs are biased against Black and other marginalized groups. Gideon Christian, one of the report’s co-authors and a law professor at the University of Calgary, says AI has absorbed the bias in our criminal justice data.

“There’s no bias-free AI,” he says.

“Data is historically biased. Minority communities are often disproportionately represented — whether underrepresented or overrepresented — in the datasets used to train many AI tools.”


Dealing with AI in criminal trials is complex, and the LCO report on trials and appeals asks whether the current processes for handling breathalyzers, body cams, and other types of evidence are enough in an AI world.

“Humans tend to have trust in tech,” says Christian.

“They rely on information generated by these tools but don’t understand how that information is generated. We have a false sense of trust with tech.”

The issue is that defence counsel don’t always know how such programs are being used, and even when they do, getting data from the companies on how the programs work is challenging. Prosecutors also have to consider how much weight to give these programs.

Even with these concerns, Christian believes there’s still a place for AI in the courts, including file management and other administrative tasks.

“You have to identify who is using AI tools and why,” he says.

“Some AI tools serve purely administrative purposes, such as triaging tasks, while others, like facial recognition, have to be analyzed as evidence. We need more education about the bias in the tools and how they work.”

Another report, “AI and Systemic Oversight Mechanisms,” explores how AI should be implemented and how Canadians should decide whether some functions should ever be used. For example, the EU Artificial Intelligence Act prohibits real-time facial recognition in public spaces except in exceptional circumstances, such as preventing terrorism or finding victims of sex trafficking, and law enforcement must obtain prior authorization from a judge to rely on the exception. Canadian policymakers haven’t decided whether any AI uses should be prohibited here.

The conversation continues. The deadline for written submissions commenting on the issue papers is July 7, and organizations can also propose consultation initiatives to reach different stakeholders. Fritsch hopes the focus will remain on how Canada can create its own approach to regulating AI.

“We are using Canadian law as a lens for viewing how AI should be used,” he says.

“We have our Charter of Rights and Freedoms, human rights code and common law. We have to hold AI accountable to our Canadian legal standards.”