How a BC judge is tackling AI-hallucinated cases in his courtroom
In March, AI-generated content was alleged to have made its way into a Canadian court decision for the first time.
As La Presse reported, the issue arose in a commercial fraud case at the Quebec Superior Court. The matter, which led to a judgment of more than $120 million, is being appealed by the defendants, who allege it includes fake Supreme Court of Canada case law.
Allegations that AI was used have not been confirmed.
“That gave me some pause when I heard about that case,” Justice David Masuhara of the Supreme Court of British Columbia told Verdicts & Voices host Alison Crawford in a recent episode exploring the creep of AI into the court system.
“The way that I think that that would normally be dealt with is, of course, through the leadership of the individual court that's involved. The chief justice obviously would be involved.”
Tackling AI in the courtroom is a situation Masuhara is familiar with — he is widely believed to be the first judge in Canada to encounter AI-generated fake citations in a headline-making case in 2024.
Two AI-hallucinated cases, meaning they were completely made up, were cited in the supporting materials of a case he was presiding over, involving a divorced man who wanted to take his children abroad.
“Obviously, it was still early days, and the concern was with respect to where is this all going to go and some early intervention,” Masuhara says.
In that case, the man’s lawyer intended to cite two cases that would have provided valuable precedent for her client. She withdrew the citations after opposing counsel discovered they weren’t real.
In his decision at the time, Masuhara wrote that competence in the selection and use of any technology tools, including those powered by AI, “is critical.”
Two years on, however, the influence of AI in the courts has only accelerated. Masuhara has presided over two more cases in which AI has been used in submissions or hallucinations have appeared in citations, including one in which a lawyer identified that 20 cases cited in a submission from the other side were AI-generated.
“They couldn't find the cases except for one,” he says.
“It was a judicial review, and the one case that was accurate was that one decision, of course, that kicked off the whole reason for the judicial review.”
The increased use of AI tools in the legal field has meant Masuhara needs to be more cautious when listening to presentations to the court.
“In many cases now, the majority of these cases that are identifying difficulties are from those who are self-represented,” he says.
A self-represented litigant may present something that looks very polished, but that polish can itself be a “yellow flag” about how the information was generated.
“They are not trained, and they don't have sort of the legal professional obligations of what they present to the court. I think the concern really is just, what are they presenting to the court, and beyond just the discipline that would be required of a professional.”
Masuhara places greater confidence in trained counsel to conduct their own due diligence in verifying documents and sources, including materials presented by opposing counsel. But he remains cautious when dealing with lawyers he may not know.
To help weed out potential AI-generated information, he asks counsel for lists of their documents and the case law they’re handing up, so that he receives a set of full decisions rather than just case names and citations.
“The problem that we see is that the fake cases are fake names, fake citations. If they actually have to go to the second stage of actually finding the case and producing the entire decision, I think that goes some distance in terms of verifying the source.”
So, how much extra work is this creating for judges?
“It's hard to say because sometimes you can deal with it fairly quickly, where you can just point it out and say this doesn't appear to be real,” Masuhara says.
“I think where it becomes an issue that takes some time is the fact that we have to approach these things a bit more cautiously now. It’s taking more time in terms of thinking, okay, does this make sense? Where's the source for this? I haven't heard this line of argument before. … Is this a novel new principle? And where is it coming from? Sometimes the arguments are well developed, then you have to dig a little deeper into the validity of the argument.”
He sees the value of using AI to prepare legal arguments and to enhance productivity and efficiency, but thinks there is a cost to using these tools.
“We need to identify that and recognize how that might impact our judicial function going forward,” Masuhara says.
“What are the courts going to look like 10, 15, 20 years out? And to then sort of shape the responses to the adoption of various forms of artificial intelligence. Otherwise, we'll just be reactive as opposed to being proactive.”
Tune in to the full episode to hear what Justice Masuhara thinks the courts and law societies should be doing to mitigate the potential consequences of AI in the justice system and his concerns about how AI could be “de-skilling” legal professionals.