Law professor gives Lexis+ AI a failing grade

‘Given its current limitations, I cannot recommend this to my law students, and I would not use it for my own legal research at this time’

Prof. Benjamin Perrin

Artificial intelligence (AI) is rapidly being deployed in many sectors of society. As with any new technology, we must understand its capabilities and limitations, particularly given the high stakes of using AI in the legal context where professional obligations apply and clients’ vital interests are on the line.

Lexis+ AI is a new “generative AI-powered legal assistant” that LexisNexis says is “grounded in our extensive repository of accurate and exclusive Canadian legal content.” The company describes Lexis+ AI as “an efficient starting point for legal research” and claims it will “deliver results” that are “always backed by verifiable, citable authority.” One U.S. review said it is “like having a legal research wizard and a document-drafting ninja all in one.”

But does the Canadian release live up to the hype?

Incidents of lawyers in Canada and the United States getting into hot water for unwittingly submitting AI-generated “hallucinations,” or fake cases, to judges make a tool like Lexis+ AI more appealing. That said, the fine print doesn’t claim it will eliminate hallucinations, only that it reduces the risk of them. And whenever a case is mentioned with a hyperlink, that link goes to the actual case. Further, lawyers can’t upload confidential client information to unsecured platforms, which limits their options; Lexis+ AI promises a secure platform for this purpose. It also “shows its work” with hyperlinks to “content supporting AI-generated response.” With these important features, I was eager to give it a try.

After several rounds of testing, I found Lexis+ AI disappointing. I encountered references to non-existent legislation (without a hyperlink), headnotes reproduced verbatim and presented as “case summaries,” and responses with significant legal inaccuracies.

These issues are familiar from some free, general-purpose generative AI tools, but they are more concerning in a product marketed specifically to legal professionals and soon to be offered to law students who are still learning the law.

In light of this, I’m highlighting some emerging best practices for legal AI use.

Round 1, Prompt 1: Citing fake legislation

Lexis+ AI begins by asking, “Which legal task can AI accelerate for you today?” It offers several options: “Ask a legal question, generate a draft, summarize a case, or upload a document and ask questions.”

My initial tests of Lexis+ AI were run in August and September 2024, and I re-ran the same prompts just before submitting this review in late October 2024. All of the screenshots are available to download.

One of the options for “generating a draft” is to draft a motion to the Supreme Court of Canada. I began by asking Lexis+ AI to draft a motion for leave to intervene in a constitutional challenge to a drug possession offence. Its response referenced “Section 15.07 of the Canada Legislation,” which does not exist. When I pointed this out, Lexis+ AI failed to acknowledge the error and displayed an automated message instead. This was a disappointing start, especially given the platform’s promise of reliable, citable results.

However, I did notice that when Lexis+ AI provides a hyperlink to a case or statute, it’s not a hallucination. There was no such hyperlink for “the Canada Legislation,” so I learned to treat the absence of a hyperlink as a red flag. What about the quality of the “draft motion”? Unfortunately, what was generated didn’t even qualify as a rough first draft.

Round 1, Prompt 2: Headnotes copied verbatim as “case summaries”

Hoping for a better outcome, I asked Lexis+ AI to summarize the Supreme Court of Canada’s Reference re Senate Reform. Instead of generating an original summary, it simply copied the headnote from the case verbatim (including the Supreme Court Reports page numbers), offering no added value. Even worse, when I requested a shorter summary, Lexis+ AI bizarrely provided another verbatim headnote, this time from an entirely unrelated Alberta case involving a construction dispute. At this point, I decided to hold off on further tests until a LexisNexis faculty training session.

Round 2: Lexis+ AI declines to answer

During a faculty training session hosted by LexisNexis, I raised these concerning results with the representative and, at their suggestion, re-ran my original prompts. After all, AI is continuously learning and, one hopes, improving (if you re-run my prompts, you may get a different outcome depending on when you do so). However, Lexis+ AI refused to respond to either prompt, stating that the draft motion and case summary tasks were “unavailable.” I would have preferred this response to the inaccurate ones I received earlier.

Round 3: A failing grade on substantive legal questions

At this point, I tried a new approach. I posed legal questions to Lexis+ AI in areas of law that I teach and know reasonably well, such as “What is the test for causation in criminal law?” The responses were concise, confident and linked to actual cases, but the content was riddled with mistakes. Its explanation of causation confused criminal law with tort law, citing several incorrect (albeit real) cases and getting the legal test wrong. If a law student submitted this response, they would have failed.

Round 4: Have things improved?

Before submitting this review, I thought I’d better re-run my prompts to check whether the issues I’d encountered with Lexis+ AI had been resolved.

When I asked again for “a draft motion for leave to intervene in a constitutional challenge to a drug possession offence,” I got this response: “This use case is currently not offered by Lexis+ AI and will be available in the future.”

When I asked for a summary of Reference re Senate Reform, Lexis+ AI once again reproduced the headnote verbatim. When I followed up and requested “a shorter summary,” it produced the verbatim headnote from the Supreme Court’s decision in Charkaoui v. Canada (Citizenship and Immigration).

What is going on here? Without technical knowledge of, and access to, Lexis+ AI, we can’t know for sure. Notably, the SCC headnote for Reference re Senate Reform is 10,673 characters, while the Charkaoui v. Canada headnote is 9,503 characters. Rather than generating a shorter summary of the case I asked about, Lexis+ AI returned the shorter headnote of an entirely different case. It may be that Lexis+ AI failed to appreciate the context of the second prompt, perhaps reflecting a focus on extractive rather than generative AI. But again, we can’t know for sure.

Finally, when I re-asked “What is the test for causation in criminal law?”, Lexis+ AI correctly focused on causation in criminal law rather than in tort law (although one tort case slipped in). However, its response was very basic, and the cases it cited were not the leading authorities. It preferred lower court decisions to the leading Supreme Court of Canada jurisprudence that first-year law students learn.

The Verdict: A failing grade

Given its current limitations, I cannot recommend Lexis+ AI to my law students, and I would not use it for my own legal research at this time. Like all AI tools, it will adapt and change, and I will continue monitoring it to assess improvements, as its emphasis on verifiable authority and confidentiality remains a key strength. I strongly recommend delaying its release to law students until significant improvements are made. Better to get it right.

With generative AI becoming more prevalent in legal practice, it is crucial that lawyers, judges, law professors, and students understand both its potential and limitations. Even the tools I have found to be highly reliable always require verification, judgment and human oversight. Since AI applications evolve rapidly, it's important to evaluate their performance regularly.

Guidelines and best practices for GenAI

Law societies, such as the Law Society of BC and the Law Society of Ontario, have recently published guidelines for using generative AI in legal practice that are principled and helpful in navigating this new terrain. Universities like UBC have issued guidance for students. The Law Society of Ontario offers these best practices for AI use:

  1. Know your obligations  
  2. Understand how the technology works
  3. Prioritize confidentiality and privacy
  4. Learn to create effective prompts
  5. Confirm and verify AI-generated outputs
  6. Avoid AI dependency and over-reliance
  7. Establish AI use policies for employees
  8. Stay informed on AI developments

Navigating rapid advancements in AI has quickly become the latest challenge in legal practice and education. By approaching AI with a critical and informed perspective, we can ensure it becomes a valuable tool rather than a potential threat.