
What made-in-Canada legal AI can and can’t do well

Some of the technology’s promise is coming true


Many of us have been waiting for the day when a Canadian legal database will be harnessed to the power of an AI language model like ChatGPT.

How great would it be to ask: “Find me a case where someone does X, resulting in Y, and the court grants judgment, acquits, or excludes the evidence…” Imagine never having to anguish over finding just the right Boolean search string or trawl through 30 cases to find one or two on point.

If AI could do this, it would save us loads of time, and time is money. The AI might summarize cases and maybe even draft an argument we could hand to the judge or craft an opinion letter to a client. We would certainly need to check it over, tweak and refine it, but AI could make some of the hardest parts of legal writing so much easier.

Well, the long wait is over, and the verdict is in. Over the last year, a host of platforms have emerged in Canada that purport to do precisely what I’ve described here—including what may be the flagship platform, LexisNexis’ Lexis+ AI. How well does it work? What can it do well, and what can’t it do?

The news is good and bad. Understanding how the technology works is key to understanding why it can't, and never will, do research the way we're all hoping. But there are things it can do well, and they're huge.

First, the bad news.

When I attended a demo of Lexis+ AI, I thought I was seeing pure magic unfold. A company rep plugged in a canned prompt — a fact pattern followed by a discrete legal question — and out came a wonderful, concise 600-word mini-memo on point, beginning with statutory provisions, followed by case law and even an article from a law journal. It was dazzling.

When I gained access (it’s free to profs and law students), I ran various queries in areas of the law I know well. The results were telling. In one case, I asked Lexis+ AI to find five cases where a person is arrested and his backpack is searched at the police station. Courts have held that a search can take place long after an arrest and still be lawful, depending on the reason police carried it out. I wanted to be spared having to wade through 30 cases that included the words “station” and “backpack.”

The app gave me a mini-memo — just as in the demo — but it was decidedly unhelpful. The five cases it came up with contained the words “station” and “backpack,” but none involved a search “at the station.” This is when the penny dropped: Lexis+ AI isn’t reading the cases and legal AI doesn’t work the way I hoped it would.

What’s really happening here is obvious. Lexis+ AI isn’t trained on Canadian cases. The chatbot simply converts your plain English prompt into a Boolean search, runs the search, and then plugs the results back into the chatbot to present a nicely written mini-memo. What it isn’t doing is reading the cases for you. We’re still stuck with Boolean searches for that.
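For readers who want to see the plumbing, here is a rough sketch, in Python, of the kind of pipeline I am describing. To be clear, the function names and prompts are my own invented stand-ins, not anything LexisNexis has published; the point is only that a language model writes the search string and the memo, while an ordinary keyword search sits in the middle doing the actual retrieval.

    # Illustrative sketch only: hypothetical stand-ins, not vendor code.

    def ask_language_model(prompt: str) -> str:
        """Placeholder for a call to any chat-style language model."""
        raise NotImplementedError("Connect this to the model of your choice.")

    def run_boolean_search(query: str) -> list[str]:
        """Placeholder for a plain keyword/Boolean search of a case database."""
        raise NotImplementedError("Connect this to a legal database.")

    def answer_research_question(user_prompt: str) -> str:
        # Step 1: the model rewrites the plain-English request as a Boolean query.
        boolean_query = ask_language_model(
            "Rewrite this research request as a Boolean search string:\n" + user_prompt
        )

        # Step 2: a conventional search returns cases that merely contain
        # the keywords. Nothing at this stage "reads" the cases.
        excerpts = run_boolean_search(boolean_query)

        # Step 3: the model drafts a polished mini-memo from whatever came back,
        # which is why the answer reads well even when the cases are off point.
        return ask_language_model(
            "Write a short legal memo answering the question below, citing only "
            "these excerpts:\nQUESTION: " + user_prompt + "\nEXCERPTS:\n"
            + "\n---\n".join(excerpts)
        )

The fluency comes from the final step; the relevance, or lack of it, is decided entirely by the keyword search in the middle.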

Even a language model trained on cases wouldn't do the trick: left to answer from memory, it generates text that is plausible rather than text that is retrieved, so it would simply produce fake cases that sound like the real ones it was trained on.

Now for the good news.

A host of tools, including Google's free NotebookLM (as well as Lexis+ AI), let you upload your own documents and prompt a language model to work on them. I have found this enormously useful. Using NotebookLM, I have uploaded my lecture notes and instantly generated three-page summaries for my students. I need to tweak and revise them, sometimes significantly. But having a first draft is a huge time-saver.
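Here, too, a small sketch shows why this workflow behaves so differently. The function below is a hypothetical stand-in for whichever model you use, not NotebookLM's actual interface; the key is simply that the prompt tells the model to work only from the material you supply.

    # Illustrative sketch: ground the model in your own document.

    def ask_language_model(prompt: str) -> str:
        """Placeholder for a call to whichever language model you use."""
        raise NotImplementedError

    def summarize_notes(lecture_notes: str) -> str:
        prompt = (
            "Using only the lecture notes below, write a three-page summary "
            "for law students. Do not add anything that is not in the notes.\n\n"
            "NOTES:\n" + lecture_notes
        )
        return ask_language_model(prompt)

    # Usage: read your notes from a file and edit the draft that comes back.
    # draft = summarize_notes(open("my_lecture_notes.txt").read())

Because the model is drafting from text you gave it, rather than searching for text it cannot read, the errors tend to be matters of nuance rather than wholesale irrelevance.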

I’ve uploaded cases, journal articles and slide decks and generated stunningly good summaries and overviews. I could easily see litigators prompting one of these tools with a fact pattern or a will-say and asking it to draft questions for cross-examination. It could summarize a handful of cases and generate at least a first draft of an opinion or a memorandum of argument. It could do wills and contracts and much more.

In all of these cases, you will still need to do a lot of editing, and there will be subtle errors in the summaries of law. Nuances will be lost or misstated. But I am finding time and again that the hardest stage in many of these tasks — getting off the ground with a first draft — is immensely easier with AI. The one thing it does really well is act upon the material you provide — as long as you are very specific in your prompt.

So, while it isn’t perfect, AI is making some of our work much easier. Some of its promise really is coming true.