
Will legal products of the future be accessible to Canadians?

Thinking through the implications of AI for access to justice.

Ryan Fritsch

ChatGPT is barely five months old but has already reframed how the profession discusses legal-tech and access to justice. And little wonder why. To paraphrase science fiction author Arthur C. Clarke, the AI-powered text generator appears sufficiently advanced to be indistinguishable from magic. 

The latest version – GPT-4 – has already demonstrated its legal chops by scoring in the 90th percentile on the LSAT and Uniform Bar Exam. It can draft facta, contracts, custody settlements and affidavits. It can summarize and contrast case law or write closing arguments. Non-lawyers can get pithy, plain-language descriptions of rights related to wrongful dismissal, human rights discrimination, breaking a rental tenancy or criminal charges.

How transformative the technology will be at a professional level was the question put to a panel hosted in March 2023 by the Law Commission of Ontario and the University of Ottawa Centre for Law, Society and Technology.

So far, people's experiences tell us that ChatGPT is primed to supplement or support the work of lawyers – by quickly producing first drafts, summarizing lengthy meeting notes, or pulling together key research.

But it cannot supplant human thinking, expression and judgment, in whole or in part. ChatGPT has many blind spots that leave its output short of a professional standard of practice.

For starters, ChatGPT may correctly suggest a set of generally applicable legal rights or protections, but its generic, often vague responses cannot tailor advice to the facts and interpretive issues of a particular case.

There is no legal framing, and ChatGPT responds to the question it is asked. It is up to the user to know roughly what their legal issue is before they ask about it. This can also result in tunnel vision that ignores better options.

Even where a jurisdiction is specified, ChatGPT is unlikely to account for nuances a lawyer would recognize or ask about, creating jurisdictional blind spots. In a workplace discrimination case, for instance, ChatGPT may not distinguish that a bank employee is regulated under federal rather than provincial law unless prompted (by a human) to do so.

Another issue is that asking ChatGPT to draft a contract will likely produce a rudimentary, Mad Libs-style document, ignorant of specific phrasing, case law, or jurisdictional requirements.

Also significantly problematic are "hallucinations." ChatGPT has an acknowledged problem of fabricating information to fit the desired response. The developers of ChatGPT have advanced more sophisticated training to limit such hallucinations, and they claim the software's most recent (and paid) versions are more reliably factual. But as one law professor recently demonstrated, hallucinations can poison the entirety of a response provided by the free and publicly available ChatGPT tool.

For now, ChatGPT has been trained on data only through the end of 2021. This can leave out important contextual facts (like the invasion of Ukraine and Finland's admission to NATO!) and timely legal developments. ChatGPT also has a future problem: it struggles to extrapolate and anticipate future consequences and conditions.

Lastly, ChatGPT is oblivious to the role of advocacy in resolving disputes. Questions that remain uniquely human relate to the effective use of legal and litigation processes; strategy and tactics informed by resources, social, political and other concerns; and approaches tailored to best satisfy the desired outcome of the client.

An access to justice solution

Where lawyers are not readily accessible, ChatGPT will likely be taken up with enthusiasm, perhaps fulfilling long-standing gaps in access to justice for millions of Canadians who cannot afford, or otherwise access, legal advice for everyday (but no less important) legal problems.

ChatGPT offers 24/7 access to a helpful legal ear and information that appears sophisticated and credible. There is no question that many Canadians will be asking it for advice about employment discrimination, family law, refugee and immigration law, landlord and tenant disputes, and any other everyday legal problem.

It can educate the public and identify and affirm people's basic rights and protections, acting as a triage tool while making the law more relevant and accessible. It can support unrepresented litigants in preparing documents for a court, tribunal, or administrative process (though significant errors appear inevitable). Courts, tribunals and administrative bodies would be wise to prepare procedures, rules and policies to govern how AI-generated texts, arguments and evidence will be assessed for reliability, admissibility, and procedural fairness.

ChatGPT may even encourage settlement among unrepresented parties by offering a mutually acceptable neutral opinion on the law or dispute.

But because the information is uncurated, non-lawyers will not know what is reliable or correct, or even whether a response is playing in the right legal ballpark. Laypersons also won't see obvious gaps in ChatGPT's analysis the way lawyers will. This could distort public understanding of the law and lead to dissatisfaction with legislation.

Fairness between the haves and have-nots

We must also be mindful that inequitable access to sophisticated and efficient legal tools raises a serious concern with procedural fairness. ChatGPT is a general-purpose tool made freely available to the public. However, new AI-powered private legal products are already being developed and differentiated (such as Harvey). As private tools, they may come to be restricted to the profession or priced out of reach for many Canadians.

User privacy and confidentiality are also of concern. There are no guarantees that information inputted into ChatGPT remains confidential or restricted from other uses. Internet giants like Amazon have forbidden developer employees from debugging proprietary software code with ChatGPT, while the Italian data protection agency has suspended ChatGPT's collection of information pending review under the European Union's General Data Protection Regulation. Meanwhile, the federal Privacy Commissioner has announced an investigation into ChatGPT's data harvesting and use in Canada.

What's next?

Amid many uncertainties, one thing is clear: ChatGPT is moving much faster than regulators and there are few (if any) limits on the development and release of AI products.

Over 1,100 luminaries signed an open letter calling for a moratorium on developing more powerful generative AI models so regulators can catch up. The Law Commission of Ontario similarly proposes an array of proactive reforms so both civil and criminal justice institutions aren't left playing catch-up or expected to solve the systemic challenges posed by AI on a case-by-case basis.

Canada recently introduced legislation – the Artificial Intelligence and Data Act – regulating "high-impact" systems. But lawmakers have yet to define what "high-impact" means. Also unknown is how the Act might treat access to AI-generated information akin to legal advice, or the unauthorized practice of law.