
Practising safe AI

The legal profession needs to prepare for new rules, coming from all sides, regarding the use of AI in law.


As generative AI continues to shape innovation in the legal sector, regulators are grappling with how to provide ethics guidance to lawyers while ensuring members of the public are protected. It's an uncertain environment in which AI companies must anticipate and prepare for a changing regulatory landscape.

While AI promises enhanced productivity across all industries, reports of lawyers submitting AI-generated court documents containing errors or citing fake cases have stoked concern that irresponsible use can harm clients.

The first legislative item to watch is the European Union's AI Act, set to play a crucial role in governing the use of AI. At its core is a categorization system whereby AI systems are regulated based on the level of risk they pose to a person's health, safety, and fundamental rights. According to Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa, the EU legislation will compel legal tech companies to address privacy and security concerns.

"We have growing expectations about safety and ethics," she says. "The EU regulations will make legal tech companies follow these new principles like they did with GDPR. This is going to have an impact on developing AI."

In Canada, the Artificial Intelligence and Data Act (AIDA) is part of a package of laws introduced under Bill C-27, along with the Consumer Privacy Protection Act, which would modernize Canada's private sector privacy laws, and the Personal Information and Data Protection Tribunal Act, which would establish an appeals tribunal for decisions rendered by the Office of the Privacy Commissioner. There have been calls to separate AIDA from Bill C-27 to slow its adoption so that the government may properly consider the legal liabilities and risks that AI presents. 

Meanwhile, regulators are working to give more guidance on what lawyers should be looking for. The Law Society of British Columbia recently released guidelines on professional responsibility and generative AI covering the usual topics of confidentiality, security and competency. Rule 3.2.2 of the BC Code requires lawyers to be honest and candid with clients. The guidelines state, "With this obligation in mind, it is prudent to make your client aware of how you plan to use generative AI tools in your practice, generally, and on their specific file(s)." 

California is at the forefront of AI regulation in the United States. In November, the state bar became one of the first in the country to approve generative AI guidelines. Lawyers must not input confidential information into generative AI tools, and they must review terms of use to ensure third parties don't use inputted data to train AI models. The guidelines also outline a duty of competency, which includes understanding the biases in AI, and expect lawyers to supervise the use of generative AI in their firms. Lawyers may charge clients for costs associated with generative AI, but they must also explain how it is being used.

The Florida Bar is considering similar guidelines and issued a Proposed Advisory Opinion in November; lawyers and the public have until January 2 to comment. It covers the same issues as California's guidelines, specifically confidentiality, supervision of staff using AI and disclosure to clients.

There's also ongoing work on regulatory measures at the American Bar Association, which launched an AI task force earlier this year. 

The UK recently released a consultation paper on AI legislation, and the Law Society of England and Wales has submitted recommendations emphasizing confidentiality and human supervision. It recommends embedding AI officers at organizations, with roles similar to privacy officers, and proposes greater transparency and ensuring an individual's right to a human appeal against decisions made by AI systems, especially in the "high-risk, high-stakes realm of the justice system."

How tech companies will respond remains to be seen. There's ongoing litigation about copyright and other AI issues. And if there's a lesson to learn from the recent drama that played out on OpenAI's board, it's that law firms need to be careful about which legal tech providers they rely on.

Legal tech expert Jillian Bommarito says the incident is a reminder not to rely too heavily on a single product. OpenAI ran into issues in Italy earlier this year when the government temporarily banned ChatGPT due to privacy concerns.

"We shouldn't be surprised by these incidents," says Bommarito, chief risk officer of 273 Ventures, a legal tech consulting firm. "If AI is part of your supply chain, whether or not it directly effects your security processes, you should be prepared. Law firms should have a procurement process that looks at risks like this." 

Law firms must formulate a strategy to manage risk, diversify product use, and ensure human oversight of AI inputs and outputs. "You want to make strategic decisions," says Bommarito. "In tech, they like to move fast and break things. You don't want to do that. You want to embrace tech and keep within your risk tolerance. Some firms are comfortable taking an aggressive stance because they're using it for the business of law, not necessarily on clients."