
Canada mustn’t rush into legislating AI

The federal government wants to be the first to implement an AI regulatory framework. Critics want it to press the reset button and focus on getting it right.


How do you regulate a revolutionary technology that evolves from one week to the next? Just craft a regulatory framework that's as agile as the technology itself. At least, that's the approach governments are trying to take to develop rules governing artificial intelligence. But in Canada, the calls for Ottawa to start over and rethink its approach are getting louder.

According to Teresa Scassa, the Canada Research Chair in Information Law and Policy at the University of Ottawa, the federal government is "making law on the fly" and not in a good way. "I have never seen anything like this in my life," she told the CBA Privacy and Access Law Symposium this month. "This is the kind of agility you have when you fall off a cliff, and your body makes this beautiful arc before you hit the ground. It's agile, but it's not my kind of agile."

Scassa was sitting on a panel with Barry Sookman, a partner at McCarthy Tétrault, and Ashley Casovan of the Responsible AI Institute. They spent the better part of the session debating whether the proposed Artificial Intelligence and Data Act – or AIDA – should be completely split off from the federal government's broader Bill C-27, known as the Digital Charter Implementation Act, and sent back to the drawing board.

In fairness to the government, regulating AI presents a formidable challenge. The technology is unlike anything we've seen before in terms of its speed of iteration and deployment across all sectors of the economy. It also raises a long list of concerns. There's the potential for privacy breaches and violations, as well as online harms. We know AI tools are susceptible to algorithmic discrimination. There are also copyright-related challenges, deepfakes, and AI's impact on employment practices to worry about.

Hence the pressure on governments worldwide to act quickly. And in the race for global AI dominance, they need to find the right balance between fostering innovation and ensuring AI safety. The trouble is that predicting the future is hard, so they're trying to build flexibility into their legal frameworks.

But at what cost? In Canada, critics argue that AIDA, introduced as shell legislation in June 2022, lacks a clear roadmap for regulating AI systems. It only vaguely promises to govern high-impact AI systems. The government says it will flesh out the details later in regulation following public consultations, but the bill says nothing of medium- and low-impact systems. That has left stakeholders in the dark about what any of it means and raises questions about AIDA's effectiveness and ethical oversight, said Sookman.

"AIDA is like an algorithmic black box," said Sookman. "Nobody's in a position to really assess what's in or out or how the regulations will apply."

Attempts to clarify the law's scope, such as the companion document Industry Minister François-Philippe Champagne released earlier this year categorizing high-impact AI systems, have not dispelled the confusion.

Comparisons with the European Union's (EU) AI Act highlight AIDA's perceived shortcomings. The EU's legislation, expected to become law in early 2024, is far more detailed and classifies AI systems according to a sliding scale of risk. The higher the risk, the stricter the requirements. 

With its proposal, the EU hopes to leverage its market power and the extraterritorial scope of its legislation to set the tone for the rest of the world, as the General Data Protection Regulation (GDPR) did for much of the world's privacy legislation.

But according to Sookman, it's a stretch to say that AIDA aligns with the EU legislation. In any event, he stresses the need for Canada to align itself with its major trading partner, the United States, and, to a lesser extent, the UK.

The latter is on its way to adopting a principles-based model, delegating responsibility to existing regulators. Meanwhile, the White House's executive order last month features eight guiding principles for AI safety. It's then up to federal agencies to address the AI risks within their area of expertise. In contrast, AIDA contemplates the appointment of a commissioner who would answer to the industry minister, a distinction Sookman argued will lead to inefficiency and overlap.

Consider the use of AI in our courts, for instance. Logically, that ought to fall under the jurisdiction of the Justice Minister, said Sookman. Similarly, the regulation of AI in the workplace ought to fall under the remit of the Minister of Employment. Instead, we'll have a "structure that puts all of the authority in the regulation of an absolute key technology that will be pervasive in every product and service under the authority of one ministry." 

It is a widely held view in the legal community. Privacy lawyer David T. Fraser of McInnes Cooper says Canada needs to put on the brakes. It's too early for the country to build a bespoke AI legislative model, he says, suggesting it should let larger economies lead the way.

"I'd like our legislation to be more readily operable with international standards. We have to recognize that Canada is a very small market. It's far-fetched to think that we could be the tail that wags the dog."

Even the constitutional foundations of AIDA are questionable, said Scassa during the session. Never mind that the bill includes a provision excluding the law's application to federal institutions covered by the Privacy Act, meaning it can only apply to provincial ones. "AIDA is supported by the international and interprovincial trade power, which is limited and is going to create problems."

Sookman also agrees that Canada needs to ensure that authority for regulating AI is spread across multiple institutions at the federal, provincial, and local levels.

Among the high-impact categories identified by Minister Champagne are health care and emergency services. This is an area where AI is making significant advances, Scassa noted. "That's not going to be captured by the federal legislation because it is going to be purely provincial." As painful as the prospect may be for Ottawa, it needs to involve the provinces to a far greater degree. 

And therein lies the root of the problem: the lack of extensive public consultation on AIDA before drafting the bill. It stands in stark contrast with far more inclusive approaches pursued by the EU, the US, and the UK. "There are a lot of people who have things to say about this," said Scassa. "So, it's problematic from a democratic perspective."

On the whole, observers are asking why Canada is trying so hard to win the race to have an AI regulatory framework in place, particularly as the government has already announced it will take at least two years of consultations and study after AIDA receives Royal Assent to draw up the regulations.

Last month, several organizations published an open letter to Minister Champagne, calling for AIDA to be split from Bill C-27 on the grounds that it wasn't ready "for committee consideration." The CBA's Privacy and Access Law Section has yet to make its own recommendation on the matter, and is currently working on a submission, says Fraser.

"One could rationally take the position that [AIDA] could be fixed with regulations, which are more nimble [than statutes]," says Fraser, speaking on his behalf, not the Section's. "But I'm afraid we're going to regret this […] There are structural issues with the Act that are really problematic."

For her part, Casovan acknowledges that Bill C-27 is far from perfect and that the definition of high-impact systems needs to be clarified before passage of the bill. "However, I don't think that these are things that should stop the creation or continuation of this bill," she said, adding that the bill already points to a more decentralized form of sectoral regulation.

Still, it's hard to imagine that faith in the bill can be salvaged. "If the bill was scrapped, we could use those two years to do a consultation, to do advanced thinking, to introduce a new bill and pass it," said Sookman. "We're not necessarily losing time. We're just stepping back to get it right."