Canada sits on the fence about regulating AI

Concerns that an AI commissioner won't be independent enough from industry and government.

Thinking about AI

Disruptive technologies have been overturning legal applecarts since at least the introduction of moveable type. In the case of artificial intelligence, regulators are trying to catch up to a class of technology built to evolve.

The speed with which AI systems are growing in power and sophistication seems to have caught many clever people off-guard. When the federal government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 in 2022, its goal was to treat AI as an element of industrial policy — to manage the fears and expectations surrounding AI in order to build a domestic industry.

In less than a year, the global conversation on AI has changed radically. Pioneering AI developer Geoffrey Hinton quit Google, warning that AI chatbots are showing "scary" abilities and threatening to eclipse human talents. His warning touched off a wave of accomplished scientists and engineers making doomy predictions of AI-driven chaos — even human extinction. "If in six months you are not completely freaked the (expletive) out, then I will buy you dinner," Austin Carson, founder of SeedAI, told an audience in Texas two months back. This week, top AI researchers and CEOs warned in a statement that "[m]itigating the risk of extinction from AI should be a global priority." 

If the federal government is freaked out about AI, it's hiding it well. AIDA is an empty bucket of a law. Its stated purpose is to create a framework to regulate "high-impact" AI systems. But the legislation's definition of "high-impact" — along with virtually every other aspect of Canada's plan to keep AI on the rails — is being left to the regulation-writing stage. Ottawa defends this approach as "agile," capable of responding to a fast-moving technological climate.

It also might reflect the Canadian tradition of waiting to see what the other guy does first. The European Union proposed the first package of AI rules in the Western world two years ago, before ChatGPT and its cousins started terrifying engineers. The European Parliament amended that draft law in early May to make it far more restrictive.

The amendments ban "intrusive" and "discriminatory" uses of AI such as real-time biometric identification systems in public spaces. They forbid biometrics based on "sensitive" characteristics such as race and gender, along with predictive policing systems. They outlaw indiscriminate biometric data-scraping from social media or CCTV footage to create facial recognition databases — a measure aimed directly at Clearview AI's controversial business model. And they compel those behind the "foundational models" driving generative AI tech like ChatGPT to apply safety checks and mitigate risk before putting their products on the market.

"The EU is being more proscriptive in their proposed AI act," says Chris Ferguson, a partner at Fasken specializing in technology, privacy and cybersecurity law. "Here in Canada, the approach is to let it be more open-ended for now by leaving important concepts to be set by regulations later."

Canada is keeping the legislation vague while the Europeans get granular. The Americans' approach, meanwhile, has been more scattershot than moonshot.

While American developers have decried Europe's move to ban companies from providing remote API access to generative AI models, Washington's own response has been simply to task individual agencies with assessing their use of AI. According to a recent article by Brent Orrell, a senior fellow at the American Enterprise Institute, almost 90% of those agencies had yet to submit AI plans by the end of 2022.

"Can federal agencies that are unable to account for even their own use of existing AI products establish sensible regulatory frameworks meant to ensure AI safety, privacy and security for the rest of us?" Orrell asked.

In short, Europe has a massive lead on the rest of the developed world in drafting legal boundaries for AI. That gap promises problems down the road.

"AIDA first emerged in June, the committee study hasn't started — that's not until the fall. And that's before regulations get drafted. A year and a half wasted," said Christelle Tessono, a McGill graduate who now works as an emerging scholar at Princeton University's Center for Information Technology Policy.

"And the discussions in this country have been mostly happening behind closed doors because there were no public consultations on AIDA.

"The risk is that this technology is moving so fast, we're going to see a lot of harm here before the infrastructure to regulate is in place."

The lack of specifics in AIDA is intended to give Canada more flexibility, said Ferguson: regulations can be revised without reopening the statute, letting the law adapt to new developments or align with other jurisdictions. But it could also deter investors and businesses looking for certainty.

"It's a double-edged sword, isn't it? It might make AIDA more adaptable," he said. "But the uncertainty makes it difficult for companies to plan compliance and understand the impact of the proposed law on their activities."

That vagueness is already causing headaches for Canadian companies looking to contract for AI services. Participants in a recent panel discussion hosted by McCarthy Tétrault were warned that they'll have to get detailed explanations of methodology from AI providers to comply with regulations that are still years away.

Meanwhile, Europe's new AI rules are extraterritorial: they apply to those who build and deploy AI products used within the EU, no matter where they are physically based. Because the EU is so far out in front on AI regulation and the European market is so vast, many observers are predicting a "Brussels effect" that would see the U.S. attempt to influence the EU regulations before eventually adopting them.

Canada isn't likely to stand on the sidelines as the EU and the U.S. agree on the rules. Tessono said Canada should cut to the chase now and adopt the EU's more proscriptive approach.

It should start, she said, by appointing a truly independent AI commissioner. As currently drafted, AIDA contemplates the appointment of a commissioner who would answer to the industry minister, whose brief is to build the AI industry, not fence it in.

"Concentrate power in the hands of the minister and problems will follow," she said. A well-resourced independent commissioner would be able to respond to emerging problems quickly, she adds — as the Office of the Privacy Commissioner did when it launched its own study of ChatGPT a month ago.

"The OPC did that. Not the minister of innovation, science and industry. That's what you get from an independent commissioner."