Old before its time

The EU’s proposed artificial intelligence regulation takes a 20th-century approach to regulating 21st-century technology.

When Europeans do regulation, they don’t mess around. Three years ago, the European Union introduced the General Data Protection Regulation (GDPR) — the most stringent set of rules for online privacy to be found anywhere on the planet.

Last month, the EU rolled out proposed regulations on artificial intelligence systems. Like the GDPR, the AI package is ambitious, extraterritorial (it covers all providers of AI systems in the EU, no matter where in the world they’re based) and includes some whopping great fines for companies that break certain rules — up to 30 million euros or 6% of global annual corporate revenue, whichever is greater.

Because Canada sees itself as an emerging hub for AI technology, the EU’s moves in this area are being watched closely here by both industry and legal experts. How far should Canada go in emulating the European model?

The EU package takes what regulators call a “risk-based” approach to AI. It bans specific AI applications — those that could lead to “physical or psychological harm” through the use of “subliminal” techniques, or that target groups vulnerable to manipulation due to “age, physical or mental disability.” It forbids the use of “real-time” remote biometric ID systems, such as facial recognition, in public spaces. It also prohibits the use of AI for “social scoring” (looking at you, China).

Its major innovation is the idea of ex-ante “conformity assessments” for “high-risk” systems, which must be completed before an AI product is allowed on the market. Its list of high-risk systems includes remote biometric systems and those that govern things like educational placement and credit scoring.

It’s a start, says Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa. “Some applications need to be banned altogether, while others need more stringent regulation,” she says.

“If the risks are high, it makes sense to subject them to close scrutiny before they’re released on the market. We have prior regulatory approval for things like drugs and medical devices because we know bad things can happen if they go wrong.”

But there are some glaring omissions in the EU’s list of no-go technologies, says Kate Robertson, a Toronto-based criminal lawyer who co-authored a study of algorithmic policing for the Munk School’s Citizen Lab last year.

“The European plan is still too permissive. There are too many carve-outs for law enforcement,” she says, referring to the fact that EU regulators would still permit the use of remote biometric identification systems in police investigations of a wide range of offences under European law.

“It doesn’t define neighbourhood predictive policing systems as high-risk, and it should,” she adds, pointing to highly controversial AI systems that are supposed to help police track high-crime areas.

The EU proposal is also frustratingly vague in spots, says Gillian Hadfield, law professor and director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto.

“What do we mean by ‘risk’? There are different levels of risk associated with, say, school admissions versus the use of AI in facial recognition,” she says. “They have to define the risk, and that’s going to be very hard.”

Charles Morgan, national co-leader of McCarthy Tétrault’s Cyber/Data Group, says the Europeans may struggle even to identify the targets for their regulations.

“One problem is in defining who the ‘provider’ of an AI system or service is,” he says. “These days it’s not so simple. Companies that offer end-to-end services are becoming rare, while products based on the assembly of machine learning components from different sources are becoming more common.

“So who qualifies as the ‘provider’ in these situations? The task of assigning liability might be more complicated than anyone thinks.”

Hadfield also thinks the Europeans are taking a far too granular approach to reining in the fast-moving AI field — that what’s needed is not a detailed list of dos and don’ts but general principles that third-party inspectors could enforce.

“Fines, anti-trust suits, regulations — these are 20th-century approaches to 21st-century problems,” she says.

“What you don’t want is a set of detailed standards that won’t be able to keep up with the technology. You want governments to set output-based criteria. Keep the governments’ focus on the goals and let private regulators meet those objectives.

“This is why I don’t like the idea of lists. The technology is changing constantly, and no bureaucracy in the world will be able to keep up with it.”

That constantly evolving quality of modern AI systems — the fact that they “learn” and change as they come into contact with new data — may make it awkward for developers and users to keep up with a European requirement for ex-ante conformity assessments, says Laila Paszti, a partner at Davies in Toronto and a former software engineer who developed machine learning systems for industry.

“The comparison to pharmaceuticals is apt. But pharmaceuticals typically take a very long path to market. AI systems can develop very rapidly and — here’s the key point — the power of an AI system is that it can learn on its own, change, improve,” she says.

“If subsequent improvements in the way a system behaves mean it has to be evaluated again, how does that work? How often does it have to happen? That could significantly hamper the developer’s returns and the benefits of an improved AI system.”

Another core concern of AI developers in Canada, says Paszti, is consistency. Canadian firms looking to sell their products in Europe want some degree of harmony between Canadian and EU regulations. Canada doesn’t have a regulatory framework specifically for AI — but the Ontario government recently launched consultations on a new set of guidelines for the province.

In its recommendations last year for reforms to the Personal Information Protection and Electronic Documents Act (PIPEDA), the Office of the Privacy Commissioner of Canada said the law should require that developers “design AI systems from their conception in a way that protects privacy and human rights” — an approach that aligns with that of Europe. The federal government needs to step up with a regulatory template that can “serve as a standard for the whole country,” says Paszti.

“The more Canada can align its rules with major markets like the EU, the fewer divergent regulatory regimes Canadian companies will have to comply with.

“The federal government should look at what’s happening globally and align our rules with those elsewhere as much as possible. And to the extent that Canada can influence the development of those rules abroad for a more outcomes-based, less prescriptive approach, it should do so.”