Could Canada lead the world in AI regulation?
With the U.S. moving towards de-regulation, creating a comprehensive AI policy here that encompasses privacy, innovation, and cybersecurity would give businesses much-needed certainty

Upending is the name of the game for the current U.S. administration: in January, President Donald Trump issued an executive order charting a new course for artificial intelligence, one that moves away from safe, secure, and trustworthy development and use.
The order, “Removing Barriers to American Leadership in Artificial Intelligence,” revoked former President Joe Biden’s 2023 executive order, which called for federal agencies to review their use of AI and ordered the U.S. National Institute of Standards and Technology (NIST) to create AI safety guidelines.
In a time already rife with uncertainty, Trump’s order reshapes the direction of AI development, declaring that existing AI policies hinder innovation. It is meant to “sustain and enhance America’s AI dominance” and mandates the creation of a national AI action plan within 180 days. David Sacks, a former PayPal executive and the U.S. special advisor for AI and crypto, will be part of the team drafting that plan.
The order is short on specifics, so how this will play out remains to be seen. What is clear, however, is the growing trend towards de-regulation in AI policy and the need for Canada to respond.
“Trump's executive order marks a return to public-private partnerships,” says Ana Brandusescu, a PhD student at McGill University specializing in AI governance and a Balsillie scholar at the Balsillie School of International Affairs.
“It establishes a policy of less government oversight of AI, which is problematic because of the profit-driven agendas of AI companies and their continuous rise in power and capital.”
It also removes important elements Biden had incorporated, including protections against employment and housing discrimination caused by AI bias, the promotion of equal opportunity and anti-discrimination legislation, and other measures to shield consumers from AI bias. In contrast, Trump’s order mentions the need to develop AI free of “ideological bias or engineered social agendas.”
“This signals the end of government responsibility and accountability for biases like race and gender,” she says.
The ramifications are already evident. Brandusescu says that the U.S. Equal Employment Opportunity Commission has removed documents related to AI and federal anti-discrimination laws from its website.
“During this time, it’s important to organize and create new unions that include specific protections against AI biases and harms in workers’ jobs,” she says.
In the past few years, Canada and the EU have taken the opposite approach, striving to create new AI policies. Canada began working on its framework a few years ago, culminating in the proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, which would update consumer protection and privacy laws. AIDA has been criticized for being unclear about which AI technologies the legislation would apply to and for the lack of public engagement in its creation.
Meanwhile, the European Union continues to lead in tech regulation. Last year, the bloc passed the Artificial Intelligence Act, which subjects high-risk AI systems to stricter requirements. These systems include those related to public infrastructure (e.g. water and electricity systems), education, employment, law enforcement, democratic processes (e.g. voting practices), insurance and banking. The legislation requires AI providers to have a risk management system, technical documentation of how the system works, human oversight, information about the training data used and a quality management system.
In some ways, Trump's executive order represents a growing trend in national security and tech regulation, says Renee Sieber, who has a background in geography and computational studies. She’s been studying public engagement in AI policy through McGill University’s AI for the Rest of Us project and sees countries shifting away from regulation.
In February, she attended the AI Action Summit in Paris, the first major international artificial intelligence conference since the Trump administration announced its shift.
“There was an unmitigated fear at the summit that if we have tech regulation, we’ll fall behind,” says Sieber, an associate professor of geography at McGill.
“Countries are more concerned with how technology affects national sovereignty and conflict at a very detailed level.”
They are focused on developing a domestic workforce with the skills to build AI systems and on using that capacity to influence other countries, as well as to prevent rival jurisdictions from using their AI dominance to spy on users. For example, the U.S. and Canadian governments are concerned that China’s DeepSeek, a free generative AI chatbot, could be used to steal sensitive data about their citizens.
The issue is complex because most AI companies are multinational: they scrape data from various jurisdictions and outsource the training of these systems to workers in different countries.
Sieber says the key to protecting Canadian data is intellectual property regulatory reform and reviewing Canada’s dependence on American software such as Amazon’s AWS and Microsoft’s Azure.
“IP protection has been a longstanding problem for Canada, and it is time for government to move beyond self-regulation,” she says.
“We need to protect our data through legislation and focus on supporting Canadian technology.”
The goal should be for Canada to have sovereignty over its data and how it’s stored and used. Just as the federal government decided Canada needed its own capacity to manufacture COVID-19 vaccines, Sieber believes protecting our data must be a priority.
“Countries need sovereignty over tech and data, which are national assets,” she says.
“We need to control the transfer of data and which data a system has been trained on.”
Trump’s AI order also signals an end to the development of U.S. federal legislation, meaning individual states will have to carry the burden. Nearly 700 bills were introduced in American state legislatures in 2024, aimed at everything from AI in employment law to requiring AI developers to document how they prevent bias in their algorithms. For Canadian businesses, that means navigating a patchwork of regulations, and small and medium-sized companies don’t have the resources to keep up.
David Krebs, a partner and national leader for privacy, data governance, and cybersecurity at Miller Thomson, says the situation mirrors that of American privacy legislation: while several U.S. states have consumer privacy laws related to data protection, no comprehensive federal law exists. That lack of regulatory certainty means businesses will be cautious about expanding into new markets.
“We’ve already seen massive fines against big tech companies in the EU,” he says.
“A security breach could sink a small company. For big companies, they’re already ahead of the regulations and can move on.”
This could be the time for Canada to become a world leader in AI regulation. A comprehensive AI policy that covers privacy, innovation, and cybersecurity would give businesses much-needed certainty in a rapidly changing environment.
“Some companies saw (the EU’s General Data Protection Regulation) as an overburden and a business killer,” Krebs says.
“If we could get AI regulation, that could help build consumer trust and increase adoption. It would help companies that are overly cautious about regulatory compliance.”
Law firms do have some certainty in terms of regulation. Law societies and the courts have stepped in to regulate how lawyers should use generative AI. The problem is that there is little regulation of how AI systems are developed. Systemic bias in training data and a lack of public engagement in policy development are major barriers to creating systems the public can trust. With the U.S. federal government moving away from regulation and Canadian AI regulation still up in the air, law firms must decide the right approach to using the technology.
“Bad regulation is a problem for the advancement of tech,” says Krebs.
“When we use AI in legal tech, we need to be very confident in what we’re using. Voluntary codes are not enough.”