When there is no simple truth
Governments need to better understand the way misinformation spreads before they can get creative about countering it.
This past April, the Biden White House announced it would tap Nina Jankowicz as the first-ever Executive Director of the Disinformation Governance Board, a unit inside the Department of Homeland Security.
Jankowicz was one of the foremost experts on Russia's weaponization of disinformation to meddle in American democracy. According to the Department, she was in an ideal position to "coordinate countering misinformation related to homeland security."
But within a day of the announcement, Jankowicz and the newly constituted board came under fire.
"The people that the Biden administration thinks are the real threat to America — it's not the drug cartels, it's not foreign threats — it's you, it's the American people," Senator Josh Hawley said in denouncing the board.
Fox broadcaster Tucker Carlson said it was tasked to "monitor domestic speech, conspiracy theories, about the validity and security of elections and COVID-19 vaccines." He went on to say the Biden administration would "suppress" legitimate dialogue in America.
Even those without a long history of peddling conspiracy theories ganged up on the new agency. "Presenting anyone from the government as an arbiter of truth in 2022 — much less defining 'disinformation' in a way that more than 40 percent of the population would agree with — seemed doomed from the get-go," wrote Benjamin Hart in the Intelligencer.
Just three weeks after first announcing it, the Biden administration said the board's mission had been "grossly mischaracterized," but killed it just the same. So ended just one of a long list of ideas floated to short-circuit the scourge of disinformation that has disrupted governments in the West.
In Canada, the Trudeau government launched a series of consultations to inform a law that would tackle the online hate speech seeking to "undermine Canada's social cohesion or democracy." Opposition mounted, however, when it became clear that Ottawa was looking to criminalize some of that speech outright. The consultations have since been rebooted, though there is no timeline for when such a bill may be tabled.
The mounting problems of misinformation, disinformation, and conspiracy theories are at least now being taken seriously as a threat to democracy. The policy tools devised to fight those interlocking problems, however, have proved woefully ineffective to date. Virtually every new solution put forward is met with furious anger, sometimes falling victim to the very disinformation the policies seek to contain.
Paul Butcher, a policy analyst for the European Politics and Institutions Programme, summarized this problem in 2019: "While some motives driving the spread of disinformation, such as the monetary incentive posed by advertising methods, are relatively simple to counteract, the demand for anti-establishment messaging created by a crisis of mainstream liberal democracy is a symptom of a much more worrying structural cause," he wrote.
"It also means that any attempt to fight back against the disinformation problem must be very careful not to exacerbate the Eurosceptic and populist views that have enabled it in the first place. Existing measures, such as legislation with a chilling effect or EU bodies with poorly-defined roles, run exactly this risk."
That warning has only grown more acute recently, particularly in Canada. It was only in February that the Freedom Convoy shut down the machinery of government, riding a wave of conspiracy theories about the pandemic and the supposed dual allegiances of the Trudeau government. Things have only worsened since then, with many of the convoy's talking points now forming the basic politics of Conservative Party leader Pierre Poilievre. The situation has deteriorated even faster south of the border.
"The disinformation, and the effects of it, has got so bad that it has reached a point where citizens would start riots and have no faith in the voting system," says Barry Sookman, senior counsel at McCarthy Tétrault, pointing to the U.S. "It's imperiled the legal transfer of power, which is an essential element to democracy."
Sookman, who has long specialized in areas where the internet and law meet, says it's high time politicians and lawyers start thinking "creatively" about how to fix this problem.
"No plan is worse than a plan, and waiting for a solution is worse than an imperfect solution," he told CBA National.
Understanding the bigger picture
Whenever a government approaches the problem of harmful information — particularly online hate speech and disinformation — the first impulse is to make the problematic speech illegal.
German law, for example, bans Nazi iconography and prohibits Holocaust denialism. The Trudeau government is likely taking its cues from Berlin as it tinkers with its anticipated online harms law. France, for its part, has adopted a law banning misinformation in the context of an election. Corporations, bowing to pressure from the state, have imposed their own rubrics for taking down false, misleading, and violent content.
Those efforts haven't proven all that successful. Germany, facing a rise in right-wing extremism, has introduced even stricter measures to hold companies liable for extremist content on their platforms. Critics, including the Center for Democracy & Technology, have warned that the rules carry "disproportionate risks to free expression and privacy." There's no evidence that France was any less exposed to misinformation around the coronavirus than the rest of the world.
In taking steps to restrict the spread of misinformation, Google, Facebook, and Twitter have appeased virtually nobody. While they remain rife with misinformation about the COVID-19 vaccines, Twitter and Facebook have been hammered for suppressing news stories, like those about Hunter Biden's laptop, that critics say are legitimate topics of conversation.
Part of the problem is that governments are operating in the dark, says Eve Gaumond, a master's student at Laval University and an affiliate at its Observatory on the Societal Impact of AI and Digital Technologies.
"For the moment, I think that we don't completely understand the whole picture," she says, adding that there is a lack of diagnosis on the origins, motivations, and finances of those who spread misinformation and disinformation. "And we really need more transparency to understand it better."
Without a better sense of how and where to intervene, governments and corporations tend to get heavy-handed. The New Zealand government blocked online message boards 4chan and 8chan after their users praised, and perhaps inspired, a mass shooting in Christchurch in 2019. The ban was dropped shortly after, only to be reinstated after a copycat shooting later that year. Those sites, however, remain accessible to New Zealanders with the help of readily available VPN technology.
The policy options available to Canada are more limited than those open to many of its European allies. When Ottawa added language to the Canada Elections Act in 2018 to forbid the deliberate sharing of misinformation in the context of an election — similar to France's law — it was quickly challenged. Ontario's Superior Court of Justice found the section violated section 2(b) of the Charter of Rights and Freedoms, striking down the provision.
The court left the door open to a revised version of the law that more narrowly targeted the deliberate sharing of misinformation. Still, the decision showcased just how tricky such regulation can be.
"I'm not a huge fan," Gaumond says of that type of statutory prohibition. "I would prefer that we regulate stuff outside of expression by itself."
While much has been made of the impact of social media companies, there is a growing consensus that the problem is much bigger than a few platforms.
For starters, there is a complex network of social media platforms — some based in jurisdictions that are unresponsive to Western regulation, including Russia — peddling misinformation. Media companies, both traditional and online, increasingly traffic in conspiracy theories and misinformation.
"It's certainly not a social media problem as much as we'd like to think it is," Gaumond says. "It's a trust crisis problem."
At the same time, it's undeniable that the current crisis of faith in our systems is quite novel.
"I don't think that analogies to traditional media are necessarily apt," Sookman says. "What you have are quantum differences in terms of the harms that are being caused."
Liable for what is said
While traditional policy solutions have proved, to varying degrees, either too gentle to tackle the scale of the problem or too ham-fisted, there are other strategies to consider.
"Instead of preventing people from actually saying stuff, I think we should [better] regulate campaign finances," Gaumond says. Giving people more information about where certain narratives are coming from, and unmasking groups that have spread this type of misinformation, could be effective.
After being identified as an instrumental tool in the Russian government's attempt to meddle in the U.S. 2016 Presidential election, Facebook began sharing just how much money is spent on various ads. However, the platform does not disclose who owns the Facebook pages — some of which have been instrumental in whipping up conspiratorial fervor in Canada and abroad.
While trying to criminalize certain forms of speech risks further weakening trust in government, upping the transparency and accountability on those who spearhead these campaigns could help reveal the man behind the curtain, as it were.
"You can, somewhat, fight misinformation with information," Sookman says. Governments have not gone on the offensive for whatever reason.
There are other options on the table. In a Texas courtroom earlier this summer, the families who lost children in the Sandy Hook mass shooting were strikingly successful in holding broadcaster Alex Jones to account for his baseless conspiracy theories about the attack. On his network Infowars, Jones alleged that the government concocted the attack and that the children who lost their lives were actors.
The families banded together, suing Jones for defamation. One trial alone saw Jones ordered to pay nearly $50 million in compensatory and punitive damages.
"If you look at the cause of action against Alex Jones," Sookman says. "It really isn't a defamation case."
Indeed, the families who sued Jones alleged a campaign of harassment and an invasion of privacy, in addition to defamation. Taken together, it was a lawsuit alleging the harms brought on by these conspiracy theories.
"To me, the law of defamation is not suited to deal with an inaccurate portrayal with inaccurate facts or argument," Sookman adds. But that doesn't mean there can't be a civil remedy specifically for this kind of extreme disinformation.
"If I had to give you an analogy to where we are: There was a time when privacy interests were generally not protected under the law," he says. "You had to try and fit privacy causes of action into other causes of action torts."
Today, courts in Canada are slowly recognizing invasion of privacy as a tort.
Figuring out how such a tort should be designed is tricky. "Modern torts are generally formulated as a cause of action that an individual can bring, where an individual is being harmed," Sookman says. "Whereas what you have here is harms that are occurring very broadly across members of the public that are being deceived or misled."
The Sandy Hook case is perhaps the perfect example of how that society-wide misinformation could be connected to specific, individualized harms.
Companies that serve — and in some cases, promote — this kind of misinformation could be captured by this hypothetical tort, he says.
"It's kind of like the causes of action against the cigarette companies," Sookman says.
But going too far in outsourcing the solution to private actors carries risks, Gaumond says. "I don't think that we should ask the platforms to become arbiters of truth," she says. "That's not their job." Indeed, the misinformation problem is increasingly migrating away from platforms that have taken some measures, albeit insufficient ones, to combat it, and toward platforms that have rejected any kind of content moderation as a point of principle.
Gaumond concedes, however, that creating corporate liability for the edge cases — promoting harmful speech, for example — might be a necessary part of this overall fix.
A coalition of the willing
These roundabout strategies could effectively introduce real reporting requirements and accountability for the peddlers of misinformation. As Sookman says, each policy tool taken separately will not be a "silver bullet." Instead, "it's going to be a series of things that will be mutually reinforcing."
That will likely have to involve countries addressing the problem in concert.
The European Union has shown how a unified front can enact change, even beyond its borders. The General Data Protection Regulation (GDPR) led to a fundamental shift online, setting a standard whereby companies let users choose which information to share as they browse. Companies have opted to universalize their compliance measures rather than offer different services depending on the jurisdiction.
While it is unlikely that Canada could force such a change on its own, a coalition of countries may have better luck.
Gaumond says the world should be skeptical of making like the EU and "trying to push its own agenda on every other nation," particularly when it comes to matters of free speech, where national perceptions of what is allowable speech shift so drastically from one country to the next. At the same time, she says, a broad coalition — including the EU, Australia, New Zealand, and others — could reach a consensus that doesn't lead to Europe, for example, incidentally imposing its preferred strategy on Canada.
Sookman takes it a step further. He says an international accord could put real weight behind the fight against this disinformation pandemic — and could also help set the rules of the road for what constitutes disinformation and when it rises to the level of requiring government intervention.
He phrases it as a hypothetical "human right to be free of misinformation or disinformation."
One way or another, this hodgepodge of fixes needs to come sooner rather than later.
"There's a general recognition that misinformation, particularly on a wide scale, ends up with a significant portion of the population believing that something that is untrue, and basing decisions based on something that is untrue," Sookman says. That has a "caustic, decaying, effect on democracy, democratic values."