Hate speech accountability
Internet service providers already prohibit hate speech in their terms of service. But the rules have to be enforced.
In theory, there is no difference between real-life hatred and the virtual kind. But in practice there is. Online messages can normalize hate propaganda, spread its nefarious influence far and wide and radicalize people in all corners of the globe. It’s time Canada updated its legislative framework to deal with this growing problem that causes real harm to real people.
In his January supplemental mandate letter, Prime Minister Justin Trudeau instructed Heritage Minister Steven Guilbeault to “take action on combatting hate groups and online hate and harassment, ideologically motivated violent extremism and terrorist organizations.” It’s expected the government will introduce legislation this spring.
In 2013, the Supreme Court of Canada ruled in its Whatcott decision that “hate propaganda opposes the targeted group’s ability to find self-fulfillment by articulating their thoughts and ideas” and places “a serious barrier to their full participation in our democracy.” Hate propaganda does not just marginalize certain groups; it can also force them to argue for their basic humanity as a condition for participation.
“What happens online has consequences in real life,” says Richard Marceau, the general counsel for the Centre for Israel and Jewish Affairs. “We saw it in Christchurch, we saw it in Pittsburgh, in many places.”
Studies show that young people are self-radicalizing online, he adds. “Not just here, everywhere around the world.” The power of social media, a force multiplier across the internet, is a concern. “Just look at what happened in Washington not too long ago. It’s real, it’s powerful and it can influence people and make them act violently.”
A public opinion poll released in January 2021 showed that Canadians from racialized communities are three times more likely to be the target of online hate than non-racialized Canadians. Asked whether they agree social media platforms should be required to remove hateful content promptly, 80 per cent of respondents said yes. Sixty per cent support federal legislation to prevent racist and hateful rhetoric online.
Three CBA sections (Constitutional and Human Rights; Criminal Justice; and Sexual Orientation and Gender Identity Community) argued in an October 2020 submission for “civil and criminal legal remedies to combat online hate” balanced against the right to freedom of expression.
Among other measures, the submission reads, the government should introduce an improved version of section 13 of the Canadian Human Rights Act (CHRA), which was repealed in 2014. The trouble with the old version of s. 13, according to human rights lawyer David Matas, is that it allowed complainants who “might just have heard something from somebody else that was completely inaccurate” to lodge a formal complaint. In other words, complaints could be based on rumour.
Navigating issues around freedom of expression is a delicate exercise. “The right to freedom of expression is not an absolute any more than any other right,” Matas says. “So the question is, how to draw the line so that the rights of both those who are concerned about freedom of expression and the right of people who are concerned about freedom from incitement to hatred are equally respected?”
The CBA submission attempts to square that circle by proposing to reenact the substance of section 13 of the CHRA with additional procedural safeguards “so the law does not become a vehicle to harass legitimate expression as the previous section 13 had been,” Matas explains.
In practice, the CBA recommends the Canadian Human Rights Commission screen all complaints and dismiss them at an early stage if they do not meet a certain threshold. “There needs to be a gatekeeper for access to the civil law remedy as there is for the criminal law remedy,” the submission reads. The criminal law remedy already requires the consent of the Attorney General of Canada; likewise, access to the civil law remedy should be constrained by the Commission.
The Commission should also have the power to award costs, which would discourage frivolous complaints.
Treat internet providers as publishers
Another issue with the old section 13 of the CHRA was that it excluded internet service providers (ISPs) from its scope. A reenacted section 13 should deliberately reverse course, the CBA sections agree. “When an internet provider allows a person to use their services, the provider is communicating what the person posts on the provider’s platform,” reads the submission.
Matas has little time for those who complain about censorship. “It’s kind of missing the point,” he says, given that what the CBA recommends is to add legislative backup and institutional expertise to what’s already contained in the terms of service of ISPs. “No disrespect to the internet providers,” he adds, “but when it comes to this area, they don’t know what they’re doing. They know the technology, but they don’t know hate speech and that’s an expertise that exists in the human rights commissions and we should be using it.”
Marceau would go further and make ISP board members and executives “personally accountable” for regularly or repeatedly allowing hateful content on their platforms.
Marceau adds that technology companies already have legal obligations to remove content that violates copyright or otherwise breaks the law, such as child pornography or the unauthorized use of songs and movies. “Companies have the technology to remove content. They could use it to remove online hate,” he says.
Ultimately, it’s about making sure that what’s already prohibited under the ISPs’ terms of service is enforced not just in theory, but in real life too.