Make digital spaces safe
We need a systemic response to cybermisogyny and cyberracism now more than ever.
Stay-at-home restrictions during the COVID-19 pandemic have led to the proliferation of online meetings. Sadly, this has resulted in new modes of cybermisogyny and cyberracism, where hackers enter virtual meeting rooms and abruptly and violently disrupt the meeting spaces with racial and misogynistic slurs and pornographic images.
There have been reports of hackers staging cyberattacks in Zoom meetings targeting Black students and scholars. One recent Zoom-bombing incident took place at a California-based Black scholar’s doctoral defence. During his presentation, racial slurs and pornographic images filled the Zoom conferencing screen. Canadians have experienced similar intrusions. I myself witnessed hackers infiltrate a recent virtual town hall on the gendered impacts of COVID-19 hosted by the YWCA. They blasted racial slurs, misogynistic insults and sexual threats at the organizers in the public chat.
We need to call out these cyberattacks for what they are: a proliferation of online hate rooted in misogyny, racism, homophobia, transphobia and other oppressive forces. The term “Zoom-bombing” minimizes both the nature of the attacks and the level of harm caused. They are not innocuous pranks. Rather, they reveal an oppressive intent to intimidate and silence vulnerable communities who are already targets of online hate.
LEAF’s research project on cybermisogyny focuses on the gendered nature of online hate and image-based abuse against women. A recent Amnesty International study of violence and abuse against women on Twitter found that an abusive tweet targeting a woman is sent every 30 seconds. Black and racialized women were 34 per cent more likely than white women to be mentioned in abusive or problematic tweets. The sexist and racist language used in these Zoom attacks follows the same pattern as other trolling behaviour that targets people who speak out against oppression and inequality.
Freedom from violence and hate is a prerequisite for exercising our constitutionally guaranteed equality rights. As we become increasingly reliant on digital spaces to connect with others during this time of physical distancing, we must take steps to ensure those spaces also remain free from violence and hate. Now that remote work and education are a reality for many, digital spaces have become the only places where people can maintain their personal and professional relationships. With schools closed, students – many of them children – are using these spaces more than ever.
Some have responded to the growing number of hackers in virtual meetings by calling on meeting administrators to understand and make better use of heightened security features. Fair enough. But shifting the onus entirely onto individual users to avoid online hate risks mirroring the victim-blaming sentiment that unfortunately surrounds gendered crimes like sexual assault. Why do we continue to scrutinize victims’ behaviour and backgrounds, instead of asking why perpetrators disseminate hate speech and harmful images?
We need a systemic and structural response to ensure that our digital spaces remain free from misogynistic, racist, homophobic and other oppressive conduct and violence, now more than ever.
The organizers of the YWCA town hall did an amazing job of responding to the attacks, quickly implementing security measures such as establishing a “waiting room” to screen out trolls and disabling the public chat. Now it’s up to institutions, governments and technology corporations to ensure that digital spaces and platforms remain safe and secure for all. They must treat these attacks seriously as a form of online hate.