We expect the uninformed pundit class to stomp their feet over the upcoming anti-hate legislation. So here's a guide to fact-checking them.
Debunking Bad Arguments Against Online Hate Legislation
The Canadian Anti-Hate Network
The government is about to put forward legislation against online harms, including the promotion of terrorism, child sexual exploitation, the non-consensual sharing of sexual images, incitement to violence, and online hate. We were consulted on the online hate portion of this legislation, and made recommendations alongside 30+ other partner organizations.
The Canadian Race Relations Foundation conducted its own independent poll with Abacus Data and found overwhelming support for online anti-hate legislation (80%!), including majority support among “those who self-identify on the right of the political spectrum.”
We have heard all the well-intentioned and bad-faith arguments against online hate legislation over the past couple of years. Some come as knee-jerk reactions from civil libertarians; others come from bad punditry and from the perpetrators of hate speech themselves.
With this guide, you can shut all their bad arguments down.
TAKE ACTION: Bookmark this article. Any time you see one of these bad arguments, post this article in the comments or replies and tag us @antihateca.
- They say: Requiring social media companies to remove hateful content is an attack on free speech and free expression.
Fact-check: No, it’s actually the best thing we can do for free expression.
People used to believe that unfettered free speech was good for society because the best argument would always win in the marketplace of ideas. We now know that people with money, power, or an army of abusive followers take up more space than their arguments deserve, while marginalized people get silenced, threatened, and edged out of the discourse.
Online hate attacks free expression. Absolute free speech undermines free expression. It’s paradoxical, but it’s a fact. Online hate silences women, people of colour, LGBTQ+ persons, and other equity-seeking groups. Their free speech is more important than the hate speech, harassment, and threats coming from racists and trolls.
Right now, equity-seeking groups face harassment and threats just for participating in our society online, especially if they’re talking about journalism or politics. Our current system, where we allow social media companies to self-police, demonstrably benefits racists, harassers, and trolls, and allows them to silence or harm women, people of colour, LGBTQ+ persons, and many others.
“It’s very much about control, like most violence, and about who gets to have a voice,” says Montreal journalist Emilie Nicolas. “I think the goal of the way women and people of colour are targeted by online hate is really to make us shut up.”
Unfortunately, sometimes it works. Journalist Supriya Dwivedi quit her job with Global News Radio 640 because of this kind of harassment, including rape threats against her and her baby daughter.
Diane Abbott, the first Black woman elected to the UK Parliament, “received almost half of all the abusive tweets sent to female MPs,” according to a study by Amnesty International. Other women have decided not to run because of this abuse.
These are not isolated examples.
Online hate creates a chilling effect – seeing women, BIPOC, LGBTQ+ persons, and others get attacked makes members of those groups second-guess whether to enter journalism or politics, or to speak up online at all.
The Supreme Court ruled in R. v. Keegstra and its companion case R. v. Andrews that hate speech is low-value speech and, when weighed against other Charter values like freedom from discrimination, should lose.
- They say: Online hate isn’t a big deal.
Fact-check: According to the Abacus poll, which surveyed people across the political spectrum, 93% of Canadians feel online hate is a problem, and 49% consider it a major problem. A further 74% are concerned about the rise of right-wing extremism and terrorism.
- They say: Social media companies already have moderation. Their policies say they remove hate speech.
Fact-check: Just read this damning New Yorker article: Why Facebook Can’t Fix Itself.
- They say: This is “cancel culture!” You are trying to silence Conservative voices!
Fact-check: There’s an 11-point guideline called the Hallmarks of Hate, endorsed by the Supreme Court, that is used to determine what counts as hate propaganda. The proposed legislation would rely on what’s already been established in law – and it isn’t political.
Further, according to the Abacus poll, a majority of “those who self-identify on the right of the political spectrum” support legislation against online hate.
- They say: S. 13 of the Canadian Human Rights Act was controversial. It was against free speech. It shouldn’t come back.
Fact-check: S. 13 of the CHRA allowed individuals to make a complaint about hate speech that was likely to encourage harm towards a community. A tribunal would hear valid complaints, and could order a cease and desist and a small fine. This law was effective in shutting down several neo-Nazi websites. Of course, the law itself was challenged by one of those neo-Nazis. The Harper Conservatives repealed the law, but a year later the Federal Court of Appeal found that the law was, in fact, consistent with our Charter values in Lemire v. Canada (Human Rights Commission), 2014 FCA 18. Thanks to that political decision, we have no tool to hold unrepentant hate propagandists accountable today.
The value of s. 13 is that it allowed communities to access the legal system to defend themselves against hatemongers without law enforcement acting as gatekeepers. At the time, it was also argued that s. 13 was redundant because of s. 319(2) of the Criminal Code – the wilful promotion of hate.
We don’t expect to see this piece of legislation bring s. 13 back, but there is growing support for its return and we hope to see that addressed in the coming months.
- They say: We don’t need new legislation; we already have a criminal law against hate speech: s. 319(2).
Fact-check: Police are extremely reluctant to pursue s. 319(2) charges. Surveys and interviews of police forces in Ontario by Dr. Barbara Perry suggest this is because police do not view s. 319(2), or the hate element of other hate crimes, as important, and because of the additional work involved. Kevin Johnston was charged with the wilful promotion of hate towards Muslims in 2017 after, among other things, offering a $1,000 bounty for any video of Muslim children praying in schools. He has yet to stand trial, despite continuing and escalating his racist antics. It’s a good case study in the real-world effectiveness of s. 319(2).
- They say: The Public Policy Forum’s Canadian Commission on Democratic Expression opposes proactive takedowns of hate content. They say it hurts free expression. They want a “notice and notice” system modelled on online copyright complaints. It means first politely asking harassers to remove their posts (no, really), and when that doesn’t work, the victim can appeal to an independent body. That body can then review the content and instruct social media companies which specific posts to remove.
Fact-check: This is incredibly out of touch with reality. Harassers aren’t going to remove their abuse when asked. Then there’s the issue of volume: there are too many racist posts, harassing messages, and death threats to deal with one by one. Having a human panel look at every complaint is impossible. We need the companies to leverage their algorithms to remove hate speech. The larger companies already do an excellent job of keeping terrorist content and child pornography off their sites, and according to internal reports they’re capable of doing the same with white supremacist content.
Further, requiring victims to go through a bureaucratic process after they’ve been subjected to racist, anti-women, anti-LGBTQ+, or other forms of hate puts the onus on the wrong person. The platforms should have a duty of care not to expose people to harassment, threats, and danger in the first place.
The independent regulator should hear meaningful, precedent-setting appeals both from people who feel their free expression rights have been infringed upon and from those complaining that social media companies aren’t removing significant kinds of hate speech. This mechanism can provide ongoing instructions to the social media companies to keep them up to date.
There are a couple of good ideas in the commission’s recommendations, however, like imposing a duty of care on social media platforms.
- They say: It’s impossible to create a definition for hate or eliminate hate. It’s a human emotion.
Fact-check: The Supreme Court has endorsed a narrow definition of hate in R. v. Keegstra, [1990] 3 SCR 697.
Hatred is predicated on destruction, and hatred against identifiable groups therefore thrives on insensitivity, bigotry and destruction of both the target group and of the values of our society. Hatred in this sense is a most extreme emotion that belies reason; an emotion that, if exercised against members of an identifiable group, implies that those individuals are to be despised, scorned, denied respect and made subject to ill-treatment on the basis of group affiliation.
Those who argue that s. 319(2) should be struck down submit that it is impossible to define with care and precision a term like "hatred". Yet, as I have stated, the sense in which "hatred" is used in s. 319(2) does not denote a wide range of diverse emotions, but is circumscribed so as to cover only the most intense form of dislike.
- They say: Social media companies will overreact to these regulations and, to play it safe and avoid fines, will remove all sorts of content that isn’t actually hate speech.
Fact-check: They said the same thing would happen with laws against online hate speech in Europe, particularly Germany’s NetzDG. There’s no evidence that it did.
As an additional safeguard for free expression, proponents of online hate legislation such as ourselves have recommended a meaningful appeals process for individuals and organizations who believe their content has been taken down in error.