Canadian Anti-Hate Network
November 2, 2020
The Public Policy Forum will make recommendations to the government on online hate as part of its Canadian Commission on Democratic Expression. It has invited Facebook, human rights advocates, lawyers, and free expression advocates to share their views.
They asked us to make a submission, and this is what we sent them.
---
The Canadian Anti-Hate Network is a nonprofit watchdog organization that monitors, reports on, and counters hate group activity. Our work has identified members of neo-Nazi terrorist groups in the armed forces, exposed the true beliefs and aims of incognito hate groups, and shut many of them down. With over five years of in-depth, daily monitoring of online hate and hate movements, we are sharing our perspective and solutions from the front lines of the issue.
Executive summary:
Online hate is a significant problem that undermines our social and democratic norms and results in harassment, violence, and death. We need to hold the worst of the worst hatemongers, and the platforms that make them a threat, accountable to our laws, with enforcement mechanisms that are available to the public. We must bring back s. 13 of the Canadian Human Rights Act (CHRA) and impose financial penalties on the platforms severe enough that they adopt responsible behaviours.
What we propose is not against free expression; in fact, it protects the expression of the most marginalized, who would otherwise not be able to engage in political and social discourse, and gives the less privileged full agency in online spaces. We must stop prioritizing the free expression of hatemongers over that of decent people.
Background:
There are more members and supporters of hate movements in Canada today than there have been in a generation. In 1990, if neo-Nazis standing on a street corner handed out two pamphlets, it was a good day for them. In 2020, their Canadian-made propaganda is viewed many times more often than a Globe and Mail article.
Many researchers have demonstrated links between online rhetoric and in-person hate incidents. For example, an Angus Reid poll from June found that nearly one in three people of Chinese ethnicity in Canada reported being physically attacked since the start of COVID-19, a period that coincided with a significant increase in anti-Chinese sentiment online.
Muslim Canadians have been perhaps the group most targeted by threats and hate speech over the past five years, but targets shift with the media cycle. During the Wet'suwet'en demonstrations, members and supporters of hate groups talked about murdering Indigenous people and solidarity demonstrators with their cars. When BLM was at the forefront of the media cycle, their rhetoric shifted to anti-Black racism and to justifying police killings of Black people. And, of course, the neo-Nazis blame nearly everything, from COVID-19 to BLM, on Jewish people, sometimes coded with terms like “cultural Marxism.”
Concurrently, anti-women, anti-LGBTQ+ (and especially anti-trans), and ableist language is pervasive even while the movement’s focus is on other hot-button issues. In short, there isn’t a group protected under Canada’s laws that isn’t victimized.
The rise of conspiracy theories such as QAnon has introduced an entire demographic to archaic antisemitic ideas from the turn of the 20th century, prompting individuals who would otherwise not engage in blatant hatred to spread wild theories about blood libel and a world government run by Jewish people. All of this is largely due to the willful inaction of social media publishing companies.
Under our existing laws, both the individual and the platform are responsible for unlawful hate speech, but we have no adequate mechanism for enforcement. Holding the worst of the worst individuals accountable is an important norm-setting exercise in our democracy, and we explain how to do this below. Ultimately, however, hate propagandists would have no reach without the complicity of the social media platforms and the encouragement of their algorithms.
Requiring that large social media publishers remove targeted, malicious harassment, statements that incite or glorify violence towards protected groups, and obvious examples of hate speech is not at all contrary to our Charter right to freedom of expression. It does, however, change who has the opportunity to benefit from that right.
If we do not take action to remove these obvious forms of discrimination, the people who face them every day on social media must wade through a poisoned pool in order to participate in social and political discussions online. This high cost of participation means many never enter social and political discourse, or withdraw from it. Those who do participate pay a higher emotional toll than more privileged Canadians. In short, by allowing hateful speech we are giving preference to the free expression rights of racists, misogynists, and their ilk over those of decent people in Canada who are excluded, harassed, threatened, and even killed because of their gender, the colour of their skin, who they love, or how they pray.
It also means many people from marginalized communities would not even consider entering the political process by running for office and upholding our democratic institutions.
As one of the only organizations attempting to use existing legal, criminal, and regulatory provisions to hold hatemongers accountable, we can state plainly that those provisions are practically useless.
How we hold individuals responsible:
Our highest courts have already:
- Defined hate in R. v. Keegstra, [1990] 3 SCR 697, and endorsed the “hallmarks of hate,” an 11-point guide to determining whether a communication constitutes unlawful hate speech, in Saskatchewan (Human Rights Commission) v. Whatcott, 2013 SCC 11, [2013] 1 SCR 467
- Determined in R. v. Keegstra/Andrews that Charter interests (expression vs. freedom from discrimination) must be balanced, and that hate speech is to be considered low-value expression in this balancing
- Repeatedly upheld our hate speech laws
Despite this, individuals are rarely held accountable. S. 13 of the CHRA allowed members of the public to file complaints about hate propaganda targeting a community. The Canadian Human Rights Tribunal would hear credible complaints, and its panel could order a cease and desist with the force of a court order. The process was relatively fast and, in its time, effective in addressing the worst of the worst hatemongers. Unfortunately, s. 13 was removed from the CHRA despite being found constitutional in Lemire v. Canada (Human Rights Commission), 2014 FCA 18. Without it, members of the public have little recourse; Canada has no group defamation law.
An individual may ask the police to pursue unlawful hate speech under s. 319(2) of the Criminal Code, the wilful promotion of hatred. Unfortunately, law enforcement has demonstrated an extreme reluctance to move forward with s. 319(2) investigations or charges. This is compounded by the fact that members of marginalized communities in Canada, be they racialized individuals, women, or LGBTQ+ people, often feel unsafe engaging the police for protection and intervention, and when they do come forward their concerns are often minimized or dismissed outright.
In short, members of the public have little to no power to defend themselves or their community from what has been well-defined in Canadian law as unlawful hate speech.
This is a policy gap that must be addressed by reinstating s. 13 of the Canadian Human Rights Act and by providing the Canadian Human Rights Commission and Tribunal (CHRC/CHRT) with additional resources earmarked for processing s. 13 complaints in a timely manner.
How we hold platforms accountable:
Social media companies have demonstrated time and time again that they cannot be trusted to self-regulate. Consider former Facebook data scientist Sophie Zhang’s account of how the company wilfully ignored massive, falsified influence campaigns, or how it only recently banned Holocaust denial material. While the platforms will say they don’t allow hate speech, remember that in 2019 Facebook declared it was removing white supremacists from its platform. Some big names were removed, and Facebook got the positive press it wanted. However, we can tell you from the front lines that the white supremacist community itself remained largely untouched. Groups, pages, and individual accounts flourished and continue to operate relatively unchecked.
In 2017, Germany passed its NetzDG law, requiring large platforms to review reports and remove illegal content within a deadline. The problem is that the law allowed Facebook to self-report on this process, and Germany later had to fine Facebook for underreporting complaints of illegal content. In short, the company demonstrated (again) that it cannot be trusted with any part of the enforcement mechanism.
We propose the following principles in regulating large platforms to reduce online hate:
- A good solution today, not a more-perfect solution in two years.
- A focus on targeted harassment, threats, incitement towards or celebrations of violence against protected groups, and obvious examples of racism.
- Platforms cannot self-enforce or self-regulate.
- Penalties significant enough to encourage platforms to adopt responsible policies.
With those goals in mind, we are putting forward one possible solution:
- An independent oversight body, with expert input, sets minimum standards for reporting hate-motivated content that all large platforms must implement.
- All reports of hate violations are sent both to the platform and to the oversight body, which can audit platform compliance and make final rulings on whether content violates the standards, in line with the “hallmarks of hate” guidelines endorsed by our Supreme Court.
- If a platform does not remove hate content within a reasonable time period, it faces significant and escalating fines, which in turn provide continuing funding for the independent oversight body.
We look forward to a full discussion with the Canadian Commission on Democratic Expression at a future date.
Editor’s note: Canadian Anti-Hate Network board member Amira Elghawaby is a member of the commission.