Ottawa Occupation Shows Why We Need Anti-Hate Legislation

Every day the government allows social media companies to self-regulate, Canadians are getting misled, enraged, and absorbed into the far-right and Covid conspiracy movement. Now a far-right mob has occupied the capital.

Canadian Anti-Hate Network



Ottawa has now been occupied for a week with no end in sight. The occupiers are led by a cadre of organizers and streamers who are connected with Islamophobia, antisemitism, racism, and incitements to violence. Many among them want to see the Prime Minister and public health officials put on trial for treason, or executed. While the majority of participants are not assaulting people in the streets, there are those among them who have confronted passersby, physically attacked journalists, waved a Nazi flag, assaulted a houseless person, hurled racial slurs, and taken food from a soup kitchen.

The people supporting this far-right occupation are both victims and perpetrators of misinformation. Most of them find their way to the movement beginning on mainstream social media platforms. The algorithms notice that they engage with conspiracy and far-right content, feed them more of it, and suggest groups for them to join. Fellow travellers say the unvaccinated are being persecuted on the same level as Holocaust victims and that drastic action is necessary. Eventually, they’re angry enough to drive to Ottawa.

It will be difficult, if not impossible, for members of the intertwined antivaxx and far-right movement to come back to reality. New people are finding them every day. With online harms legislation, we may be able to disrupt that pipeline by making it harder for dis/misinformation to find people. We may be able to build a fence of protection both online and offline around the groups that the far-right slanders, harasses, threatens, and attacks. We have to try.


We urge you to contact your MP, send them this article, and tell them it’s time to make the social media platforms at least a little socially responsible.


The government was proposing a complicated regulator that would try to address several kinds of online harm, like child sexual exploitation. Its thorniest and most controversial issue was that of regulating hate speech. The technical paper envisioned a body that would hear complaints about pieces of hate content and issue rulings on whether each post stays up or comes down.

The government received a lot of critical feedback on its plan. We had our issues with the technical paper too.

We urge the government to look at the convoy outside the window to understand why we cannot put this issue on the back burner. We must push forward with a better and (hopefully) more popular plan.

One in five Canadians is directly affected by online hate: harassers use hate speech to silence and scare women, BIPOC, LGBTQ+, First Nations, Métis, and Inuit peoples, and others.

It’s important to note that 80 per cent of Canadians want legislation to curb online hate. However, the average person is not writing a feedback letter to the government about a technical paper. Some of the critical letters are coming from the companies themselves; those should frankly be thrown out, because the companies demonstrably can’t be trusted. But more of that criticism is coming from dogmatic free speech academics and organizations.

These civil liberties advocates aren't trying to do harm, or to do nothing, but they seem not to understand that the prevalence of hate speech, and the way it silences people, is the free expression issue of this generation. We would rather see a small number of posts that are not quite hate speech become a casualty of legislation than have hate speech continue to attack and silence women, BIPOC, LGBTQ+, First Nations, Métis, and Inuit peoples, and others. It's time for the speech of equity-seeking groups to be prioritized over that of racists, abusers, and neo-Nazis.

The remaining critics are members and groups from communities actually facing hate who are afraid of unwarranted police involvement with the regulator, and of the complaints regime being wielded against them by bad actors. They’re right, and we should all be listening.

Additionally, and fundamentally, the plan put forward in the technical paper seems to address the wrong problem.

The problem is not that the complaints mechanisms on Facebook and other platforms are slow and poorly adjudicated. We shouldn’t be handing victims a homework assignment to get hate content taken down long after it’s already done its damage anyway. The problem is that there are people who want to harass, abuse, and incite violence with hate speech, and that platforms have decided to allow it to be posted in the first place, and even amplify it.

Like everyone says, we really need to go after the business model of these companies, which have decided to prioritize engagement at the cost of our democracy and the safety of our neighbours. The platforms have been purposely getting people angry and funnelling them into dangerous echo chambers.

We sat with this for a while, and we think we have a solution.

We’re calling it the ombudsperson approach.

We’re recommending that the government create an ombudsperson/regulator, with broad investigatory powers. They can compel evidence and testimony from the social media companies and take a hard look at their algorithms and business practices. They can also issue recommendations.

Facebook’s own employees warned that after the company changed how it measured the success of a post in 2018, it incentivized angry engagement and misinformation. In 2021, another employee found that new users were being quickly pushed by its algorithms towards QAnon groups. Internally, Facebook staff have been warning about problems and proposing solutions for years. We also know, thanks to Facebook whistle-blower Frances Haugen, that the company chooses not to implement fixes that would hurt engagement, at least not in any reasonable time frame.

Meanwhile, hundreds of thousands of people were being radicalized into the far-right, estranging them from their families, and movements recruiting on Facebook were culminating in murders, mass murders, and incidents like January 6th and the Ottawa convoy.

If the ombudsperson had access to this information in real time, they could, for example, issue a timely recommendation that the company undo the algorithmic changes that incentivized angry, divisive posts and misinformation.

We propose that if the company doesn’t want to follow a recommendation, the ombudsperson be empowered to apply to a court to make it an order. The court would apply two tests. First, is the order consistent with the goals of the regulator? Second, is it consistent with the Charter and previous rulings? Here, groups can intervene and put their arguments to the court.

If the order is granted, and the company doesn’t follow the order, we propose that they face significant fines (the same as envisioned in the technical paper).

Of course, this is just a starting point and a framework. But we would like to keep it simple and move it along.

Here’s the upside: this is way faster than putting in place a complicated complaints regime, it does away with the contentious police involvement and 24-hour takedown pieces, and isolated free speech vs. hate speech arguments can be had later, in front of a court.


We urge you to contact your MP, send them this article, and tell them it’s time to make the social media platforms at least a little socially responsible.


The government asked for our feedback on the technical paper, and for a path forward. This is what we sent them.

CAHN Response To Proposed Online Harms Legislation

What we like about the proposals so far:
 

  • We like that the government is taking the issue of online harms seriously.

  • We like that the government is intending to establish a new regulatory body responsible for this issue. We also like the idea of an Advisory Board, although we would probably construct it somewhat differently than how it's described in the technical paper.

  • We like the emphasis, in the technical paper, on creating new transparency requirements for platforms and new reporting requirements.

  • We like the requirement to make it easier for users to report harmful content.
     

What we don't like:
   

  • We are concerned that the proposed approach to handling online harms doesn't seem to be rooted in an overall, government-wide digital strategy. The government has a number of initiatives underway that touch on digital matters, including this new approach to handling online harms, new privacy legislation, changes to the Broadcasting Act, and an initiative related to the funding of journalism. All these efforts need to be animated by a coherent strategy, and we are concerned that they are not. At this point, it is not clear what the government's vision is for Canada's digital future. That's a problem. It's hard to imagine us making much progress if it's not clear where we're trying to go.

  • As imagined to date, the proposed approach to handling online harms is fundamentally reactive in nature. It is aimed at identifying and removing harmful material after it has been published. We think this is fundamentally the wrong approach because it is unresponsive to the true nature of the problem. The reality is that there is a continuous firehose of harmful material being posted to the internet by many different bad actors, and there is a multitude of harmful content available online at any given moment in time. Any approach that aims to solve this problem reactively, by creating a process for the evaluation and removal of individual pieces of content post-publication, will fail. It will fail because i) it will rely too much on individuals reporting harmful material, which is an inappropriate burden on those people, ii) the majority of harmful material will never be reported, iii) it is impossible to design a takedown process that provides sufficient protections against unjustified takedowns and also speedily removes content that is actually harmful, iv) it will require the creation of an enormous bureaucratic machinery and will impose serious administrative burden on many different parties, and v) it does nothing to change the incentives for the platforms, which currently have a strong business incentive to continue allowing sensationalist material, including hate speech and other harmful content. Additionally, any approach primarily or solely focused on content takedowns will inevitably trigger serious free speech concerns, as we have seen.

  • We don't like the 24-hour takedown requirement. Civil liberties groups are concerned that the 24-hour takedown creates a requirement for platforms to remove/take down online speech. They believe all speech is good speech, and ought not to be constrained. That is not our concern. Our concern is narrower and more specific. We don't like the 24-hour takedown because we believe it will have the unintended consequence of providing a mechanism enabling bad actors to make false reports of hate speech against material posted by equity-seeking groups, resulting in that material being taken down. To be clear: Our concern is that the 24-hour takedown will be abused, resulting in platforms taking down material that is not hate speech, and which should not be taken down.

  • We don't like automatic/mandatory reporting to law enforcement. Law enforcement has existing avenues for accessing the information it needs to do its work, and we don't believe that massively increasing the amount of information sent to it, in an automatic/mandatory way, is necessary or would be a net benefit.
     

What we think should be considered going forward:
 

We believe the best path forward would focus on the creation of a new regulatory body, with three main emphases, as follows.

  1. A solid emphasis on creating new transparency and reporting requirements for platforms. The platforms are currently a black box. Their practices, and the societal implications of their practices, are not yet well understood. Other jurisdictions have been significantly ramping up transparency and reporting requirements, and Canada should do this too.

  2. The creation of a new affirmative obligation for platforms at a high level, requiring them to consider the societal effects of their practices and take steps to mitigate harms in the public interest. This is a practical acknowledgement that the platforms know more (and will always know more) about their operations than any regulatory body can possibly hope to know. We have learned from whistleblowers that there are people and divisions inside social media companies whose responsibilities include user protections, and that those people and divisions are routinely overruled in favour of what best serves the companies' business interests. In creating affirmative obligations to consider user interests, we would be aiming to add weight to those internal user advocates, to tilt the balance internally towards better user protections, and better protection of the public.

  3. The creation of a mechanism whereby the new regulator is empowered to routinely and continuously investigate emergent harms and create orders protecting against societal harms. For the purposes of this note, we are calling this "the Ombudsperson approach." It is further detailed below.
     

The Ombudsperson Approach
 

We have been socializing the idea of an ombudsperson approach with other civil society groups, and it’s finding support. Some of the ideas here are, in fact, already in the technical paper.

Here's what we've been imagining:

The ombudsperson (and their office) would be empowered with broad investigatory powers so that they can compel evidence and testimony from large online platforms.

They would be empowered to make public recommendations, in keeping with a charter of values.

Where the companies do not follow the recommendations, the ombudsperson may apply to a court to make those recommendations into orders.

The court has a short time window to apply a two-pronged test. First, does the proposed order align with the charter of values? Second, is the proposed order in keeping with the Charter and previous jurisprudence around hate speech?

Interveners could make submissions at this stage.

If the order is granted, and the company or companies do not comply, they will be subject to the same strict financial penalties laid out in the technical paper.

The ombudsperson may take complaints as evidence of an issue, but does not address individual pieces of content.

In an emergency situation (e.g., January 6th), the ombudsperson may make an emergency order which goes into effect immediately, but also convenes an emergency hearing with the court to either uphold or quash the order.

We would leave it at that. The ombudsperson approach would move things forward in a real way, while leaving the individual issues that are causing so much opposition to be dealt with at a future date by the ombudsperson, interveners, and the court.
