Canadian Anti-Hate Network

Source: Barefoot Communications/Unsplash
Meta made a widely publicized announcement last week that signalled a shift in the company’s policy towards moderation and fact-checking.
In a video released by the $1.56 trillion company, CEO Mark Zuckerberg bemoaned that “governments and legacy media have pushed to censor more and more.” He stated that Facebook and Instagram would be ending their third-party fact-checking program and moving to a Community Notes model—similar to that used on X, formerly Twitter.
He added that “recent elections seem like a cultural tipping point” and that the new administration would be “prioritizing speech.”
The replacement of third-party fact-checking with "community notes" will happen on the company’s platforms in the US over the next few months, followed by a global easing of restrictions on controversial topics.
Many critics have already pointed out that Zuckerberg has been positioning his company to align with incoming President Donald Trump—who previously threatened to jail Zuckerberg—including through a $1 million donation to Trump’s inauguration fund.
Since then, the impact of this move both inside and outside the United States has been in question. There is significant uncertainty about how exactly the new policies will be applied, what they will look like, and the effect they may have on democracies worldwide.
The Canadian Anti-Hate Network reached out to Facebook about how the announcement would impact Canada but did not receive a response by the time of publication.
"We are beginning by rolling out community notes in the US and will continue to improve it over the course of the year before expansion to other countries," a Meta Spokesperson told CBC. "There are no changes to our third-party fact-checking program in Canada at this time."
She added, "The updates on discourse restrictions are global."
In his announcement, Zuckerberg was clear that the “cultural shift” means Meta lowering the sensitivity on filters that search for certain types of content. While he was not specific about what type of content might become more permissible, there are indications. Meta publishes updates to its moderation policies on a transparency website.
The Electronic Frontier Foundation (EFF), a group that advocates for broader internet freedoms, said in a statement on January 7 that it applauded Meta’s efforts but that it would be watching closely. Only two days later, the organization released an updated statement expressing concern over how these changes will impact vulnerable communities.
The EFF has long been critical of Meta’s automated moderation, which has been shown to disproportionately downrank or remove content from marginalized people, including LGBTQ+ content.
“It was our mistake to formulate our responses and expectations on what is essentially a marketing video for upcoming policy changes before any of those changes were reflected in their documentation,” the EFF said in the second statement.
The organization called the new policy changes harmful to Meta’s stated goal of free expression.
Legal Tests
The future will likely hold legal tests for the social media giant. While Zuckerberg told podcaster Joe Rogan that he hoped the American government would defend American companies abroad, Meta operates in many countries with strict laws governing content moderation and what is permissible online.
Europe has very strict rules around moderation and disinformation, contained within its Digital Services Act (DSA). Tech companies that run afoul of the DSA, including Elon Musk’s X, can face heavy penalties for failing to properly moderate content. X may have to pay up to six percent of its annual revenue for its alleged non-compliance.
After nine years in government, the Liberal party has failed to pass any legislation to deal with online harms, including online hate. Its promised Online Harms Act, which would have imposed responsibilities on social media companies similar to those in Europe, is effectively dead with Parliament prorogued and every indication that an election will come sooner rather than later.
In an update to its hate speech policy, first reported by The Independent, Meta removed specific sections banning statements targeting protected groups and added exceptions for anti-trans statements.
"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation," the policy now reads. The update also removed sections that specifically prohibited claims that there is "no such thing" as gay and trans people.
This policy would seem to be incompatible with Canadian law. A 2019 decision by the B.C. Human Rights Tribunal found that denying the existence of trans people is discriminatory and constitutes hate speech, ordered notorious anti-2SLGBTQ+ activist William Whatcott to cease and desist, and fined him.
Election Disinformation and Misinformation
This leaves questions about how disinformation and misinformation on social media may come to impact Canada’s upcoming election and whether Meta will end fact-checking in Canada, as it has in the United States.
Canada and social media companies worked out a series of agreements in 2019, summarized in the Declaration on Electoral Integrity Online. The declaration was meant to outline the shared responsibilities of the government and companies, noting that “some have used these platforms to spread disinformation in an attempt to undermine free and fair elections and core democratic institutions and aggravate existing societal tensions.”
Facebook, Google, LinkedIn, Microsoft, TikTok, Twitter and YouTube all endorsed the message. Part of Facebook’s pledge was to deprioritize political content, a policy that Zuckerberg has now indicated is changing.
Who Does It Hurt?
Despite Zuckerberg’s recent call for a return to free speech as the “roots” of Facebook’s mission, his company has been forced to shoulder responsibility for the damage caused by false information spreading across the platform.
Days before elections in Slovakia in 2023, audio recordings that allegedly revealed a liberal candidate talking about raising the cost of beer and planning to rig the election were quickly determined to be AI-generated. The recordings nonetheless spread online, particularly on Facebook.
Internal Facebook documents also show that the social media company knew that COVID misinformation was a much bigger problem on the platform than Zuckerberg admitted when called to testify in front of Congress.
Facebook’s own internal documents, leaked by a whistleblower, show that the platform had a detailed understanding of how quickly conspiracy theories spread on the site. The company also operated a “whitelist” of public figures with large followings who were not subject to its moderation policies.
An investigation by ProPublica and the Washington Post found that Facebook groups hosted at least 650,000 posts attacking the legitimacy of the 2020 US election between Election Day and the January 6, 2021 riot at the US Capitol building. While there was an internal review, Meta did not implement several of the recommendations its Oversight Board made following January 6. The company denied any responsibility, instead blaming President Trump.
“The notion that the January 6 insurrection would not have happened but for Facebook is absurd,” Pusateri, a Meta spokesperson, said in a statement published in the Washington Post. “The former President of the United States pushed a narrative that the election was stolen, including in-person a short distance from the Capitol building that day. The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them.”
In 2013, Facebook was repeatedly warned that hate speech and conspiracy theories about the Rohingya Muslims in Myanmar were exploding across the platform. Facebook was used to release public announcements in the country, quickly making it a vital part of the national communications infrastructure. Facebook’s growing relevance also led to it becoming a tool for what a UN investigator called “ultra-nationalist Buddhists” who incited “a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities.”
In 2017, 700,000 Rohingya fled their homes as military forces launched a campaign of rape, murder, and arson against them. Amnesty International said that Meta’s “reckless pursuit of profit substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people.”