By renowned social media and data science expert Megan Squire.
The question I get most on the subject of de-platforming racist/fascist/white supremacist types is "If we kick them off mainstream sites, won't they just go to other places and [fill in bad stuff here]?" In this thread I'm going to explain why this is a poor rationale.
First, we need to remember that de-platforming directly undercuts the Bad Guys' ability to achieve their 3 main goals when using social media: Propaganda, Organization, and Trolling/Harassment. Specifically...
De-platforming disrupts propaganda & recruitment. They need to be on big mainstream social media platforms to normalize their ideas & to spread them. Removing Bad Guys from mainstream social media makes it harder for them to appear normal & harder for ordinary users to be recruited.
De-platforming disrupts planning & organization. Platforms that have both a propaganda side (public posts) and an organization side (private chats) allow a seamless pivot between the two, and this is especially dangerous if the private chats are encrypted (e.g., Telegram).
Examples: Facebook Pages = propaganda, Groups/Messenger = organization. Twitter timeline = propaganda, DM = organization. Telegram channels = propaganda, Chats/Messages = organization. You get the idea.
De-platforming from mainstream sites forces the group to use multiple, unfamiliar platforms AND continues to chip away at the facade of normalcy.
De-platforming disrupts harassment/trolling. A major "fun" activity of Bad Guys online is harassing other users, either "normies" or people in some ethnic/religious/etc. group they don't like. Removing them from mainstream social media reduces this toxic behavior for everyone.
An argument I hear a lot is "But if they go to other places won't they just be harder to track?" This usually comes from folks who don't actually KNOW how to track these groups on those other platforms, so they worry the job will get harder for people like me.
But that's a poor rationale. Yes, it might make my job harder initially because I'd have to learn a new platform, but I actually love this challenge. It's FUN for me. Also, there are a few things that almost always offset this cost...
(a) Some "Alt" platforms are EASIER to systematically collect data from (e.g., compare Telegram's open API to Facebook's, which gives researchers essentially nothing), and (b) the Bad Guys are not as adept at using Alt platforms either, so they make a lot of mistakes.
(However, I'm also very much watching the slow creation/adoption of uncensorable platforms, the distributed web, cryptocurrency, etc. These will necessarily shift the disruption discussion away from de-platforming and toward other strategies. But we're not there yet.)
Another poor argument I've heard against de-platforming is the "petri dish" effect: if you kick them off mainstream social media, they'll end up in a petri dish/echo chamber where they will be radicalized faster. But...
..this reasoning discounts the fact that by the time someone gets de-platformed, they're ALREADY engaging in toxic behaviors - they're just doing it with the blessing of a normie platform and hassling the rest of the users there as well.
Anyhow, I hope this thread helps folks understand some of the variables we can use when thinking about de-platforming as a strategy for online safety.
Editor’s note: Also, de-platforming takes away a revenue source for their content creators and propagandists!
Megan Squire is a professor of computer science at Elon University (NC, USA) and a Sr. Fellow for data analytics at the Southern Poverty Law Center. She uses data science techniques to study extremism in online communities. This piece was re-published with Squire's permission.