Photo: online, disinformation spreads like….
Continuing today’s disinformation theme (readers-via-email, be sure to check out Deepak Puri’s piece below), let’s talk about neo-Nazis on social media:
Over the years, these groups used an evolving set of organizing techniques to spread extremist messages to larger and more mainstream groups of people online. They found ways to game the algorithmic feeds of Facebook, Twitter, and YouTube, so that their new audiences didn’t necessarily know they were being radicalized. And there’s reason to believe this is only the beginning, since these platforms tend to amplify provocative content.
The quote comes from an examination of digital extremism published on Vox/Recode, which looks at neo-Nazi and white supremacist digital organizing all the way from the electronic bulletin-board days to now. Back in the ’80s, a potential recruit would have had to seek out online white supremacists directly, perhaps after hearing about them at a gun show or a militia meeting, but the ringleaders’ jobs are easier now. They migrated to the web as soon as they could, but their reach really expanded once Facebook and YouTube began to highlight “engaging” content. Just as more-mainstream political actors have learned to create content designed to resonate online, the far right figured out how to get images and videos in front of people who would never encounter them in everyday life:
These tactics helped these racist and harmful memes hop from platform to platform, leaving the relative obscurity of 4chan and finding some more mainstream traction on Reddit or Twitter as the alt-right learned how to game sorting algorithms in order to get their memes in front of bigger and bigger audiences.
Just as we saw with divisive Russian content in 2016, and QAnon posts and anti-vaccine videos more recently, these groups did not need to spend a dime on advertising to get their messages in front of millions of people:
“By the time we go from the memes about Obama to Pepe the Frog, the folks on the far right are incredibly adept at figuring out how to use the algorithms to push their content forward,” explained Jessie Daniels, a sociology professor at the Graduate Center CUNY.
Besides showing the almost inevitable consequences of valuing engagement above all else, this dynamic confirms how stupid it is to ban digital political advertising. Ad bans only block legitimate campaigns and interest groups; bad actors will always find ways around them. When you’re not limited by truth or decency, you can easily create content outrageous enough to ignite the digital underbrush and burn out of control, no advertising required.
Meanwhile, Facebook’s algorithms downgrade stories from news organizations or grassroots political groups, keeping the good guys from reaching more than a tiny fraction of their own supporters with any given post. With facts locked in the corral and disinformation roaming free, guess which naturally rules the range? Legitimate activist groups can’t even pay for the privilege of competing, bound as they are by rules their corporate counterparts often skirt with relative ease.
As I’ve said before, Facebook, Google, Twitter, and their counterparts want to be the center of our digital worlds, but they want to shirk the responsibilities that come with that status. As long as they stack the deck against the good guys, the neo-Nazis will keep winning new hearts, leaving us all the losers.