

How violent extremists use livestreams and manifestos to build an audience for hate

The man responsible for the mass murder of Black Americans at a Buffalo, New York, supermarket didn’t enter the store alone. Via a head-mounted camera, the killer brought along an audience, livestreaming his attack on Twitch, an online streaming platform owned by Amazon.

Twitch has said it took down the video within two minutes. It was a “very strong response time considering the challenges of live content moderation, and shows good progress,” the company said in a prepared statement to the New York Times.

Still, clips from the video and the manifesto continue to circulate online, and experts say such material will increasingly accompany extremist bloodshed. Gruesome live broadcasts have become a common component of far-right violence, as have online manifestos rife with references to racist ideologies, conspiracy theories and even memes. Mass murderers use these tools to amplify not only their actions but also the beliefs underpinning their attacks.

The Buffalo shooter’s vile content was designed to spread far and fast, said Ciarán O’Connor, an analyst at the Institute for Strategic Dialogue, a nonprofit think tank that focuses on extremism and disinformation.


And it did. In the aftermath of the attack, users around the world shared imagery of the atrocity pulled from the video and details from the manifesto on traditional social media platforms like Twitter and Facebook, as well as in the 4chan forums where the shooter says he was radicalized over the last two years.

“These extremist communities are very versed in how media exists and is shared on social media,” O’Connor told Grid. “They are also often determined to use these kinds of incidents as propaganda for further promotion of their ideologies, [and] to glorify the acts of different extremists.”

O’Connor focuses on how far-right content spreads across the internet. In an interview with Grid’s misinformation reporter, Anya van Wagtendonk, he spoke of the difficulty of stemming the flow of the post-Buffalo imagery and of the social platforms’ responsibility for addressing violent, hateful content. He cautioned that the coming days offer fertile ground for online extremists seeking to exploit the attention paid to this mass shooting.

“The content from the day itself, but also the content that the shooter had created or shared online themselves, will be used by extremist communities to prolong interest in this attack, [and] also to amplify the motivations and the extremist ideologies underpinning that,” he said.

This interview has been edited for length and clarity.


Grid: Why is livestreaming such an important tactic in these kinds of attacks?

Ciarán O’Connor: It seems to be that violent extremists, be it in Christchurch or in Halle or other examples, are using livestreams to maximize the pain and to maximize the impact of their attack. They’ve realized that, beyond the actual livestream, this footage is quickly clipped and reposted, and it has a shelf life beyond the events of a given day as well. So there’s a certain media awareness on the part of the violent extremists that they know how content gets shared online, and they know that the clips can be shared long past their event.

The fact they choose livestreams is also likely because live content is very hard for platforms — be it mainstream platforms like Twitch or Facebook or YouTube but, even more, the alternatives — to monitor and moderate. If you’re determined to … maximize the eyeballs that see your attack, then a livestreaming platform offers you the opportunity to broadcast a lot before it’s ultimately taken down. But it all comes back to maximizing the impact of the content, creating as much hurt and pain and outrage as possible, and broadening the reach of the message you are trying to spread, and the ideologies and conspiracies underpinning it as well.

G: How do the videos and imagery from the attack spread so far and so fast? In this case, there weren’t actually that many viewers of the live event.

COC: It spreads because people want to spread it. There are communities online, forums and websites that are populated with people who regularly post violent or extremist content. In the hours after the shooting on Saturday night, I was watching a couple of communities on different forums who were eagerly discussing the chances of getting the full version of the footage. There is a motivation or determination amongst some quarters online to not only find this footage, but to share it as widely as possible.

And that’s the immediate footage. These videos also get clipped and shared as shorter clips as well. In the immediate aftermath of the attack, there were two main clips that were being shared. … Saturday night and Sunday morning, I was finding versions of these clips being shared on Facebook and on Twitter that had tens of thousands and hundreds of thousands of views.

As for judging how platforms performed — whether they succeeded or failed in taking action against the content — we’re still a couple of days away from really being able to inspect and answer that question. We’re still waiting for more information from Twitch about the livestream, the user, the people who viewed it and all these kinds of different things.

G: So much extremist action is on these more obscure forums, ones with less accountability to the broader public. Is there a solution to stop these images and ideas from spreading on platforms like 4chan or Gab?

COC: The challenge lies in websites that have either little or no terms of service, or no content moderation guidelines. Getting content removed from sites like 4chan or Gab or Telegram is notoriously difficult. Things like legislation being introduced in the EU or in the U.K. are one way to tackle this at the mainstream level. But often these other websites are spaces that simply may not meet the criteria.

Essentially, it’s very difficult because these platforms not only lack guidelines, or any habit of enforcing them, but also seem ideologically opposed to making their spaces places where content like that is removed quickly. And that’s a big challenge for governments, or for researchers like ourselves. It’s also a challenge for the communities that are regularly exposed to and targeted by extremist communities on these websites.



We’ve kind of lived through a decade or more of allowing platforms — mainstream and more alt as well — to kind of set their own terms and find ways to [address] challenges and deal with this kind of problem. And it really hasn’t worked.

There have, of course, been very positive moves, things like [Global Internet Forum to Counter Terrorism] consortiums that have a content-sharing protocol. All those things are positive, but it does seem as though it’s almost a Whac-a-Mole approach. So legislation that will force systemic guidelines, and hopefully force systemic change, will be the way forward, but even then, there’s still the question of how effective it will be for those alt platforms that we mentioned.

G: Given the way that the images in the manifesto spread, what are best practices for social media users and news consumers for engaging with the material from this attack?

COC: The first point is that nobody should share the video or manifesto. In many jurisdictions, it’s a criminal offense to share that material. The manifesto is a trolling document. It has many memes and references and nods to extremist ideologies within it, and it was designed to garner coverage from media or from researchers. That is why such material should be critically discussed, yes, but not shared. And if you encounter the video, in any version online, you should report it to the platform.

But it’s difficult, because this stuff travels in many different ways: as screenshots, or as a WhatsApp video or as a link that was posted on one mirror site that was then shared on Facebook.


Yes, livestreaming terrorist attacks is quite a new phenomenon. But terrorist attacks being shared and disseminated online, and manifestos as well — that is not so new. When terrorists write these manifestos, they do it with the desire that they be shared, to broaden the reach of their attack or the ideologies that support them. They create these documents to be shared and discussed. And if you [share them], you are doing their work for them.

The danger of these kinds of manifestos at a moment like this, when they have maximum attention, is that they may expose someone to beliefs that they weren’t previously exposed to. That might ... lead someone toward being more exposed to extremist ideologies. So it’s a challenge for society, but also for social media platforms and online spaces, to make sure we don’t unwittingly expose people to extremist ideology.

G: Is there a way to prevent another copycat event like this one?

COC: There’s no one silver bullet, but things that come to mind are platforms putting greater focus on moderating live content, or putting in guardrails for users who show signs of engaging with this material, if those signs are there. But the Halle shooter hadn’t gone live on the platform before, for example, so it’s not as easy as just saying, “They’ve done this before, don’t let them go live.”

It just comes back to what [mainstream] platforms can do to limit the exposure … to this kind of material. And then for the more alternative spaces, if they would root out this kind of behavior and activity on their platforms, that would go a long way, too. But at the same time, removing one website that is used by extremists will not make extremism disappear. It’s a much wider societal problem that we’re dealing with.

Thanks to Lillian Barkley for copy editing this article.

Anya van Wagtendonk

Misinformation Reporter

Anya van Wagtendonk is the misinformation reporter at Grid, focusing on the impact of false information on policy, elections and social behavior.