How social media sites can stop the spread of deadly misinformation

Social media sites can slow the spread of deadly misinformation with modest interventions

Social media platforms can slow the spread of misinformation if they want to — and Twitter could have done more to curb bad information in the lead-up to the 2020 election, according to a new research paper released Thursday.

Combining interventions such as fact-checking, prompting people to pause before reposting and banning some misinformation super-spreaders can substantially reduce the spread of viral misinformation compared with any of those steps taken in isolation, researchers at the University of Washington Center for an Informed Public concluded.

Depending on how fast a fact-checked piece of misinformation is removed, its spread can be reduced by about 55 to 93 percent, the researchers found. Nudges toward more careful reposting behavior resulted in 5 percent less sharing and a 15 percent drop in engagement with a misinforming post. Banning verified accounts with large followings that were known to spread misinformation reduced engagement with false posts by just under 13 percent, the researchers concluded.

Each intervention requires the others to be most effective, concludes the paper, published in the journal Nature Human Behaviour. But “even a modest combined approach can result in a 53.3% reduction in the total volume of misinformation,” the researchers report.

The researchers modeled “what-if” scenarios using a dataset of 23 million election-related posts collected between Sept. 1 and Dec. 15, 2020, which were connected to “viral events.” It’s a similar approach to how researchers study infectious disease — for example, modeling how masking and social distancing mandates interact with covid spread.

This simulation work can allow researchers to experiment with how a given content moderation policy may play out before it is implemented, the study’s lead author, Joseph Bak-Coleman, a postdoctoral fellow at the center, told Grid’s Anya van Wagtendonk.

“We can use models and data to try to understand how policies will impact misinformation spread before we apply them. This might be one of many, hopefully, that we wind up using,” he said. “Because the current thing is, we try something and see if it works … so we’re kind of fixing the problem after the fact.”
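
To make that approach concrete, here is a minimal sketch of the kind of what-if cascade simulation being described, written in Python. The branching-process structure, parameter names and numbers below are illustrative assumptions for this article, not the study’s actual model or code.

```python
# Illustrative sketch only: a toy branching-process model of a viral post,
# with knobs loosely mirroring the three interventions discussed above.
# All parameters and values here are hypothetical, not taken from the study.
import random

def simulate_cascade(steps=10, followers=4, reshare_prob=0.5,
                     removal_step=None,   # fact-check takedown after this many steps
                     nudge_factor=1.0,    # <1.0 models a "pause before reposting" prompt
                     ban_fraction=0.0,    # share of super-spreader accounts banned
                     seed=0):
    """Return the total number of shares of one misinforming post."""
    rng = random.Random(seed)
    active, total = 1, 1                  # start with a single seed post
    for step in range(steps):
        if removal_step is not None and step >= removal_step:
            break                         # post taken down: the cascade stops
        # Each active post is seen by `followers` accounts; nudges and bans
        # both lower the probability that any one of them reshares it.
        p = reshare_prob * nudge_factor * (1 - 0.5 * ban_fraction)
        new = sum(rng.random() < p for _ in range(active * followers))
        active, total = new, total + new
        if active == 0:
            break
    return total

baseline = simulate_cascade()
combined = simulate_cascade(removal_step=4, nudge_factor=0.95, ban_fraction=0.2)
print(f"baseline shares: {baseline}, with combined interventions: {combined}")
```

Running many such cascades with different seeds and comparing intervention settings against an observed baseline is, in spirit, how these counterfactual scenarios get evaluated before a policy is ever deployed.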

According to Twitter’s “civic integrity policy,” the company sometimes removes posts, limits their spread or adds context when they contain electoral misinformation. A spokesperson for Twitter did not respond to Grid’s request for comment.

Bak-Coleman and his team acknowledged that, without insight into Twitter’s algorithm and content moderation practices more broadly, they cannot account for existing practices. But by implementing a combined approach, they argued, platforms can reduce misinformation “without having to catch everything, convince most people to share better or resort to the extreme measure of account removals.”

The effectiveness of these interventions is the first part of the equation, Bak-Coleman added. From there, broader ethical questions can be considered about how and when they should be applied. He spoke with Grid about the role of this kind of research in raising those questions — and how hands-off Elon Musk can actually be if he takes over Twitter.

This interview has been edited for length and clarity.

Grid: Elon Musk is on the verge of buying Twitter. He’s made clear his interest in removing most content moderation for the platform. What does your work tell us about that approach?

Joseph Bak-Coleman: Just taking a huge step back, it’s quite scary that the decisions are gonna be made by a single individual. Because this does profoundly impact both our right to free expression and our exposure to misleading information, which can cause death through things like anti-vaccine views or whatnot. So it’s really scary that he’ll be making those ethical calls, more or less unilaterally, owning the company.

He’s talked about wanting to be very hands-off in moderation. Unfortunately, there is no hands-off moderation. Even places like 4chan and 8chan have legal requirements and things they have to remove from their sites, particularly illegal content, violations of copyright law, that sort of thing. There’s a spectrum from that to somewhere that’s heavily moderated. He’ll have to find himself somewhere on there.

The vague words like “free speech,” they sound really nice as slogans, but you have to actually code that up somehow and make decisions about hard cases. When someone makes a threat and it isn’t really a threat, do you remove that or not?

So, on one hand, I think it’s scary he’s making decisions. On the other hand, I don’t think he quite understands what’s ahead of him.

G: When it comes to the interventions described in your research, what would be the method of implementation? Is that just a platform’s responsibility? Does the government have a role, or other entities?

JBC: That’s a question that hopefully our research can spark. The research says, “This is what we could do with this thing that we see as a problem under various scenarios, and here’s a model people can play with to see how it would pan out.” But what we choose there is ultimately a societal decision, with global consequences.

Personally, I think it’d be nice if there’s a democratic process of some sort involved — the same way we make other hard calls. But the “what we should do about it” is, I think, the question we’re able to ask after having models of what could we do, how might it work.


The analogy is climate change, where we have these climate models that tell us, “If we increase carbon [emissions] by however much, then the world will get so much warmer and cause these problems.” And that tells us what will happen under different scenarios, and then we have to kind of choose as a society what we do.

G: How does your research account for the murky middle of misinformation — things that may be factually true but incomplete, or emotionally connective but not quite accurate? Is that something that your research addresses, or is that something for human actors integrating these findings into their moderation systems to be thinking about?

JBC: I don’t think it quite addresses it directly. The model’s pretty agnostic with what the content is. It happens to [focus on] election [misinformation], but you could, in theory, run it on whatever you want to, if you can define things as part of the misinformation corpus. Which is the hard part here, right? Because, like you said, there are things that are misleading but not really harmful. And things that are completely false and not harmful. But there’s also things that are not really totally false, but are misleading and harmful. So there’s this huge spectrum.

So someone has to make a call about how we triage things going into a moderation framework. That’s way above my pay grade.

I think the big missing thing is that we don’t really have a handle on what’s happening in terms of the algorithmic amplification of Twitter. In our data, all we can know is what we saw and what actually spread. So in lieu of all this moderation, there might be options to just adjust their algorithm to be a little bit less engaging, and that might have the same effect. For example, if the algorithms are picking up on anything that’s engaging, they might pick up on things that are engaging regardless of how true they are and kind of undermine normal human truth-seeking tendencies.

There probably are some really important algorithmic tweaks that we can’t know that they could make, because we don’t have access to their code and we can’t probe their algorithms. I think the dream where this could go next is trying to make sense of what’s happening server-side on their end.

G: What are the ethical considerations here?

JBC: It’s so important that we start thinking about how we can know what’s going to happen with an intervention before we apply it, because they all come with non-trivial costs and benefits. If we want to make these ethical decisions, we have to know how much it’ll work, right? That’s part of the ethical calculus. I think that’s kind of the key takeaway of the paper: we can start trying to do this. Not perfectly, not completely, but at least in some contexts.

Hopefully, as — or if — Elon Musk takes over and learns that he has to implement his free speech ideals into code, models can provide a framework for how that’s balanced.

To me, the model suggests that reducing misinformation is much more accomplishable than it was during the election. We could do more. And whether or not we should, I think, is a question we should ask and then have a procedure for implementing. But it’s quite clear that things were much more hands-off than they could have been, for better or for worse.

I think some of this is a tension between the core business models of these companies and the problems that those business models cause. So if you want to push the most engaging content, there’s no reason to believe that content would be truthful or beneficial. And if you think about it, the number of things that are false and/or bad for you is nearly infinite, right? But the number of things that are true and good for you is finite.

So if you have an algorithm trying to select from that pool of most engaging things, it’s probably going to pull a lot of garbage out. Above and beyond all the technical challenges and ethical challenges of moderation, probably the elephant in the room is the business challenge of making it as profitable as it currently is.

Thanks to Lillian Barkley for copy editing this article.

Anya van Wagtendonk, Misinformation Reporter

Anya van Wagtendonk is the misinformation reporter at Grid, focusing on the impact of false information on policy, elections and social behavior.