Meta, Twitter and YouTube aren’t ready for misinformation this Election Day — just like they weren’t in 2020

The midterm elections are less than a week away — and tech platforms are ill-prepared to handle political disinformation ahead of the vote, according to a recent report from more than 60 civil rights organizations calling themselves the Change the Terms Coalition.

The problem of online disinformation came to the fore around the 2020 election, especially after former president Donald Trump refused to acknowledge his loss. But tech platforms such as Meta, Twitter, TikTok and YouTube haven’t improved their tactics for dealing with election misinformation — and they remain inadequate, the report concluded.

“Election misinformation and disinformation are not anecdotal or seasonal. Lies — particularly the brand of election-denialism rhetoric that rose in 2020 — have been ubiquitous online for years,” it found. “This crisis has no end in sight.”

Grid spoke with Nora Benavidez, a senior counsel and the director of digital justice and civil rights at Free Press, a media advocacy group, who led the report.


This conversation has been edited for length and clarity.

Grid: Why do this report now?

Nora Benavidez: Free Press has been working for the last year now to better understand what major social media companies are doing when it comes to hate, harassment, disinformation and other messy gray-area types of content, which can include things like conspiracy theories. Knowing that the midterms were coming this year, we felt it was an important inflection point to make sure that the lessons learned from 2020 were applied robustly and very early for 2022. We started engaging with platforms when we launched our “Fix the Feed” campaign in April, a very large campaign anchored by the Change the Terms Coalition, a coalition of over 60 civil and consumer rights groups. We tried to come up with demands that were very basic — low-hanging-fruit measures that all of the companies should have in place year-round. These are not rocket science, niche or controversial issues.

G: Can you tell us more about these basic steps you recommend?

NB: These are things like making sure the companies themselves don’t amplify hate or lies, which we know they do and have good evidence of. We wanted to encourage and pressure them to have civic integrity teams year-round, staffed well enough across languages that non-English disinformation, for example, is clamped down on with the same force as English. And finally, we wanted to understand more about what the companies are doing and push them to develop transparency: sharing data and giving access through a number of different frameworks, whether it’s their API or something else, so that researchers, journalists and civil society can scrutinize it.


This is basic stuff that I don’t think anyone would find incredibly difficult, and these are easy measures for the companies to implement. We met with the companies and engaged with them in writing, trying to show them the breadcrumbs of why democracy and civic engagement are often predicated on and influenced by social media rhetoric, and pointing to the role they play in either fomenting real-world violence or containing it; we know, for example, that Facebook actually did implement measures in 2020 that can really help limit engagement on the worst stuff. We continued to get very little from the companies. They met with us, but they didn’t provide information.

At best, we got one written response, from TikTok, which was basically a rehash of its own election integrity announcement from August. All of that pointed to the ongoing question: Well, what are the companies doing? How are they prepared? Are they prepared at all for what is coming this election season? That is why we wrote the report.

G: What were your top-line takeaways from this report? What surprised you most?

NB: The thing that jumps out the most is that even after the violence at the Capitol on Jan. 6, and after investigations revealed the role these companies play, none of them is investing enough to protect users or democracy.

Just time-wise, the companies all failed to update and announce their election integrity plans ahead of the midterms with sufficient runway to really clamp down on problems. They treat these issues as anecdotal and seasonal. So they string together whatever their announcements are, and they all put these out in late August or the first days of September. That gives them only two months to try to limit engagement on content that may in fact sow distrust among voters, and that isn’t enough time.

We pressured them when we met with them. We said you need to announce and roll out these measures far earlier, and some of them need to be in place year-round. So just time-wise, the optics alone suggest that they’re not really committing here when they roll these out so lackadaisically late in the game. I think the other thing that jumps out is a procedural issue, if you will: how hard it was to research and identify what they’re actually doing.

We often hear and read reports that a single platform is failing to do something. Global Witness and [New York University] just put out a really good report, which we cite in ours, about what Meta and TikTok are doing with their political ads. That was great: tight, laser-focused on one thing. But we never get a holistic or comprehensive view of what’s really happening from the 50,000-foot view.

These companies hide the ball. They create this labyrinth of links that lead to other things from five years ago. You know, in-app centers. Then they have other portals that take you to a different set of Q&As, which take you to a terms of service page, which takes you to insights from 2020 in the context of, maybe, Germany’s election. As a researcher, you’re trying to sift through what feels like a massive, complex and very conflicting set of written concepts and promises. It makes it almost impossible to understand what they’re really doing.

G: What are some of the most damaging things you’ve seen in terms of this misinformation?

NB: I think that the targeting of election workers and election officials is really dangerous. Companies are failing to address it fully in their written policies. They sometimes allow content that targets election workers to languish on their platforms. This is a one-two punch of misinformation or disinformation, I would say, and hate. It’s all predicated on the Big Lie [that the 2020 election was stolen from Trump].


These are angry people who post things, encouraging others and calling for something to happen to election workers. It’s this weird weaving together of hate and lies, and it’s really dangerous. It means that people are afraid to go into the office — they’re afraid to do their jobs. And we need a very strong narrative at this moment that our elections are still free and fair: that anyone who is registered to vote can and should be voting for whatever party or candidates they choose, and that election workers are the front-line people helping to make sure civic engagement happens. Instead, these people are so afraid because of what they are inundated with. What I have seen firsthand with colleagues is terrifying, not to mention the examples, one of which we include in the report, that show the companies are not actually addressing these things.

One other thing I do worry about is the disparity in enforcement for non-English content. This is something we’ve been looking at for over a year at Free Press. Change the Terms has also called for parity in resourcing for non-English moderation and enforcement, and we are simply not seeing it — whether that’s because the companies don’t want to share how many non-English moderators they have, maybe because it’s that damning, or because they don’t actually know how many moderators they employ. We have seen examples where content that violated terms in English was removed or actioned relatively quickly, while the same content in Spanish stayed up for 11 months.

G: What do you think will happen now that Elon Musk owns Twitter?

NB: I think we have every expectation that Twitter will only become a more toxic, dangerous platform for users, and that it will influence conversation about the election. I don’t want to speculate by naming the conspiracy theories or other types of election-denialism rhetoric I think we may see, but I don’t expect that those kinds of lies and narratives will be limited the way they should be. They should have virality circuit breakers in place. They should have friction. They should have moderators reviewing content. These are all things we asked the companies for over the last year. But I don’t think it’ll get better. I think it’s going to get worse.

Thanks to Alicia Benjamin for copy editing this article.

  • Benjamin Powers

    Technology Reporter

    Benjamin Powers is a technology reporter for Grid where he explores the interconnection of technology and privacy within major stories.