How the internet is training AI to make better disinformation

The internet is rife with disinformation, and bad actors are becoming more skilled at spreading conspiracy theories and falsehoods online.

Just two months ago, Facebook took down a Chinese-driven campaign to push covid-19 misinformation. But many other disinformation peddlers escape detection. And the growing sophistication of artificial intelligence systems could amplify the problem, from large language models like GPT-3 that can generate propaganda text to AI-created avatars that look like real people.

Grid spoke with Katerina Sedova, a research fellow at the Center for Security and Emerging Technology at Georgetown University who studies the intersection of cybersecurity and artificial intelligence, including disinformation campaigns. The interview has been edited for length and clarity.

Grid: You and your co-authors have put out a series of reports on AI and disinformation. What was the impetus behind them? Why do this now?

Katerina Sedova: There have been rapid advances in AI over the past few years, fueled in part by innovation in machine learning, which in turn is fueled by the accessibility of increased computing power and increased data. A lot of the automated systems coming online are capable of harnessing the massive digital footprints we leave behind in our digital lives, using them as training data and starting to generate insight as well as content from them, content that humans have difficulty recognizing as machine-generated rather than human-written.

Deepfakes have drawn outsized attention, angst and hype in this space, from both technologists and policymakers.

We wanted to zoom out and ask, “Well, what else are we missing while we’re focusing on deepfakes?” Is this really all there is when you think of AI and disinformation in the same sentence? Are deepfakes the only thing that exists?

G: How do disinformation campaigns work? What are some of the other ways that AI is enabling disinformation?

KS: The best-known example [of a disinformation campaign] in the U.S. is the 2016 presidential campaign. There’s a very detailed examination of the tactics Russia used in the congressional reports, as well as in the social media companies’ reports. So let’s take the steps that [Russia] might have gone through.

First, they have to understand the target society that they’re going to be trying to influence. This is part of the reconnaissance stage. You try to ingest as much as you can about the information environment. What is a society talking about? What are some of the political fissures that can be exploited? What are some of the historical grievances that can be exploited? We know that disinformation actors try to exploit something that can connect to their audience, something that may have a kernel of truth.

In order to do that, you must do your homework. That’s much easier with the kind of digital footprints that we leave as a society on the web. AI may ingest not only social media, but broadcast media. “Scraping” means a program crawls the web, ingests as much publicly available information as possible and tries to decipher how a society talks about a particular issue or a particular brand.
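
To make that step concrete, here is a minimal Python sketch of the kind of scraping Sedova describes: fetch a public page and tally how often a few topics come up. The URL, keyword list and libraries are illustrative choices for this example, not anything drawn from her research.

```python
# A minimal sketch of the "scraping" step described above: fetch one public
# page and count mentions of a few topics. The keywords and URL are
# placeholders invented for this example.
import requests
from bs4 import BeautifulSoup
from collections import Counter

KEYWORDS = ["election", "vaccine", "immigration"]  # hypothetical topics

def topic_counts(url: str) -> Counter:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text().lower()
    return Counter({kw: text.count(kw) for kw in KEYWORDS})

# Example usage:
# print(topic_counts("https://example.com"))
```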

AI tools can make this a lot more powerful. Especially with human curation, some of these tools are becoming much more nuanced.

The reconnaissance stage is greatly enhanced by what AI can provide.

G: What happens when it’s time to start building the campaigns?

KS: Campaigns need accounts; they need to have messengers. Some of what we’ve already seen is the use of AI-generated profile photos in campaigns in just the past year.

The Russian Internet Research Agency [backed by the Kremlin] had floors and offices upon offices of people generating different types of content for their campaigns. You had the graphics department generating memes and crafting visual media. You had specific operators tasked with writing short tweet threads, other operators tasked with writing long posts, and still other operators tasked with rewriting articles with a particular slant reflecting the talking points of the day.

You can already see how the generative capabilities of language models, as well as generative adversarial networks, can significantly scale up some of these campaigns by merely helping operators get faster and better at generating content. Not to mention that language barriers are significantly lowered if you deploy a language model, for example, that can generate content in a different language.

G: How does AI help bad actors deploy misinformation?

KS: You have any number of techniques that are already kind of automated to get [the content] out there in the world.


We’ve known about bots for some time, right? So how do these bots get better with AI? A lot of the bot detection systems look for specific signs of an automated script. What we found is that some of these bot systems can potentially get much better and much more humanlike using AI.
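
As a toy illustration of the “specific signs of an automated script” that detection systems look for, the sketch below flags an account whose posting gaps are suspiciously uniform. The threshold and data are invented for this example; real detectors use many more signals, and the point Sedova makes is that AI-driven bots are getting better at evading exactly this kind of check.

```python
# A toy heuristic: flag accounts that post at nearly identical intervals,
# one classic sign of a simple automated script. Threshold and data are
# invented for illustration only.
from statistics import pstdev

def looks_scripted(post_timestamps: list[float], max_jitter_s: float = 5.0) -> bool:
    """Return True if the gaps between posts are nearly identical."""
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    return len(gaps) >= 3 and pstdev(gaps) < max_jitter_s

# A script posting every 60 seconds on the dot gets flagged; a human does not.
print(looks_scripted([0, 60, 120, 180, 240]))   # True
print(looks_scripted([0, 47, 130, 610, 975]))   # False
```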

Beyond that, something we have seen Russian and other actors do is actual one-on-one engagement: trolling people on social media posts and even in the comment sections of major news outlets in the U.S. That operation can become more automated than it is today, especially as chatbots and large language models get better and can essentially start to automate this kind of trolling at scale.

Finally, we get to what we call an actualization stage, when threat actors essentially could start enrolling people into taking over their own messaging, building their campaigns themselves and engaging in real-world activity.

G: Were there other novel uses of AI that really surprised you?

KS: This is where the combination of chatbots and deepfakes can really knock your socks off. There are already ways in which people have the ability to clone their own voice, clone their own image, feed the system their own text messages and essentially do a video call with a colleague [where someone might use] a digital clone. This requires tremendous access to user data, which hopefully we can throttle and prevent. But you can imagine how these kinds of clones of personal friends or acquaintances, or maybe kind of loose professional acquaintances, can be potentially used to radicalize people.

This may sound like science fiction, but technology is in place to possibly stitch all this stuff together to make that happen. While I don’t want to increase the hype with this report, I think it’s important for us to start understanding how a threat actor might tease and pull on all of these threads to actually start building something like this.

G: It would be surprising to me if nation states were not already engaging with the exact sort of processes you’re talking about. Did you come across any signs of that?

KS: Yes. We already know that as part of several campaigns, threat actors have used AI-generated profile photos, for example. That’s been made easier by the existence of This Person Does Not Exist. We have adjusted to that, and now we know how to detect some of the telltale signs of that particular model. But other models are out there. There’s nothing to say that a really well-funded, nation-state organization couldn’t have its own model that isn’t public and generates these kinds of profile pictures.

Part of the challenge is that AI is an open field. It releases its research openly, and that has driven innovation. It’s also a double-edged sword because threat actors can leverage that. The other thing is that, you know, lots of people have hypothesized that generative language models like GPT-3 could be used to create disinformation. My colleagues and I were able to test those capabilities.

G: Tell me more about that.

KS: Last year, we had an opportunity to work with GPT-3. At the time, it was a novel writing system that essentially works like autocomplete on steroids. The research organization [that created GPT-3], OpenAI, was very concerned about misuse of this model, so it allowed academic organizations to test the system. We tested it with six techniques that the Russian Internet Research Agency and other threat actors had deployed in the past.

It excelled at all of them.

It was able to rewrite articles with a slant, for example, based on what we told it the slant was, and it was able to generate tweets on a theme. We tested it to generate tweets on the theme of climate change denial. It was able to produce a whole campaign about that. It was able to write posts in the style of the QAnon conspiracy theory and write politically divisive and hyperpartisan messages, something you would imagine appearing on a fringe or hyperpartisan forum.

It was even capable of writing articles that persuaded human test subjects to change their mind on controversial international foreign policy issues. It was able to convince a third of our subjects to change their opinion and oppose sanctions on China. This just gives you an idea of how capable these models are. They’ve been used to produce Guardian op-eds and write Shakespearean poetry.

We also tested it for disinformation, and it may be better at writing disinformation than it is at writing actual factual information.

The way that these systems work is as part of a human-machine team. So an operator prompts it with a sample of writing and gives it length requirements, etc. The system then starts generating text based on that, mimicking the style of writing and corresponding to the parameters that are set by the operator. This could help the operators produce much more content than they do already.
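
As a rough sketch of that human-machine workflow, the snippet below prompts an openly available model (GPT-2, via the Hugging Face transformers library) as a stand-in for GPT-3, which is gated behind OpenAI’s API. The prompt and generation settings are placeholders chosen for illustration, not a reconstruction of the researchers’ actual tests.

```python
# A rough sketch of the operator-prompts-model workflow described above.
# GPT-2 stands in for GPT-3; the prompt and settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The following is a short opinion post about energy policy:\n"
drafts = generator(
    prompt,
    max_new_tokens=80,       # the "length requirement" the operator sets
    do_sample=True,
    temperature=0.9,         # higher values give more varied drafts
    num_return_sequences=3,  # several drafts for the operator to curate
)
for d in drafts:
    print(d["generated_text"])
    print("---")
```

In this setup the human still chooses the topic, the slant and which drafts to publish; the model only speeds up the drafting.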

We have large language models with hundreds of billions of parameters acting like neurons in the brain. The more parameters, the more capable the model is. Systems like these have now been either invested in or released in Russia, China and South Korea. There is nothing magical about them. They just depend on a lot of data and computing power.

G: We’re talking about the internet, so the more information that is out there, the more rabbit holes there are for people to go down.

KS: Yes. And we still don’t have very good means of measuring [disinformation] campaigns’ influence. While someone may be exposed to something for a brief second, that doesn’t necessarily say anything about whether it left an imprint. As a research field, we’re having a lot of trouble understanding to what extent these campaigns have impact.

That’s an important piece that a lot of us need to invest much more energy into: How can we measure the change in society’s overall perception of an issue? Because all of us feel that polarization is increasing. We feel it viscerally.

G: How can we combat this disinformation arms race?

KS: Technology has a lot to add here. Technology created this particular avenue for exploitation of a human society, and technology can help mitigate it. One of the biggest research questions is how do we identify AI-generated content, period.

We have some solutions going on in the deepfakes space. Unfortunately, the systems that generate deepfakes are in a race with the systems that detect them, and they are learning from each other. It’s also very difficult to detect AI-generated text because to a social media platform or an internet browser this text looks like any other text on the internet.
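
One line of research tries to score how “predictable” a passage is to a language model, on the theory that machine-written text tends to be less surprising than human prose (tools like GLTR build on this signal). Below is a bare-bones sketch of that idea using GPT-2; the threshold is an arbitrary placeholder, and in practice the signal is noisy, which is part of why detection remains hard.

```python
# A bare-bones sketch of perplexity-based detection of machine-written text.
# GPT-2 is used because it is freely available; the 60.0 threshold is an
# arbitrary placeholder, not a validated cutoff.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def maybe_machine_written(text: str, threshold: float = 60.0) -> bool:
    # Model-generated text tends to score lower (more predictable) than
    # human prose, but the signal is noisy, hence "maybe."
    return perplexity(text) < threshold
```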

If we don’t have the technical piece to help, then the content-moderation decisions are just going to get harder for the social media companies.

The same goes for chatbot systems. Chatbot systems need to be labeled, period. Humans need to understand when they’re engaging with an AI system online. There are technical solutions to tackle there, but also policy solutions that intentionally mandate these things.

G: What does the public need to know?

KS: As much as I think there is more we can do with technical mitigations, we can’t code our way out of this problem. It will require a focus on the human that is on the receiving end of the message, which isn’t a new idea. We’ve discussed a lot about how we educate the population in digital literacy and how we raise resilience.

We need to start thinking about how we raise resilience and educate the population when it comes to identifying artificially generated speech and artificially generated content, to the extent that’s possible. We need to help people understand how everything they do online can translate into potentially being targeted by influence operators. But we also need to start thinking systemically about how we educate people about things like deepfakes without fundamentally eroding trust.

Ultimately, that’s one of the more dangerous aspects of this. If everything looks like a deepfake, then we can’t trust anything we see.

G: Where is the policymaking landscape leading on AI and disinformation — do you think it’s being addressed with nuance and care?

KS: I have a very unsatisfying answer — it’s complicated. I think we are in a much better place than we were in the United States five years ago in the aftermath of the 2016 presidential campaign. But the technologies are evolving. The horizon is already kind of here.

  • Benjamin Powers

    Technology Reporter

    Benjamin Powers is a technology reporter for Grid where he explores the interconnection of technology and privacy within major stories.