Elon Musk was right: Bots really are everywhere online – Grid News


Elon Musk has threatened to back out of his deal to buy Twitter if the site can’t prove that bots make up less than 5 percent of its users. It’s a questionable negotiating tactic, but Musk is right about one thing: Bots are everywhere online.

Automated accounts that mimic real people come in different shapes, sizes and functionalities, and most people have a tough time recognizing them. Bots were responsible for pumping out fake news during the 2020 election and have seemingly been pushing COVID-19 misinformation.

Tracking how many bots are online at any given time and what they’re doing is an inexact science, even if you narrow it down to a specific site or application. Bots change their behavior at times to elude detection by the various researchers and companies trying to identify them. Even sophisticated algorithms created to suss out bots can struggle; one widely used tool, Botometer, has at times concluded that Musk’s own Twitter account is more likely to be a bot than a human. Bots have also diversified their behavior and grown more sophisticated over time, making it more difficult to root them out.

“It’s really hard for myself or for anyone to tell if something is a bot or not without computer assistance,” said Kathleen Carley, director of Carnegie Mellon University’s Center for the Computational Analysis of Social and Organizational Systems, who also helped develop BotHunter, another bot detection system.


The making of a bot

The most basic definition of a bot is a bundle of code that automates the activities of an account on a platform. Bots can do a wide variety of tasks and behave in different ways, all depending on what they are coded to do. They can be built without much technical expertise and bought in bulk for cheap.
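That definition — a bundle of code automating an account — can be made concrete with a minimal sketch. Everything here is hypothetical: the `post` stub stands in for a real platform API, and the schedule is invented for illustration.

```python
import time

def post(message: str) -> None:
    """Stand-in for a real platform API call (hypothetical)."""
    print(f"posted: {message}")

def run_bot(messages, poster=post, interval_seconds=3600, clock=time.sleep):
    """The whole 'bot': post canned content on a fixed schedule."""
    for msg in messages:
        poster(msg)
        clock(interval_seconds)  # wait, then post the next message

# Demo: a no-op clock lets the sketch run instantly.
run_bot(["good morning", "good night"], clock=lambda s: None)
```

A real bot would differ only in plumbing — authentication and an actual posting endpoint — which is why, as the article notes, they can be built without much technical expertise.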

But not all bots are made alike. There are amplification bots, which are used to extend the reach of certain kinds of content, or content from a certain account, through things like retweets on Twitter. Researchers at the University of Maryland recently reported that a bot army helped spread Musk’s Twitter content. There are also chat bots, the kind that you might encounter on a pop-up help chat screen on a website or when they hop into your mentions.

There are even “broker bots,” which Carley said try to bridge differences between two groups or get them to pay attention to each other. Imagine, for example, that an election is drawing near. “If I can build a bot that’s a broker between two groups and make the two groups think that they’re the same and have shared concerns, then I rebuilt the groups and could have affected my political agenda,” she said.

It’s these kinds of behaviors, from amplification to chat and “brokering,” that can make bots as benignly obnoxious as replies boosting cryptocurrency in your mentions along with phrases like “to the moon!” or as potent as potentially influencing an election. And while we often assume bots to be a negative force, it’s not always that straightforward.

“Bots are a little bit tricky because they’re just a tool, right?” said Kai-Cheng Yang, a Ph.D. candidate in informatics at Indiana University. Yang runs Botometer, a tool that lets people input accounts and get a numerical score back that is supposed to indicate how likely it is that the account is a bot. “Everybody should be able to use it to do good things, but of course, you can also use it for bad.”


Hunting down bots

Identifying bots on any platform begins with understanding how people behave on it. It might seem obvious, but humans use different platforms in very different ways. There is a reason, for example, that your grandparents are on Facebook but not Twitter, and your brother-in-law is on Reddit but you aren’t.

Any bot hunt begins by collecting lots of data on the ways people behave on a platform and then looking for anomalies, according to Carley. These can include things that are humanly impossible, such as sending out dozens of messages instantaneously or being in multiple locations at once. These become indicators of bots. And since people can now buy the tools to build bots, or buy ready-made bots from companies, some of these accounts are openly labeled as automated, while others aren’t. Technically, the Elon Musk flight tracking account is a bot, and the account is incredibly upfront about that fact.
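The anomaly approach Carley describes can be sketched as a simple rule: flag behavior that is humanly impossible, such as dozens of messages in the same second. The data shape and threshold below are assumptions for illustration, not BotHunter’s actual rules.

```python
from collections import Counter

def looks_automated(timestamps, burst_threshold=20):
    """Flag an account whose posts cluster impossibly tightly.

    `timestamps` are epoch seconds of the account's posts;
    `burst_threshold` posts within a single second is treated as
    humanly impossible (an illustrative cutoff, not a published one).
    """
    per_second = Counter(timestamps)
    return any(n >= burst_threshold for n in per_second.values())

# An account posting 25 times in the same second gets flagged...
print(looks_automated([1_700_000_000] * 25))                    # True
# ...while a human-paced account, one post per second, does not.
print(looks_automated([1_700_000_000 + i for i in range(25)]))  # False
```

Real detectors combine many such signals rather than relying on a single rule, which is part of why bots that stay under any one threshold are harder to catch.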

By understanding how the tools used to build bots work, Carley said, researchers can also identify the kinds of things that bots would be good at. Most of the bot identification technologies out there today are based on some form of machine learning.

“They’re trained using these data sets that have been collected over the years that have a bunch of accounts,” said Carley. “The accounts have been marked as being a bot or not being a bot by somebody, typically several somebodies. And then those are used to train the tools.”
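The training loop Carley describes — accounts hand-labeled as bot or not-bot, then used to fit a model — might look like this toy nearest-centroid classifier over two behavioral features (tweets per hour, fraction of retweets). The features, labels and numbers are invented for illustration; real systems use far richer feature sets.

```python
def centroid(points):
    """Average point of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labeled):
    """labeled: list of (features, is_bot) pairs, hand-marked by annotators."""
    bots = [f for f, y in labeled if y]
    humans = [f for f, y in labeled if not y]
    return centroid(bots), centroid(humans)

def predict(model, features):
    """Classify by whichever labeled group the account behaves most like."""
    bot_c, human_c = model
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(features, bot_c) < dist(features, human_c)

# Invented training data: (tweets_per_hour, retweet_fraction)
data = [((60, 0.95), True), ((45, 0.90), True),
        ((2, 0.10), False), ((1, 0.30), False)]
model = train(data)
print(predict(model, (50, 0.8)))  # True: behaves like the labeled bots
print(predict(model, (3, 0.2)))   # False: behaves like the labeled humans
```

The sketch also shows the weakness Carley points to next: the model only knows the behaviors present in its training data, so bots on a new platform, or bots that change tactics, fall outside it.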

For example, say there is a whole new social platform that comes into existence, and no one’s ever seen any data on it.

“The bots on that might look totally different,” said Carley. “And we wouldn’t know because our tools have not been trained for that. So bots on different platforms look different because the platforms are different.”

And bots, or rather their makers, evolve to avoid detection.

Yang said Twitter has gotten more and more aggressive when it comes to taking down accounts, and it’s doing a fairly good job. Even so, he’s seeing more new types of accounts show up, hoping to avoid detection.

“For example, recently, I started to realize there are some bot accounts using fake faces, using neural network to generate the face,” said Yang, whereas before many bot accounts didn’t have human profile photos. “These are human faces that don’t exist, and they’re using them as their profiles.”

Carley said bots began as almost fun, random accounts that would, for example, tweet out the time of day every hour, or just tweet out random words. But these evolved into bots that would, in one case, amplify Chinese state media. And even then, if a bot stops retweeting everything a state media account posts and instead keys only on certain phrases, it takes time to update identification models that were trained on the earlier, retweet-everything behavior.



“It’s kind of like the nuclear arms race with bots,” she added.

Twitter, rating bots and the cyborg bot

Even though identifying a bot is hard, Botometer tries. On a scale of 0 to 5, it rates accounts as being “bot-like,” with 0 being the least likely and 5 being the most.

Musk seemingly rated as a 3.5 on the scale a week ago, though when Grid checked on Tuesday, that score was down to 0.4. For comparison, the English Twitter page for the media company Al-Jazeera came in at 3.2 out of 5. Grid, meanwhile, was a 2.7.

Yang chalked Musk’s brief 3.5 score up to the fact that the billionaire is a special case. The algorithm Botometer uses has to fetch the most recent 200 tweets from an account to most accurately evaluate the behavior of the user.

“But the problem with Elon is that we can only get something like 20 tweets,” said Yang. “It’s a bug in Twitter’s API, and we had a confirmation with Twitter [that] they know this, and they are not going to fix this — we will have to wait for the next iteration of the API.”


Beyond Musk though, cyborg bots are blurring the line between bot and human and only making bot identification harder. These kinds of accounts are controlled by a human sometimes, but a bot at other times.

Say you generally tweet from your account, but then you go on vacation and don’t want to stop tweeting while you’re away. A bot can fix that, performing actions for you while you’re gone. But for machine-learning programs focused on account behavior, this blurs the line between humans and bots.
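A cyborg account’s control logic can be sketched in a few lines: the human posts when present, and a scheduler drains a queue when they’re away. The away-hours window and function names are invented for illustration, not any real tool’s design.

```python
def next_post(now_hour, human_draft=None, queue=()):
    """Cyborg account: human-written posts when present, queued
    bot posts when away. The 9-to-17 'away' window is illustrative."""
    away = 9 <= now_hour < 17
    if not away and human_draft is not None:
        return human_draft  # human-controlled mode
    if away and queue:
        return queue[0]     # bot-controlled mode
    return None             # nothing to post

print(next_post(12, queue=("scheduled update",)))  # bot posts: 'scheduled update'
print(next_post(20, human_draft="hello"))          # human posts: 'hello'
```

To a behavior-based detector, the resulting timeline mixes human-paced and machine-paced activity, which is exactly what makes these accounts hard to score.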

“They’re much more difficult [to identify], and they are not that common yet, but they do exist,” said Carley. “And with all the new technologies that are coming out to do computer-assisted things, they should become increasingly common in the future.”

As for Twitter’s estimate of bots on the platform that Musk is disputing? Carley agrees with Musk that the real number is likely higher.

Thanks to Lillian Barkley for copy editing this article.

  • Benjamin Powers

    Technology Reporter

    Benjamin Powers is a technology reporter for Grid where he explores the interconnection of technology and privacy within major stories.