Elon Musk’s pending purchase of Twitter has sent tremors through the worlds of politics, activism and journalism as people try to understand what implications Musk’s freewheeling personality will have for one of the world’s most influential social media platforms.
Musk’s purchase already has drawn the scrutiny of regulators in the European Union, who are finalizing landmark legislation to address harms caused by social media — including disinformation and hate speech. One of the bloc’s top regulators fired a warning shot Monday on Twitter over Musk’s calls to relax speech restrictions on the platform, calls made without offering details or acknowledging differing national policies and cultural norms.
“Be it cars or social media, any company operating in Europe needs to comply with our rules — regardless of their shareholding,” tweeted Thierry Breton, the European Union commissioner in charge of the group’s internal market. “Mr Musk knows this well. He is familiar with European rules on automotive, and will quickly adapt to the Digital Services Act.”
Grid spoke with Karen Kornbluh, a former ambassador to the Organization for Economic Cooperation and Development and former tech policy official at the Federal Communications Commission who now directs the German Marshall Fund’s Digital Innovation and Democracy Initiative, about her reaction to the pending purchase — and how regulators around the world might be viewing the potential fallout.
This interview has been edited for length and clarity.
Grid: Give me your initial reflections on the Musk deal.
Karen Kornbluh: If you think that the big problem that we’re facing is extremism and violent extremism undermining democracies, just take a look at the front page of the FBI’s website, where they talk about online organizing by violent extremists internationally and domestically and how dangerous that is. Then you have to think that this [purchase] is a bad thing.
The [social media] platforms’ incentives are to gin up fear and excitement, and that is enabling extremists to recruit and radicalize and organize online. And then the platforms reluctantly play Whac-A-Mole to take stuff down after the fact if it’s a brand issue or if it’s really a public safety or public health issue. Musk seems to be saying, “Not only do I not want to change the design of the platform that gins up all that dangerous stuff — I want to do less Whac-A-Mole after the fact.”
In the dialogue going on inside Facebook, with the whistleblower who raised the hood, you see these debates about how to change the design. Or, for regulators: Have you changed the incentives so that [the company] will change the design, so that you have less manipulation, less extremism online, and it’s not just a game of Whac-A-Mole? Musk is not part of that debate. He’s not interested in that debate. He’s missing the iceberg. He doesn’t seem to care about the iceberg. And the iceberg is this rise of extremism.
G: What are some of the points of conflict you see arising between Musk and regulators, particularly in the EU?
KK: One of the things that’s hard to understand is that in both the U.S. and the EU, there’s a real concern about freedom of expression, so people bend over backward to make sure there’s no constraint on it. That’s why you’ve seen people be so slow to do any kind of regulation. The Digital Services Act (DSA), which was recently passed in the EU, is this complicated construction of a policy designed to try to reduce the amplification of speech that’s illegal — which is what Musk tweeted, that he just wants to take down stuff that’s illegal. But Musk is going to be the first test for the DSA, and the DSA is the first test for Musk.
What the DSA says is: “OK, we’ve had this system that said if there’s illegal speech and you get notice of it, you have to take it down.” That isn’t going to be enough. This stuff is still getting amplified. So the EU wants platforms to have to do a risk assessment to figure out why they’re amplifying this dangerous, illegal stuff and put mitigation measures in place. Now, this applies just to the largest platforms, and the EU will do some outside audits on how these mitigation measures are going. The public is going to get to see the data, and governmental agencies are going to get to see the data.
It’s a really complicated system, but it’s designed to leave the platforms with their hands on the wheel and to deal with illegal speech — which is exactly what Musk says he’s concerned with — and how it’s amplified. But it’s complicated, and it relies on this interplay between auditors and platforms and the regulators. Musk has said he’s going to take down what’s illegal, but from how he deals with governments and agencies, whether it’s the Securities and Exchange Commission, the National Highway Traffic Safety Administration or the National Labor Relations Board, we’ve seen him step over the line and then dare the regulator to enforce its rules. So that is going to be really interesting.
G: Different countries have different views of free speech. In some, like Turkey or Saudi Arabia, there are laws mandating censorship. How do you think Musk will deal with this?
KK: If you take him at his word, it will mean more extremism in democracies, because those governments are not going to say “take it down” unless it is a direct incitement to violence — or, in Europe, hate speech, which is illegal there. And in countries that control speech through law, where governments are demanding he remove dissident content, he says he’s going to comply. So, if you take him at his word, it’s going to mean more oppression under repressive governments and more extremism in democracies.
G: Are there other countries you’re keeping an eye on?
KK: The U.K. The U.K., separately from the EU, has this new piece of legislation called the Online Safety Bill. What this and other bills try to do is get the platforms to deal with the risks of their design. So they’re not laws that are focused on individual content, because democratic governments do not want to be in the business of saying “this is a problem.” They want to say it’s a product safety issue: How do you design your product so it’s not going to lead to public safety issues?
The Online Safety Bill started out being focused on kids and harms to kids, but it’s been expanded. What they’ve said they really want to do is impose a duty of care on platforms such that they act as fiduciaries and some of the liability laws can come into play. Just as you can be sued if you don’t shovel your walkway, here, if you don’t clean up some of the really risky practices, you get into trouble. So in Europe, they’re pretty serious about it. And they’re really surprised. They’re looking at the U.S., and they’re like, why do we have to regulate your companies?
Thanks to Lillian Barkley for copy editing this article.