Supreme Court to hear Section 230 lawsuit, could change the internet


The news: The Supreme Court said Monday that it will hear a case this term examining the scope of tech companies’ immunity under Section 230 of the Communications Decency Act. The law, passed in 1996, has for decades protected internet companies from lawsuits related to content users share on their platforms.

If the court breaks with the long-running legal interpretation of Section 230, it could fundamentally change how the internet — and particularly social media sites — functions.

The case: The lawsuit before the court, Gonzalez v. Google, was filed by the family of Nohemi Gonzalez, a 23-year-old American killed in an attack by the terrorist group ISIS in Paris in 2015. The family members allege that algorithms on Google’s YouTube site promoted videos featuring ISIS, arguing that that is not the type of conduct that Congress intended to shield with Section 230. (The family does not allege that Google or YouTube had any role in the Paris attacks.)

This will be the first time the Supreme Court grapples directly with Section 230. The court, whose latest term began on Monday, has not yet scheduled oral arguments in the case.


Politics: The statute everybody hates — for different reasons

The Supreme Court’s decision to take up the Gonzalez case comes amid ongoing controversy about how social media sites handle misinformation and hate speech. Conservatives allege the sites’ moderation policies reveal liberal bias, with many citing decisions by Twitter and Facebook to bar former president Donald Trump from their platforms. Liberals — including President Joe Biden — argue that Section 230 has allowed the proliferation of misinformation on social media without penalties for tech companies.

Legal challenges to Section 230 have been wending their way through the courts. One of the most prominent concerns Texas’ social media law, championed by Republican Attorney General Ken Paxton, which would prevent private companies like Facebook, Twitter or Google from taking down or banning posts based on political viewpoint. The law, which has not yet taken effect, would allow the Texas attorney general or individuals to sue companies with more than 50 million users for any alleged violations of the statute.

In the case of Gonzalez v. Google, the 9th Circuit Court of Appeals sided with Google last year, even as it raised concerns about Section 230 more generally.

Meanwhile, Justice Clarence Thomas has criticized the law and how it has been applied. When the court declined to take up a Section 230-related lawsuit last year, Thomas noted that the law has been interpreted as conferring “sweeping immunity on some of the largest companies in the world” — and argued that it should be narrowed.

Thomas is part of the court’s newfound conservative majority, which has issued several recent decisions overturning long-standing legal precedent, including rulings ending the national right to abortion and limiting the regulatory power of federal agencies.


Business: Tech companies battle for their bottom lines and the future of the internet

Legal and technology experts agree that Section 230 has enabled the growth of the modern internet. Social media and newspaper comment sections, for instance, likely wouldn’t exist — or at least, not in the form we’re used to — without the liability shield the law affords to companies for statements users make on their platforms.

The law’s supporters also argue that it has helped promote and defend free speech on the internet, by allowing the development of sites where users can express themselves in comments, posts, videos and other means.

Tech companies have fought hard against any reinterpretation or revision of the hotly contested law that could upend their business models. In an appearance before two House committees last year, for instance, Facebook CEO Mark Zuckerberg suggested that the law could be limited to companies following “best practices.”

Members of Congress from both parties have offered a bevy of potential reforms. They range from something similar to Facebook’s plan — creating conditions by which companies can qualify for Section 230 protections — to carving out specific exceptions to the law.

One bill, the Earn It Act, co-sponsored by Sens. Richard Blumenthal (D-Conn.) and Lindsey Graham (R-S.C.), would force companies to “earn” Section 230 protections, forfeiting them for violations of laws related to online child sexual abuse material (CSAM), even though companies are already legally bound to report such content on their platforms. But it is not clear whether the legislation has enough support to become law.

In 2018, President Donald Trump signed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), also known as FOSTA-SESTA, into law; it curtailed Section 230 protections in civil and criminal cases involving sex trafficking or content that “facilitates prostitution.”

Misinformation: On the internet, lies spread as easily as the truth

Section 230 shields companies from liability over content posted by their users. But it also allows them to take content down as long as they act in good faith.

That has placed the law at the center of the roiling debate in the United States over what constitutes harmful misinformation, especially on topics like politics and the covid pandemic, and whether tech companies are doing enough to address its proliferation.

Last year, for instance, Democratic Sens. Amy Klobuchar (Minn.) and Ben Ray Luján (N.M.) introduced legislation that would hold social media companies responsible for spreading public health misinformation during a federally declared public health emergency.

Online misinformation has proliferated during the covid pandemic — from false rumors about vaccine risks to pregnant people or young athletes to lies about the reliability of at-home covid tests. The problem is so bad that the World Health Organization deemed it an “infodemic.” Studies by public health experts have repeatedly shown that online misinformation has driven vaccine hesitancy during the pandemic, with sometimes devastating effects. And as Grid has revealed, social media sites have helped create the booming online market for fake U.S. covid vaccine cards.


But it’s not just covid misinformation at issue. The right-wing, conspiracy-rich, sometimes-violent movement known as QAnon was born on the online forum 4chan and has since spread through social media sites like Facebook and Twitter. Dozens of its adherents are now running for state or federal office.

Further reading:

Section 230: An Overview (Congressional Research Service)

Research note: Examining how various social-media platforms have responded to COVID-19 misinformation (Harvard Kennedy School Misinformation Review)

The law that made the internet what it is today (Washington Post)

Thanks to Lillian Barkley for copy editing this article.

  • Lauren Morello

    Science Editor

    Lauren Morello is the science editor at Grid, handling coverage of science, technology, health and the environment.

  • Benjamin Powers

    Technology Reporter

    Benjamin Powers is a technology reporter for Grid where he explores the interconnection of technology and privacy within major stories.