
Thirty years ago today, Section 230 of the Communications Decency Act, a provision credited with laying the groundwork for the modern internet, became law and set off a chain of events that would make it a lightning rod for the techlash.
The statute has survived everything from the dot-com bubble to a Supreme Court challenge that struck down much of the surrounding text of the CDA. But as it marks this major milestone, Section 230 is facing some of its biggest threats to date, as prominent lawmakers plot to bring it down and a mountain of legal challenges gives courts the chance to narrow its scope.
Section 230, once dubbed “the twenty-six words that created the internet,” reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, online platforms that host user-generated content can’t be held responsible for what those users choose to say on their platforms. Its “Good Samaritan” provision allows those platforms to moderate content in good faith, shielding them from civil liability for blocking access to obscene, violent, or harassing content. The law does not shield platforms from federal criminal law.
In the years since President Bill Clinton signed the broader Telecommunications Act of 1996, Section 230 has become practically a caricature. Depending on who you ask, it may be either the root of most harm perpetrated by social media platforms or the very thing keeping the internet afloat. What started as an extremely popular act to prevent a fledgling tech industry from being crushed under the weight of frivolous lawsuits over user-generated content has become one of the most reviled laws among many members of Congress — some of whom voted for it in the first place.
“As minority leader in the House in 1996, I voted for it because social media platforms told us that without that protection, America would never have an internet economy. They also said that the platforms were just a dumb pipe that just carried content produced by others,” former Rep. Dick Gephardt (D-MO) said at a press conference last week that featured actor Joseph Gordon-Levitt alongside parents who lost kids to harms they say were facilitated by internet platforms, from sextortion to fentanyl poisoning. They had gathered to advocate for a bill introduced by Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC) to sunset Section 230 in two years, in the hope that the deadline will pressure lawmakers and tech companies to finally break the status quo and come up with workable reforms.
At the time of his vote, Gephardt says, lawmakers had no idea what algorithms were or how they would come to capture people’s attention for hours on end and “brainwash” them. Armed with new knowledge about the technology, he says, it’s time to “correct the action that I and many others made 30 years ago.”
Now would be “the worst possible time to repeal Section 230”
Gephardt’s sentiment about Section 230 may be widely shared, but it also faces stiff opposition. Sen. Ron Wyden (D-OR), a co-author of the law whose name was attached to the amendment that would become known as Section 230, doesn’t see the law as a mistake. In fact, he tells The Verge in a phone interview, now would be “the worst possible time to repeal Section 230.”
“Trump and the MAGA billionaire cronies would be in the driver’s seat to rewrite our laws over online speech,” Wyden warns, saying it would be like “handing [Trump] a grenade launcher pointed right at people who want to have a voice.” While many Section 230 opponents think of platforms like Instagram and YouTube as the behemoth players they believe have benefited too long from the law’s protections, Wyden says platforms like Bluesky and Wikipedia, and groups that use social media to monitor the actions of ICE, would also suffer without it. Thirty years later, he says, “the law stands for exactly the same thing, which is: Are you going to stand up for people who don’t have power, don’t have clout, and are looking for a way to be heard? Because the people at the top, the people with lots of money, they’re always going to have ways to get their message out and to get their content out. And the First Amendment and what we’re talking about is a lifeline for folks of modest kind of means.”
Wyden recalls cooking up the text that would become Section 230 with former Rep. Chris Cox (R-CA) over lunch “in a little cubby where members of Congress could grab a sandwich and complain about the world.” The bill would help resolve a concerning trend that arose out of a pair of then-recent legal cases, Cubby v. CompuServe and Stratton Oakmont v. Prodigy: courts were finding that online platforms could be held liable for what users posted on them if they made any effort to remove or limit posts they found objectionable, but so long as they did nothing at all, they might escape accountability.
Today, Section 230’s defenders say it’s necessary both to incentivize tech platforms to do the basic moderation at scale that keeps them from becoming instant cesspools and to keep them from being pressured into removing posts that the government might take issue with. That’s become especially salient at a time when many tech executives have rubbed shoulders with President Donald Trump, settled lawsuits with him for millions of dollars, and updated their moderation standards once he resumed office. “What would the internet look like without 230?” asks Amy Bos, VP of government affairs at NetChoice, a group whose members span the tech industry, including Meta, Google, Pinterest, and Reddit. “It would force platforms, websites to remove third-party content. This is content created by everyday Americans.”
But opponents of Section 230 say that the now deep-pocketed companies unfairly benefit from protections once meant for a startup industry. In court, Section 230 often acts as a “do not pass Go” card in lawsuits brought against tech platforms. Few plaintiffs have managed to overcome that hurdle. But this year, several cases are going to trial that could reshape the outer bounds of Section 230’s broad protections. The cases, which include one against Meta brought by New Mexico’s attorney general over its alleged facilitation of child predators, and others brought by individual plaintiffs and school districts who say they were harmed by social media’s allegedly addictive designs, will give juries the chance to decide what constitutes a decision a platform can be found negligent for making, and what is protected speech under the First Amendment or covered third-party content under Section 230.
“I think we need to take 230 away, rewrite it to restart the clock”
The cases could create a chance for the Supreme Court to ultimately weigh in on the appropriate application of Section 230 in the modern day. At the press conference in support of sunsetting Section 230, Dani Pinter, chief legal officer of the National Center on Sexual Exploitation (NCOSE), tells The Verge that the way courts have interpreted the statute over the years is a large part of the problem. “Even with the language of 230 how it is now, I don’t believe they should be given immunity in some of the cases they are,” Pinter says of the tech companies. “I think part of it is judges and lawyers don’t necessarily really get how these tech companies function.” That created a dynamic that allowed the case law around Section 230 to take on “a life of its own,” according to Pinter. “I think we need to take 230 away, rewrite it to restart the clock.”
Wyden says he’s open to some targeted reform of the law, including around tech companies’ own product design choices, the issue at the center of the cases going to trial this year. He agrees with the outcome of Lemmon v. Snap, a case that found Section 230 couldn’t shield the company from a lawsuit alleging it encouraged reckless driving with its Snapchat speed filter. “We are not against talking about targeted changes, but my principles have been: It can’t target constitutionally protected speech, it can’t discourage moderation,” Wyden says. “And the bills I’ve seen violate one or both of these tenets.”
When The Verge asked Durbin during the press conference whether he saw validity in concerns that repealing Section 230 would make it more likely that tech companies would remove content that the administration might find objectionable, Durbin responded, “The only business enterprise in America which is held harmless from their own wrongdoing is Big Tech.” Though he said he’s “all for the Constitution and free expression … there are limits.”
Section 230 was last updated in 2018, with the passage of the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which carved out a new exception stripping liability protection for conduct that “promotes or facilitates prostitution” and for civil or criminal sex trafficking claims. The change helped lead to the shuttering of the classified advertising site Backpage.com, a development proponents largely viewed as a victory. But sex workers said the loss of a system that let them more easily vet potential clients made them less safe. Three years after the law was signed, a report from the US Government Accountability Office (GAO) found that the carve-out had rarely been used to bring cases in court.
Several grieving parents at the press conference came up against Section 230 as they sought some form of justice for their children’s deaths. Kristin Bride’s son Carson died by suicide at age 16 after being cyberbullied on an app called Yolo, which was integrated into Snapchat and let users send anonymous messages. Bride described the “second darkest day” of her life, after the day her son died, as the one when an attorney told her that because of Section 230, she had no legal recourse against the social media platforms. An appeals court eventually let Bride move forward with her product misrepresentation lawsuit against Yolo, and the case has continued, but not in the way she’d imagined. “I had wanted discovery, a jury, a trial, and an opportunity to face the creator of Yolo, Gregoire Henrion, and look him in the eyes and let him know how much his priorities for making fast money over kids’ online safety have destroyed our family,” she says. “But this will never happen. Because after years of Section 230 appeals, Yolo is now a shell without funding, unable to hire attorneys to defend the case.”
There’s one emerging area of tech that both Section 230’s authors and its fiercest critics agree the law shouldn’t cover: AI. “The law plainly states that it does not protect anyone who creates or develops content, even in part — and generative AI applications such as ChatGPT, by definition, create content,” Cox and Wyden wrote in a 2023 op-ed in Fortune. In the AI age, lawmakers are rehashing debates similar to those that took place 30 years ago over how to foster a nascent industry without letting it run wild. Ahead of the anniversary of Section 230, a coalition of groups that advocate for online safety measures for kids and AI urged Senate leaders “not to create a new shield for Big Tech by advancing legislation that would broadly preempt state artificial intelligence (AI) laws,” warning that repeated efforts to do so could “recreate the same dynamics that followed the passage of Section 230.”
