On Wednesday, Google (GOOG, GOOGL) CEO Sundar Pichai, Facebook (FB) CEO Mark Zuckerberg, and Twitter (TWTR) CEO Jack Dorsey testified before the Senate Commerce Committee on Section 230 of the Communications Decency Act, which gives websites that host user-generated content broad protection from legal liability for content posted on their sites.
“From our perspective, Section 230 does two basic things. First, it encourages free expression, which is fundamentally important,” Zuckerberg told the committee. “...Second, it allows platforms to moderate content. Without 230, platforms could face liability for basic moderation.”
Section 230, which for years was largely unknown outside of tech and policy wonk spaces, serves as a foundational piece of the internet and its goal of protecting free expression. In 2018, President Donald Trump signed a law weakening some of Section 230’s protections to allow victims to sue websites that knowingly facilitate sex trafficking.
Now Trump, as well as Joe Biden, want to kill the law completely, albeit for different reasons.
Signed into law in 1996, Section 230 was created to enable online platforms to make “good faith” efforts to moderate user-generated content deemed “objectionable” without facing legal liability over that content.
Republicans and Democrats want changes
Trump charges that the law allows Big Tech to suppress content with impunity, while certain Democrats including Biden say it allows the companies to spread false information with ease. While those arguments are at the center of a fierce debate over Section 230’s fate, the law has supported the growth of many companies consumers rely on today.
The goal of the law was to let message board moderators or large companies remove problematic user content from their sites, without treating them as though they were either making the actual statements, or making editorial decisions akin to a media publication. Without Section 230, community and social platforms ranging from Yelp (YELP) to Facebook to virtually any website with a comments section could face huge legal liabilities for anything posted on their sites.
But the vagueness of the terms “good faith” and “objectionable” in the law has translated into websites — and of particular concern, social media websites such as Facebook, Instagram, Twitter and Google-owned YouTube — enjoying virtually unlimited power to remove, obscure, and place warnings on user-generated content. At the same time, the law does not hold these tech giants accountable for the content they fail to remove.
While the Constitution’s First Amendment protects the speech of these private companies, as well as individuals, lawmakers on both sides of the aisle have taken issue with the broad legal immunity tech giants enjoy under Section 230. Lawmakers blame the law for allowing social media companies to moderate content too aggressively, in the eyes of some Republicans, or not aggressively enough, from the perspective of some Democrats.
How we got here
Trump and conservative lawmakers began piling on Section 230 when sites like Twitter and Facebook added their own fact-checking mechanisms to user posts, and limited the reach of user tweets and posts that violated the companies’ respective terms of service.
Trump’s first “fact checked” tweet was one sent from his handle in May claiming that mail-in voting was rife with fraud and would lead to a “rigged election.” Twitter, which had adopted a policy of not deleting tweets from elected officials, added a label underneath the policy-violating tweet that linked users to news articles and other information rebutting Trump’s position.
Three days later, following the killing of an unarmed Black man named George Floyd, Trump tweeted “when the looting starts, the shooting starts.” The president’s tweet referred to protests that erupted following the death of Floyd after a police officer knelt on his neck for nearly nine minutes.
Twitter, in response, took the unprecedented step of placing Trump’s tweet behind a warning message stating the post violated its terms of service against inciting violence.
The next day, Trump issued an executive order demanding that the Federal Communications Commission and Federal Trade Commission reevaluate Section 230.
Trump’s executive order
By attempting to weaken the law’s safeguards, Trump’s executive order is designed to make it tougher for social media platforms to edit or delete user content, especially politically charged material. Because political speech is one of the most highly protected types of speech, legal scholars have argued that the president’s attempts to amend 230 through the executive branch would likely be interpreted by courts as an unconstitutional restraint.
Both Twitter and Facebook have continued to take action against Trump’s and his campaign’s tweets and posts since. In early October, Facebook took down a post by Trump that claimed that the seasonal flu was more dangerous than coronavirus.
Much of the conversation from conservative lawmakers has revolved around the idea that Section 230 breeds anti-conservative content moderation by Big Tech. Progressives often call that accusation unproven, while conservatives call it unprovable, faulting tech companies for resisting transparency about their algorithmic and content moderation practices.
Still, in some instances, conservative voices on Facebook have spread misinformation without facing penalties, NBC News reported on Aug. 7, citing leaked internal documents from the social network.
Wednesday’s hearing before the Senate Commerce Committee comes after the tech CEOs initially resisted repeated requests to testify before the Committee on Section 230. The CEOs were also asked by the Senate Judiciary Committee to testify about the law after Facebook and Twitter curbed distribution of a controversial New York Post story accusing Biden of lying about conversations with his son, Hunter Biden, concerning Hunter’s overseas dealings. Twitter CEO Jack Dorsey later said that Twitter’s initial decision to block the story was “wrong.”
To curb the spread of such misinformation, some liberal lawmakers have been pushing for separate changes to Section 230 that would hold online platforms more accountable for false or misleading content published on their platforms. Facebook famously received criticism for allowing Russian propaganda to spread across its platform in the lead-up to the 2016 election, which pushed Democrats to call for the social network to be held accountable for the content it allows to be disseminated.
In June, Sen. Ron Wyden, a Democrat from Oregon who co-wrote Section 230, told the Aspen Institute that social networks aren’t doing enough to take down certain types of content. While Section 230 gave tech giants a “shield” from liability, it also gave them a “sword” that allows them to take down “slime” and other types of offensive content, Wyden has written.
“I also think apropos of 230, that the big tech companies are not doing enough to take down the slime online,” Wyden told the Aspen Institute. Still, Wyden wants to preserve the law insofar as it gives a voice to those with less clout than big corporations.
“Social media has been a huge megaphone for people without access to money, who have trouble accessing conventional media or who want to challenge those in power, but I think it's also important to recognize that big players, the big ones with deep pockets, haven't done enough.”
For now, Section 230 remains the law of the land. Regardless of which party comes out ahead in the upcoming 2020 elections, the law still appears poised for changes in the coming years.
Editor’s note: This post was updated on Wednesday with comments made before the Senate Commerce Committee.
Alexis Keenan is a legal reporter for Yahoo Finance and former litigation attorney.
Follow Alexis Keenan on Twitter @alexiskweed.
Got a tip? Email Daniel Howley at email@example.com or via encrypted mail at firstname.lastname@example.org, and follow him on Twitter at @DanielHowley.