Twenty-Five Years Overdue: Reforming Section 230 of the 1996 Communications Decency Act

By: Alex Herazy

I. Introduction and Overview of the 1996 Communications Decency Act

At the time of writing, yet another Facebook scandal has just erupted: the “Facebook Papers.” In recent months, explosive allegations from ex-Facebook employees turned whistleblowers, like Frances Haugen, have rocked the company from within. Haugen recently appeared on Capitol Hill to testify about her allegations of Facebook’s wrongdoing and deception. In addition to these allegations, Facebook is also battling an antitrust lawsuit filed by the Federal Trade Commission (FTC). For background, Facebook Inc. is the parent company of several major subsidiaries, most notably the messaging app WhatsApp, the social media platform Instagram, and the virtual reality company Oculus VR. In November 2021, as part of a rebranding effort, Facebook announced its name change to Meta Platforms, reflecting CEO Mark Zuckerberg’s desire to establish formerly-Facebook, now-Meta Platforms as a leader in the metaverse space. As described by the Wall Street Journal, “The metaverse is an online virtual realm where people would work, play and shop. Facebook [Meta Platforms] describes it as ‘the next evolution of social connection.’”1
While Meta Platforms has been battling hefty legal and ethical turmoil in the past months, it could also soon face reforms to the 1996 Communications Decency Act, specifically Section 230. The 1996 Communications Decency Act, Title V of the federally passed Telecommunications Act of 1996, aimed to regulate pornographic material on the Internet. Considering that this law, created at the inception of the Internet, still governs the Internet we now know, Section 230 should be updated to reduce legal immunity for Big Tech companies like Meta Platforms that provide platforms for speech falling outside First Amendment protection and/or violating the company’s own terms of service.

II. Complications with Section 230

Section 230 of the 1996 Communications Decency Act contains two core subsections governing digital posts that users independently create and publish. The first, Section 230(c)(1), protects platforms from legal liability relating to harmful content posted on their sites by third parties.2 This provision, under the subsection ‘Treatment of Publisher or Speaker,’ states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”3 In addition to Internet service providers like Comcast, AT&T, and Verizon, combined social media and Internet companies like Google, Meta Platforms, and Instagram are also protected.4 While primarily intended to regulate pornographic material online, 230(c)(1) provides a legal shield that modern-day social media companies like Meta Platforms employ to protect themselves from liability. As I will later discuss, this is immensely problematic: Big Tech companies should bear some extent of liability for content on their websites that falls outside First Amendment protection and, in some cases, violates their own terms of service. The second, Section 230(c)(2), allows platforms to police their sites for harmful content, but it does not require that they remove anything, and it protects them from liability if they choose not to.5 This provision, under the subsection ‘Civil Liability,’ establishes two broad liability shields, providing that “No provider or user of an interactive computer service shall be held liable on account of” certain good-faith efforts to restrict or filter objectionable material.6
Any analysis of these provisions must also consider the time at which this Act was passed: 1996. As stated in the ‘Findings’ section of the Act, Congress found that “The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.”7 While well-intended, the lawmakers who created and eventually passed this Act understandably could not envision the exponential rise and growth of technology we see today. And while those lawmakers were in a difficult position, writing laws for a rapidly changing field, the fact that this 25-year-old legislation still governs the modern Internet poses obvious problems. The Internet has changed drastically since 1996, and the potential ramifications of maliciously used technology now pose a threat to the rule of law and to our collective system of democracy at home and abroad.
Also relevant to the discussion of Section 230 are the First Amendment freedoms of speech and press. Though Supreme Court decisions have placed certain limitations on the freedom of speech (e.g., the clear and present danger doctrine), freedom of speech is generally interpreted to be quite expansive. The freedom of the press, in turn, applies the principles of the freedom of speech to communication and expression through media, making it relevant to questions of online liability. However, certain forms of speech have been ruled to fall outside First Amendment protection, such as threats against another’s life, speech advocating imminent illegal action, ‘fighting words’ that encourage violence, obscenities, and certain defamatory speech. Though Section 230 currently protects social media companies from liability over their users’ posts, I feel social media companies should bear some extent of liability.
Considering the countless elite computer programmers and tech-savvy interns at Meta Platforms, how hard would it be for them to create an algorithm that scans their platform for posts containing words and phrases commonly used in threats (“I want to kill you”), for example? (A minimal sketch of such a scan appears at the end of this section.) Additionally, there remains an intricate and delicate balance of liability, especially regarding the niche subject of Internet conspiracy theories and their real-life consequences. Central to Meta Platforms is its promotion of ‘Groups’ centered around a shared value, interest, activity, or passion, ranging from “New England Patriots Fans” to “Believers in Aliens.” With this feature comes the pressing concern that those with radical, extremist views may create or join a Group and reaffirm the validity of others’ extremist views. Sociologically, finding others with shared interests, values, and goals can lead to obvious benefits (e.g., friendships forming between people committed to community service) yet also obvious harms (e.g., radical conspiracy theorists whose ideologies translate into real-world consequences).
Under Section 230, Meta Platforms enjoys a broad legal shield protecting it from liability over its users’ posts. This is a grave problem: Meta Platforms can facilitate the creation and growth of conspiracy theories that bear real-world consequences, yet it holds immunity under Section 230 for its users’ posts and engagements. From Groups, Facebook gains increased user engagement and hence profits, yet it bears none of the liability from potential civil and criminal fallout. Among the major headlines of recent years was the bizarre Pizzagate scandal, in which a man believed online conspiracy theories claiming that a Washington, D.C. pizza shop was ‘enabling a pedophile ring led by Hillary and Bill Clinton’; as a result, in December 2016, he walked into the shop with an assault rifle demanding to investigate these claims.8 As bizarre as this story may sound, it represents one of many far-fetched Internet conspiracy theories swirling in the dark pockets of the Internet. The balance between banning or censoring this content, infringing upon First Amendment rights, and recognizing real-world implications is a complicated debate. Situations like Pizzagate show what happens when online media has real-world consequences, underscoring the seriousness and life-altering stakes of the § 230 debate.
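To make the question above concrete, the following is a minimal, hypothetical sketch in Python of the kind of phrase-matching scan it imagines. The phrase list, the flag_post function, and the sample posts are all invented for illustration; any production moderation system would pair a far larger lexicon with trained classifiers and human review.

import re

# Hypothetical, illustrative list of phrases commonly used in threats.
# A real system would maintain a much larger, curated lexicon.
THREAT_PHRASES = [
    "i want to kill you",
    "i will hurt you",
    "you deserve to die",
]

# One case-insensitive pattern per phrase, bounded by \b so that, for
# example, "skill you" is not mistaken for a match on "kill you".
PATTERNS = [
    re.compile(r"\b" + re.escape(phrase) + r"\b", re.IGNORECASE)
    for phrase in THREAT_PHRASES
]

def flag_post(text):
    """Return the threat phrases found in a single post (empty if none)."""
    return [
        phrase
        for phrase, pattern in zip(THREAT_PHRASES, PATTERNS)
        if pattern.search(text)
    ]

if __name__ == "__main__":
    sample_posts = [
        "Great game last night, Patriots!",
        "I want to KILL you if you post that again.",
    ]
    for post in sample_posts:
        matches = flag_post(post)
        if matches:
            print(f"FLAG for human review: {post!r} matched {matches}")
        else:
            print(f"OK: {post!r}")

Even a crude filter like this could route candidate posts to human reviewers; as this article argues, the harder obstacles are legal and organizational rather than computational.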

III. History of Meta Platforms

Though Meta Platforms currently holds a $948 billion market capitalization (the total value of its outstanding shares), making it the sixth most valuable publicly traded company, less than two decades ago it was nothing more than an emerging start-up in Mark Zuckerberg’s Harvard dorm room. In the early 2000s, social media platforms like Meta Platforms, Twitter, LinkedIn, and MySpace all joined the quickly growing digital world. Arriving on the Internet in 2004, Meta Platforms started as an online connection board for university students and opened to the public in 2006. Within two years, the site boasted 100 million users, and by 2012, it had one billion users. In addition to its position as a hub for facilitating social relationships, core to Meta Platforms’s business was its promotion of relevant news and current events in tabs like ‘Instant Articles.’ In 2018, a Pew Research Center survey found that four in ten U.S. adults considered Meta Platforms a “pathway to news.”10 Given the ubiquity of social media, there have been questions regarding the extent to which the government should regulate tech companies like Meta Platforms, and the extent to which these companies should be liable for content on their platforms.

IV. Meta Platforms Controversy

Meta Platforms’s controversial involvement with politics surfaced around the 2016 and 2020 Presidential campaigns. As detailed in the Mueller Report, Russia relied on a Kremlin-linked organization known as the Internet Research Agency (IRA), whose purpose was to manipulate social media for the benefit of President Vladimir Putin and the current Russian government.11 Both the New York Times and the Mueller Report describe the IRA as an agency focused on spreading disinformation across social media through fake user accounts, spam comments, and doctored news articles. The IRA promoted media ranging from anti-American, anti-Western propaganda to the appearance of popular support for Russian President Vladimir Putin. Later, in 2020, U.S. intelligence agencies broadly concluded that a Putin-ordered digital campaign aimed to undermine the credibility of then-candidate Joe Biden and promote then-incumbent President Trump.12 With the current protections of Section 230, Facebook held a legal shield against the legal fallout of being the medium through which Russian interests were advanced; this demonstrates why Section 230 urgently needs to be updated to reduce legal immunity for companies like Facebook that propagate foreign democracy-attacking campaigns.12 Without reform, the only defense Americans have against this manipulation is the patriotism and ethics of Meta Platforms officials placing democracy over profit, which is clearly unreliable.
At the time of writing, the Facebook [Meta Platforms] Papers leaked by whistleblower Frances Haugen present internal documents and damning allegations regarding Meta Platforms’s action, or rather inaction, regarding the real-life consequences of its platform. “Haugen has leaked one Facebook [Meta Platforms] study that found that 13.5% of U.K. teen girls in one survey say their suicidal thoughts became more frequent after starting on Instagram. Another leaked study found 17% of teen girls say their eating disorders got worse after using Instagram. About 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse, Facebook’s researchers found, which was first reported by the Journal.”13 Additionally, Haugen stated in her findings that “Facebook [Meta Platforms] exploited teens using powerful algorithms that amplified their insecurities.”14 In addition to the past controversy surrounding Meta Platforms’s role in Russian interference in the 2016 and 2020 Presidential elections, Haugen’s leaked findings demonstrate the problematic nature of Section 230 as it stands today. Here, Meta Platforms actively researched the consequences of its services and reached horrific findings that should scare every Meta Platforms user; yet Meta Platforms bore no, and I cannot stress this enough, no legal responsibility to act upon those findings. Through Section 230(c)(1), social media companies bear no liability for user-generated content on their platforms, even content explicitly advocating harms like self-harm or body shaming. Through 230(c)(2), social media companies are permitted to police their sites for harmful content yet are not required to actually remove anything. An updated and reformed Section 230 would force companies whose platforms harm their users to actually act, a requirement absent from the current statute.

V. Relevant Case Law

Shortly after its passage, Title V of the Telecommunications Act of 1996, the 1996 Communications Decency Act, was challenged in court. The case ascended to the Supreme Court, which decided Reno v. ACLU in 1997. Per Oyez, a judicial archive of the Supreme Court of the United States, the core question the Court decided was “Did certain provisions of the 1996 Communications Decency Act violate the First and Fifth Amendments by being overly broad and vague in their definitions of the types of internet communications which they criminalized?”15 In its unanimous decision, the Court held that the provisions in question indeed violated the First Amendment due to their broad applicability, and “failed to clearly define ‘indecent’ communications, limit its restrictions to particular times or individuals (by showing that it would not impact adults), provide supportive statements from an authority on the unique nature of internet communications, or conclusively demonstrate that the transmission of ‘offensive’ material is devoid of any social value.”16 This served as an immediate blow to the Act, weakening its powers and jurisdiction over online regulation. After the Reno decision, Congress drafted another online pornography law, the Child Online Protection Act (COPA) of 1998, which has been historically crippled by judicial challenges.17
In recent news, a June 2021 ruling by the Texas Supreme Court held that Meta Platforms “was not shielded by Section 230 for sex-trafficking recruitment that occurs on its platform.”17 The court’s decision included the reasoning, “We do not understand Section 230 to ‘create a lawless no-man’s-land on the Internet.’”18 This case represents a major blow to the immunity § 230 grants and may open the door for future legal challenges copying elements of the plaintiffs’ strategy and overall approach to this lawsuit. In my view, this case is a step in the right direction, and I hope judges continue to align with this thinking.
Section 230 has also been a popular target among leading politicians, including former President Trump, current President Biden, and Florida Governor Ron DeSantis. In January 2020, then-Presidential candidate Joe Biden called for Section 230 to be “immediately revoked” in an interview with the New York Times editorial board.19 In May 2020, former President Trump stirred debate over Section 230 with his issuance of an executive order demanding that the Federal Communications Commission (FCC) “establish regulations that clarify the parameters of the good-faith effort Section 230 requires online companies to make when deciding whether to delete or modify content. At the heart of Trump's executive order was the claim that social media sites censor conservative viewpoints they disagree with.”20 Beyond the motives noted in that quote, the fact that former President Trump and now President Biden have both made public comments regarding § 230 demonstrates the national spotlight on its importance within our legal system. Also in major news was the May 2021 Florida law, championed by Governor Ron DeSantis, forbidding media platforms from censoring conservative politicians; however, federal Judge Robert Hinkle blocked this legislation, citing violations of § 230 and the First Amendment. Though the 1997 decision in Reno v. ACLU immediately weakened the Act’s provisions, recent challenges to § 230 have found both successes and failures.

VI. Exploring Policy Proposals

As I have argued throughout this article, § 230 is severely outdated and presents an urgent need for reform. Why is that need so urgent? Without reform, dangerous misinformation will continue chipping away at the pillars of our United States democracy, whether through targeted Meta Platforms campaign ads from foreign adversaries or through domestic radical conspiracy theories that cause real-life consequences. Calls to reform § 230 have been met with overwhelming bipartisan support, including joint agreement between Republican Senator Jerry Moran of Kansas and Democratic Senator Richard Blumenthal of Connecticut.21 Though not an explicit proponent, even Meta Platforms CEO Mark Zuckerberg told Congress it “may make sense for there to be liability for some of the content,” and that Meta Platforms “would benefit from clearer guidance from elected officials.”22
As suggested in the Harvard Business Review, “Other legislative responses could include passing a national privacy law and strengthening safeguards for children online, two measures that have been long-debated among Washington lawmakers.”23 Other core tenets would increase public transparency regarding how these social media platforms are, or for that matter are not, regulating potentially dangerous content. Per the Brookings Institution, a variety of bills have been proposed to push different agendas regarding § 230 reform: “Republican-sponsored bills include the Abandoning Online Censorship Act sponsored by Rep. Louie Gohmert (R-Tex.) and the 21st Century Foundation for the Right to Express and Engage in Speech Act sponsored by Sen. Bill Hagerty (R-Tenn.), both of which seek to repeal Section 230. Meanwhile, Democrats have proposed bills such as Protecting Americans from Dangerous Algorithms Act, sponsored by Reps. Tom Malinowski (D-N.J.) and Anna Eshoo (D-Calif.), which create legal liability for platforms that host content that violates civil rights. At the same time, the bipartisan bill, See Something, Say Something Online Act of 2021, sponsored by Sens. Joe Manchin (D-W.V.) and John Cornyn (R-Tex.), seeks to place greater accountability on tech companies for content that violates the platforms’ terms of service.”24 As indicated by the varying titles of these bills, differing political and ideological viewpoints have led lawmakers to push different reform agendas. Some bills would directly repeal § 230, some would add supplemental protections and statutes alongside § 230, and some would overhaul Section 230’s legal structure and clauses. Regardless of the bill, a gray area exists surrounding the freedom of speech on a digital medium, the limits that the real-world consequences of digital speech may justify, and the role of government in regulating Big Tech companies. While this flurry of bills takes different approaches to § 230 and the collective digital freedom of speech, there is undeniable momentum and desire to enact change.

VII. Counterarguments Supporting Section 230

However, proponents of § 230 argue that its current terms enable innovation by protecting and promoting small business growth. Unlike Big Tech companies with vast financial and technological resources, smaller tech companies (especially infant startups) fundamentally do not have the same resources to police their websites.25 Proponents therefore feel that any changes to § 230 would hamper innovation at emerging tech companies by burdening them with immense liability for their platforms’ content. Another common counterargument concerns free speech and digital censorship, and how reform may infringe upon First Amendment rights. Groups like the Electronic Frontier Foundation and Fight for the Future feel that reform to § 230 would incentivize social media companies to favor censorship over free speech in order to minimize legal risk.26 Were platforms to face greater liability for the content they host, these groups feel, there would be a natural inclination to protect the bottom line and avoid costly lawsuits, even at the expense of the expansive freedom of speech. Notably, the American Civil Liberties Union (ACLU), a reputable pro-civil-liberties organization, vehemently defends § 230, citing its promotion of users’ free speech.27 While lawmakers have proposed a diverse set of bills to reform Section 230, opposition from civil liberties groups like the ACLU and the Electronic Frontier Foundation remains fierce.

VIII. Conclusion and Personal View

Personally, I feel an outright repeal of Section 230 would be too extreme and would lead to a lawless, unimaginable Internet; though I disagree with the current state of Section 230, repealing it would be wrong. Instead, I feel the most crucial element of reform is transferring some extent of liability for user-generated content onto the platform that hosts it (e.g., Meta Platforms). This would legally force Big Tech to evaluate whether speech on their sites falls outside First Amendment protection and/or violates their own terms of service, and to take appropriate action when questions arise. As it stands, Big Tech is immune from legal action deriving from the content of users’ posts; given the advancements in artificial intelligence and the creation of high-powered algorithms, I feel Facebook certainly has the tools to better regulate speech on its platform yet lacks the legal impetus to actually do so. Though opposition may be fierce, ultimately, the fact that a law created 25 years ago to regulate the Internet still governs the Internet as we now know it poses an inherent problem. While I will let the debate over the exact provisions of laws reforming and/or replacing the outdated § 230 play out on the Hill, bipartisan agreement emphasizes that now is the time to reform Section 230.
