The Not-So-Good-Samaritan: A Commentary on Section 230(c) of the Communications Decency Act

On 19 February 2025, the United States Senate Committee on the Judiciary held a hearing titled “Children’s Safety in the Digital Era: Strengthening Protections and Addressing Legal Gaps.” The session addressed a range of child safety issues, including sexual exploitation and the role Big Tech plays in [not] mitigating these risks. In particular, policymakers and witnesses at the hearing highlighted the now-popular but perhaps unintended use of Section 230 of the Communications Decency Act – invoked by Big Tech as a policy shield to absolve itself of liability and reject accountability for the facilitation of child online exploitation.

Let’s Get Into It: Introduction

In 2023 alone, the National Center for Missing & Exploited Children’s CyberTipline® received 36.2 million reports of suspected child sexual exploitation online, and the organisation reported that online enticement increased by more than 300% between 2021 and 2023.

This is a global problem. On 28 February 2025, Europol reported that 25 arrests had been made, with the support of authorities in 19 countries, in an operation against child sexual exploitation originating from a single online platform that distributed AI-generated child sexual abuse material (CSAM) – material depicting children being abused.

One of the biggest challenges for investigators, advocates, and prosecutors in curbing and addressing these increasing harms to children online is the lack of sufficient legislation – and, in the case of the United States, a counterproductive legislative provision that frustrates the work of advocates.

Section 230(c)(1) and (2) of the Communications Decency Act (CDA) provides:

(c) PROTECTION FOR “GOOD SAMARITAN” BLOCKING AND SCREENING OF OFFENSIVE MATERIAL.— 

(1) TREATMENT OF PUBLISHER OR SPEAKER.—No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. 

(2) CIVIL LIABILITY.—No provider or user of an interactive computer service shall be held liable on account of— 

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or 

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [subparagraph (A)].

Essentially, this provision grants broad legal immunity to interactive computer service providers.

Historical Context of Section 230

The CDA was enacted back in 1996 as part of the Telecommunications Act, which set out to address challenges that had arisen well beyond the telephone – including the spread of indecent and obscene material on the emerging internet.

Section 230, which has twice been amended, was created to foster the development of the internet by removing factors that may disincentivise technologists from developing innovative products and encouraging the creation of blocking and filtering technologies to restrict access to inappropriate material. 

A year before the enactment of the CDA, a New York court had held in Stratton Oakmont v. Prodigy that a service provider was liable for speech appearing on its service because it generally reviewed posted content. And so, Section 230 served to overturn that decision by clarifying that website operators and Internet service providers are not to be considered “publishers” of third-party content because they merely provide a platform for their users.

So, on the one hand, this provision protects computer service providers (such as social media companies) in America from being held liable for restricting access to objectionable material (i.e. what is now referred to as content moderation and takedown); on the other hand, it distinguishes those who create content from those who publish or provide access to it and grants immunity to the latter.

Section 230(c) is called the Good Samaritan provision because it assumes the good faith of computer service providers in voluntarily moderating content – just as the Good Samaritan helped without fear of legal consequences (imagine, by contrast, if the Good Samaritan of the parable had been arrested for helping his neighbour).

The spirit of this law is obvious – to create an environment of innovation where platforms are not liable for third-party content and have free rein to moderate that content in good faith. At first glance, this seems simple and straightforward – protecting innovation while ensuring responsible moderation. But in reality, it has fuelled Big Tech’s unchecked power.

How Big Tech Exploits Section 230 to Avoid Accountability

Could the policymakers in 1996 have imagined that the ‘Good Samaritans’ they sought to protect would stand by as victims were brutalised? Maybe not. The reality is that old laws are expected to crack under the pressure of the ever-increasing sophistication of technology. 

The problem with Section 230(c), in my opinion, is first the assumption that good faith exists and then the assumption that good faith is sufficient. It is 2025, and this provision, which was meant to foster innovation, has now been weaponised to ensure a lack of accountability and transparency in innovation. The courts have interpreted Section 230 to bar a wide range of lawsuits and to preempt laws that would make providers and users liable for third-party content.

In Zeran v. America Online, Inc. (1997), the court held, “Lawsuits seeking to hold a service liable for its exercise of a publisher’s traditional editorial functions – such as deciding whether to publish, withdraw, postpone or alter content – are barred.” 

In Doe v. America Online (2001) – a case that involved child sexual exploitation in AOL’s chat rooms – the court held that Section 230 preempted state tort claims and that AOL was immunised by the CDA.

In Herrick v. Grindr, LLC (2019), the court held that an internet-based dating application was an interactive computer service provider and therefore immune from liability for a user’s harassing conduct on the application.

US attorney Carrie Goldberg, who specialises in representing victims of sexual abuse, child exploitation, online harassment, and other forms of digital abuse, revealed in her witness testimony at the hearing: “In all of my cases, tech [companies] has two main defences, Section 230 and that they didn’t know.”

What was supposed to promote responsible content moderation has now become a weapon for irresponsible moderation. Especially with children’s online sexual exploitation, online service providers are being criticised not only for turning a blind eye to the proliferation of obscene and abusive content and activity on their platforms but also for turning a deaf ear to direct appeals for stricter content moderation to protect children. And why? Critics cite commercial profit as the chief reason why platforms would rather children and young people face gross abuse than create effective practices to truly be ‘Good Samaritans’. 

As Senator Chuck Grassley opined, “These tech platforms generate revenues that dwarf the economies of most nations. So, how do they make so much money? They do it by compromising our data and privacy, and keeping our children’s eyes glued to the screens through addictive algorithms.”

Most of the atrocities against children are open secrets to platform service providers. In Doe v. Twitter, nude images of the plaintiff’s 13-year-old son had gathered 167,000 views and 2,000 retweets by the time she escalated the matter to Twitter, demanding that the material be taken down. In response, Twitter confirmed that it had, indeed, reviewed the content but nevertheless kept it on its platform.

Blanket immunity for service providers has deprived parents and caregivers of the right to seek justice for their children who have been exploited online. As echoed by Representative Brandon Guffey, father to Gavin Guffey, whose death by suicide was triggered by online sextortion: “…Section 230 will go down as one of the greatest disasters, allowing Big Tech to run rampant without repercussions.”

Reforms

Any reasonable observer can conclude that there are significant and problematic loopholes in Section 230 and that they have impeded the progression of justice for children as internet users. The provision has put more power in the hands of those already powerful and created a deficit of power for parents and caregivers who are already grappling with the harm done to their children.

Of course, there are interpretative opportunities to bypass this immunity, as the court did in Anderson v. TikTok, Inc. (2024), where the Third Circuit Court of Appeals rejected immunity and held that TikTok’s algorithm – which recommended the Blackout Challenge to the deceased 10-year-old – was TikTok’s “expressive activity” and hence first-party speech.

Calls have been made for more cogent reforms. Many of the proposed reforms fall under the following categories:

  1. Conditional Immunity: That is, there should be conditions that, if adhered to, allow platforms to take advantage of Section 230. Conversely, platforms should not be able to claim immunity under Section 230 where these conditions have not been met. For instance, the EARN IT Act (Eliminating Abusive and Rampant Neglect of Interactive Technologies Act) proposed that platforms be required to take “reasonable measures” to detect and prevent child sexual abuse material (CSAM) to qualify for Section 230 immunity.
  2. Partial Repeal: That is, remove platform immunity (C-1) for third-party content while preserving their ability to moderate content without fear of lawsuits (C-2). As Mary Leary said at the hearing, “I think the key thing here is to keep the good Samaritan protections that Section 230 has but to get rid of the C-1 protections that have so distorted this incentivization for harm.”
  3. Narrowing Section 230: That is, the protection of Section 230 should not apply to certain categories of harmful or illegal content. For instance, the SAFE TECH Act proposed that Section 230 protection should not extend to ads and other paid content, and should not bar claims involving harms such as wrongful death and cyberstalking. 
  4. Increasing Transparency & Reporting Requirements: That is, platforms should be held to higher standards in disclosing how they moderate content and what their moderation policies look like. This would include releasing data on content removals, having explainable algorithms, and publicly sharing data on moderation decisions. For instance, the Platform Accountability and Transparency Act proposes researcher access to platform data and reporting requirements covering viral content, algorithmic design, ad libraries, and content moderation. 
  5. Complete Repeal: Of course, there is also the reform proposal to repeal the provision completely. 

Conclusion

Critics of the reform have argued that removing 230(c)(1) protections would lead to excessive lawsuits and force platforms to over-censor content to avoid liability. While concerns about over-censorship are valid, the current reality is that platforms already censor selectively, often removing controversial political speech while allowing harmful child exploitation content to persist. Reforming Section 230 would not limit free speech but instead ensure that companies take responsibility for the dangers they knowingly allow. 

Also, they must consider that platforms don’t just host content; they actively amplify it through algorithms designed for engagement. The Anderson v. TikTok ruling recognised this distinction, signalling a potential path for courts to treat algorithmic promotion as first-party speech that falls outside Section 230 immunity.

You have to understand that what is being demanded by many critics and advocates across different stakeholder groups is simply for parents to have a hearing in court to challenge platforms and hold them accountable. Advocacy around the reform of Section 230 does not even contemplate the merits of the case or necessarily argue the substantive liability of platforms. It really is a conversation about opening the doors for the aggrieved – children and their parents – to bring powerful tech companies under the jurisdiction and powers of the courts to, as it were, answer for themselves in their role as publishers and enablers of speech on their platforms. 

While it can be a nuanced conversation, approaching reform through the lens of enabling responsibility and accountability while protecting responsible moderation is key. It has been a long-running conversation, and the hope is that policymakers will act quickly to improve the digital environment in which children exist. 

If the Good Samaritan has created an alley with blind spots, dark corners and malfunctioning streetlights, we cannot keep making excuses for him when travellers are wounded, nor can we embrace his claim that he must be absolved of liability, especially if he makes money from the bludgeoning of travellers. 
