Anderson v. TikTok: A New Challenge to § 230 Immunity
John Blegen
In August 2024, the Third Circuit overturned the Eastern District of Pennsylvania's decision dismissing a suit brought against TikTok by Tawainna Anderson.[1] Anderson sued on behalf of her deceased daughter Nylah, alleging products liability, negligence, and wrongful death claims after the ten-year-old died of self-asphyxiation, having watched numerous videos TikTok routed to her "For You" page.[2] The videos, created by third parties and then uploaded to TikTok, encouraged users to choke themselves with "belts, purse strings, or anything similar" as part of a viral "blackout challenge."[3] Nylah's mother found her daughter asphyxiated in the back of a closet after she had tried to recreate one such video.[4]
The District Court for the Eastern District of Pennsylvania originally dismissed Anderson's complaint on the ground that § 230 of the Communications Decency Act shielded TikTok from liability for content created by third parties.[5] On appeal, the Third Circuit rejected this reasoning, holding that while § 230 may protect social-media platforms such as TikTok from suit over content provided by third-party users, here the subject of the lawsuit was TikTok's own algorithm.[6] The ruling follows a recent Supreme Court decision, Moody v. NetChoice, which held that a social-media platform's curation algorithms may themselves be "expressive product" protected under the First Amendment, and thus speech attributable to the platform itself.[7] In the court's words: "Because the information that forms the basis of Anderson's lawsuit – TikTok's recommendations via its FYP algorithm – is TikTok's own expressive activity, § 230 does not bar Anderson's claims."[8]
Since the August ruling, commentators have noted how consequential the case could be for internet content regulation and the social-media industry at large.[9] David French, a legal commentator and New York Times columnist, wrote, "Nylah's case could turn out to be one of the most significant in the history of the internet."[10] Leah Plunkett, another legal scholar, speaking on the ruling's implications for companies' legal counsel, said: "My best guess is that every platform that uses a recommendation algorithm that could plausibly count as expressive activity . . . woke up in their general counsel's office and said, 'Holy Moly.'"[11]
Defenders of § 230 often emphasize how important the law is to the free flow of user-generated information on the internet and to the financial viability of social-media websites.[12] Before § 230, a website that conducted no content moderation was treated as a distributor and was immune from suit for content authored by third parties, much as a bookstore or a library generally is.[13] To succeed against a distributor for hosting illegal content, a plaintiff had to show that the distributor knew of the content.[14] And if a website exercised editorial control, it was no longer considered a distributor at all, but a publisher of the information, held to a legal standard similar to a newspaper's.[15]
But this put good actors in a bind: if a website attempted to moderate content and exercise editorial control, it would be treated as a publisher and exposed to greater legal liability.[15] Legal commentators quickly picked up on this "perverse incentive" structure, which encouraged websites to take a hands-off approach to user content rather than risk the stricter standard that came with moderating.[16] Section 230 resolves the bind by establishing that websites are treated as distributors whether they moderate content or not; either way, a provider may not be held liable for the content third parties post on its site.[17]
In the past, courts have interpreted § 230 to grant websites broad immunity for illegal third-party content, meaning that web providers may escape liability even if they have notice of the content or actively promote it.[18] For instance, in Zeran v. America Online, the Fourth Circuit affirmed a lower court's dismissal of a negligence suit brought by Ken Zeran, who alleged that AOL failed to properly respond to a barrage of offensive posts on an AOL-hosted bulletin board. The posts celebrated the Oklahoma City bombing, misled readers into thinking that Zeran had created them, and encouraged users to call Zeran's phone number, which was included in the posts.[19] In its ruling, the court wrote, "[t]he imposition of tort liability on service providers for the communications of others represented [another] intrusive government regulation of speech. Section 230 was enacted . . . to maintain the robust nature of Internet communication."[20]
Similarly, in Jones v. Dirty World Ent. Recordings, the Sixth Circuit reversed a jury verdict against the web-content provider Dirty World, holding that the site could not be held liable for third-party posts that made obscene and potentially libelous claims about the plaintiff, even though Dirty World's editor selected and curated the offensive posts and contributed his own captions alongside them.[21] The court adopted the reasoning of the Third Circuit that "[a]llowing platforms to become liable upon notice . . . would allow plaintiffs to effectively veto online speech that they want removed from the internet."[22]
Zeran and Dirty World demonstrate how broadly courts have read § 230's immunity for internet content providers. Critics of that broad immunity note that internet communications can be uniquely harmful and pervasive. As Zeran shows, the potential audience for misleading information can be massive; and as Dirty World shows, it is often impractical for a plaintiff to sue the third parties themselves. If online service providers are granted broad immunity, they may have little incentive to monitor online spaces, allowing bad actors to exploit the internet's massive reach to cause serious harm.
On the other hand, defenders of broad § 230 immunity cast it as essential to freedom of speech on the internet. For instance, Sophia Cope, a senior attorney with the Electronic Frontier Foundation, speaking to Mother Jones after the Third Circuit's Anderson v. TikTok ruling, warned, "[i]f you have fewer instances in which 230 applies, then platforms will be exposed to more liability, and that ultimately harms the Internet user." A narrower interpretation of § 230 immunity, Cope argued, would make platforms "not want to host third party content, or severely limit what users can post," and may push them toward "preemptive censorship" of user content.[23]
Dissatisfaction with § 230 has sprouted among commentators across the political spectrum.[24] In 2023, President Biden called on Congress to "rein in" big tech companies and to amend § 230 to allow for more private suits.[25] Donald Trump has likewise expressed an intent to do away with § 230, declaring that the law poses "one of the greatest dangers [free speech] has faced in American history."[26]
Amidst this increasingly public controversy, members of the Supreme Court have also expressed skepticism of § 230's broad immunity.[27] In Doe Through Roe v. Snap, Inc., the plaintiff did not allege that Snapchat was responsible for third-party content merely by hosting it, but rather that Snapchat's negligent design made it too easy for minors to lie about their ages, leaving them vulnerable to adult predators.[28] Nevertheless, Snapchat successfully argued that § 230 barred the plaintiff's claims.[29] The Supreme Court denied certiorari, but Justice Thomas dissented from the denial, arguing that "[i]n the platforms' world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry."[30]
In his dissent, Justice Thomas also cites another Supreme Court case, Moody v. NetChoice, which he predicts will pose new difficulties for tech companies seeking broad § 230 protections.[31] In Moody, various tech trade groups challenged the constitutionality of Florida and Texas laws that would limit social-media companies' discretion in moderating user content.[32] The Eleventh Circuit upheld a lower court's injunction of the Florida law, finding that a law regulating content moderation implicates the First Amendment, while the Fifth Circuit reversed a similar injunction, finding that such a law does not.[33] The Supreme Court vacated both circuits' decisions on the ground that neither court had properly conducted the facial analysis the First Amendment challenges required.[34] The majority emphasized that "[t]o the extent that social-media platforms create expressive products, they receive the First Amendment's protection" and that "[if a social media company] present[s] a curated and edited compilation of [third party] speech [then that editorial activity] is itself protected speech."[35] In other words, content moderation and curation are themselves expressive activities, deserving of the First Amendment's strict protections.[36]
While Moody may have looked like a win for social-media companies seeking to shield themselves from state laws limiting their ability to moderate content, in other respects it may prove a "pyrrhic win," in that it opens social-media sites to greater legal liability for their own speech.[37] This is the central argument of Anderson v. TikTok: "[because] Section 230 immunizes only information 'provided by another[,]' and here, . . . the information that forms the basis of Anderson's lawsuit—i.e., TikTok's recommendations via its FYP algorithm—is TikTok's own expressive activity, § 230 does not bar Anderson's claims."[38][39] In other words, if curation is itself a form of speech protected under the First Amendment, and that speech is the creation of the tech companies themselves, not of the third-party users on their sites, then the immunity § 230 grants for third-party content does not apply when the curation itself is the subject of the suit.[40]
If appealed, Anderson v. TikTok will have to square itself with cases such as Zeran, which upheld broad immunity for websites even when they knew of harmful content posted there,[41] and Dirty World, where the commentary and curatorial choices made by the website were deemed protected by § 230.[42] Perhaps Judge Matey's argument in his concurrence in Anderson will sway the already § 230-weary Court. As Matey writes, "no one claims the videos Nylah viewed were created by TikTok,"[43] and, reiterating Justice Thomas, "[p]roperly read, § 230(c)(1) says nothing about a provider's own conduct beyond mere hosting."[44] Anderson's suit rests instead on principles of products liability: that TikTok, through its own faulty algorithm, recommended a barrage of harmful content promoting self-asphyxiation.[45] This conduct, in Matey's view, is not protected by § 230,[46] which immunizes providers from liability for hosting third-party content but says nothing about a provider's own conduct.[47]
Matey's concurrence also rests on the argument that Zeran misinterprets the congressional intent behind § 230.[48] For one, nothing in the statute suggests that Congress intended to immunize content providers from liability for their own conduct.[49] Recall that Congress passed § 230 to rectify the "perverse incentive" structure under which web-content providers were encouraged not to moderate content, lest they be treated as publishers subject to greater liability.[50] As Daniel Solove, a legal scholar referenced in Matey's concurrence, puts it: "[w]hy was Section 230 included in a statute that was about promoting decency in Internet speech? The reason is that Congress wanted to encourage ISPs to police content and clean up the yucky stuff."[51]
It follows that the effect of Anderson v. TikTok is not to do away with § 230 entirely, but to remedy the overzealous interpretations of Zeran and Dirty World that rendered websites all but immune from legal responsibility, even for their own misconduct. The Anderson decision may thus reemphasize what § 230 was meant to do: encourage websites to promote decency on their platforms.
This would be, in my opinion, an immense benefit to internet discourse, people's mental health, and the wellbeing of our youth. Algorithms of today's complexity were unheard of in the 1990s, but their deleterious effects on public discourse and mental health are now profound and well documented: they amplify biased information, create filter bubbles, promote confirmation bias, and worsen people's mental health through an addictive, exploitative business model that serves largely to confuse, radicalize, and depress users.[52]
When a product that operates by exploiting consumers results in tragedy, the company behind it should be held liable. These are fundamental principles of strict products liability, the doctrine that has guided American consumer capitalism for the last century. Were websites immune from that liability, they would be a radical exception to the rule. Anderson v. TikTok seeks to return the internet economy to a sensible position, and that can only be done by holding bad actors accountable and reining in the mistaken precedent of Dirty World and Zeran. The internet is no longer a fledgling industry that requires statutory immunity to succeed. As shown above, voices on both the right and the left are done with the "wild west" days of the internet. Courts should take heed of these complaints and follow the precedent set by Anderson.
References
[1] Anderson v. TikTok, Inc., No. 22-3061, 2024 WL 3948248, at *1-*2 (3d Cir. Aug. 27, 2024).
[2] See id.
[3] See id.
[4] See id.
[5] Id.
[6] See id. at *3.
[7] Id. at *2-*3.
[8] Id. at *3.
[9] See Abby Vesoulis, Federal Court: TikTok May Be Liable for a 10-Year-Old's Death, Mother Jones (Aug. 30, 2024), https://www.motherjones.com/politics/2024/08/federal-court-tiktok-230-liable-blackout-challenge-nylah-anderson-death/; see also David French, The Viral Blackout Challenge Is Killing Young People. Courts Are Finally Taking It Seriously, N.Y. Times (Sept. 5, 2024), https://www.nytimes.com/2024/09/05/opinion/tiktok-blackout-challenge-anderson.html.
[10] See French, supra note 9.
[11] See Vesoulis, supra note 9.
[12] See Jeff Kosseff, A User’s Guide to Section 230, and A Legislator’s Guide to Amending It (or Not), 37 Berkeley Tech L. J. 761, 763 (2022) (“§ 230 is responsible for the open Internet that Americans know today, as platforms are free to allow – or moderate – user content without fearing company-ending litigation.”).
[13] See id. at 769, citing Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 140 (S.D.N.Y. 1991) (“A computerized database is the functional equivalent of a more traditional news vendor, and the inconsistent application of a lower standard of liability to an electronic news distributor such as CompuServe than that which is applied to a public library, book store, or newsstand would impose an undue burden on the free flow of information”).
[14] See Kosseff, supra note 12 at 770.
[15] See id. at 770-771, citing Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710, at *1 (N.Y. Sup. Ct. May 24, 1995).
[16] See Kosseff, supra note 12 at 772-776 (statement of Representative Chris Cox from House floor debate of § 230) (“[§230 would] [p]rotect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem.”).
[17] See 47 U.S.C. § 230(c)(1); see also Kosseff supra note 12 at 773.
[18] See Kosseff, supra note 12 at 778-783; see also Zeran v. America Online, 129 F.3d 327 (4th Cir. 1997) (finding AOL message board not liable for hosting content that associated plaintiff’s phone number with offensive jokes about the Oklahoma City terrorist bombing); see also Jones v. Dirty World Ent. Recordings, 755 F.3d 398, 402-03 (6th Cir. 2014) (finding ribald news site not liable for promoting content that alleged plaintiff carried venereal diseases, among other offensive allegations).
[19] See Kosseff, supra note 12 at 778, citing Zeran, 129 F.3d at 329 (“As a result of this anonymously perpetrated prank, Zeran received a high volume of calls, comprised primarily of angry and derogatory messages, but also including death threats. Zeran could not change his phone number because he relied on its availability to the public in running his business out of his home . . . [Zeran called AOL and was assured the profile would be terminated, but the posts continued,] [five days later,] [b]y April 30, Zeran was receiving an abusive phone call approximately every two minutes”).
[20] Kosseff, supra note 12 at 775, citing Zeran, 129 F.3d at 330.
[21] See Jones, 755 F.3d at 403-407.
[22] Kosseff, supra note 12 at 780.
[23] See Vesoulis, supra note 9.
[24] See generally Lauren Feiner, Biden Wants to Get Rid of Law that Shields Companies Like Facebook From Liability for What Their Users Post, CNBC (Jan. 17, 2020), https://www.cnbc.com/2020/01/17/biden-wants-to-get-rid-of-techs-legal-shield-section-230.html; Abram Brown, What Is Section 230—And Why Does Trump Want To Change It?, Forbes (Mar. 21, 2021), https://www.forbes.com/sites/abrambrown/2020/05/28/what-is-section-230-and-why-does-trump-want-to-change-it/.
[25] Brian Fung, Biden Urges Congress to Pass Bipartisan Tech Legislation in WSJ op-ed, CNN (Jan. 11, 2023), https://www.cnn.com/2023/01/11/tech/biden-congress-tech-legislation/index.html.
[26] Brown, supra note 24.
[27] See Doe Through Roe v. Snap, Inc., 144 S. Ct. 2493 (2024) (Thomas, J., dissenting) (criticizing courts for interpreting § 230 to grant web-platforms immunity even for the website’s own misconduct).
[28] Id. at 2494.
[29] Id. at 2493.
[30] Id. at 2494.
[31] See id. at 2493 (“Because platforms organize users’ content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech”).
[32] Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024).
[33] Id. at 2394-97.
[34] See id.
[35] See id. at 2409 (emphasis added).
[36] See id. at 2409.
[37] See Doe Through Roe v. Snap, Inc., 144 S. Ct. 2493, 2494 (2024) (Thomas, J., dissenting) (“Because platforms organize users’ content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech. When it comes time for platforms to be held accountable for their websites, however, they argue the opposite”); see also Anderson v. TikTok, Inc., No. 22-3061, 2024 WL 3948248, at *1-*4 (3d Cir. Aug. 27, 2024) (drawing on the Moody decision in its analysis that TikTok’s curation of harmful content was its own expressive activity).
[38] Anderson, 2024 WL 3948248, at *3 (citing 47 U.S.C. § 230(c)(1)).
[39] For info on “for you pages” (FYPs), see Taylor Perry, What is the ‘For You Page’ and why it matters, Foundation (Apr. 26, 2024), https://foundationinc.co/lab/for-you-page/ (“Using sophisticated algorithms and machine learning, TikTok’s FYP continuously refines and updates its content recommendations based on your interactions, ensuring a captivating and dynamic browsing journey”).
[40] See id.
[41] See Zeran v. America Online, 129 F.3d 327 (4th Cir. 1997).
[42] See Jones v. Dirty World Ent. Recordings, 755 F.3d 398 (6th Cir. 2014).
[43] Anderson, 2024 WL 3948248, at *5 (Matey, J., concurring).
[44] Compare id. at *9 with Doe Through Roe, 144 S. Ct. at 2494.
[45] Anderson, 2024 WL 3948248, at *10.
[46] Id. at *10-*11.
[47] See 47 U.S.C. § 230(c).
[48] See Anderson, 2024 WL 3948248, at *10.
[49] See 47 U.S.C. § 230(c); see also Daniel Solove, Restoring the CDA Section 230 to What It Actually Says, TeachPrivacy (Feb. 4, 2021), https://teachprivacy.com/restoring-the-cda-section-230-to-what-it-actually-says/.
[50] See Kosseff, supra note 12 at 771 (“[§230 would] [p]rotect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem”) (quoting 104 Cong. Rec. H8470 (daily ed. Aug. 4, 1995) (statement of Rep. Cox)).
[51] Solove, supra note 49.
[52] See generally Stephanie Kulke, Social Media Algorithms Exploit How Humans Learn from Their Peers, Northwestern Now (Aug. 3, 2023), https://news.northwestern.edu/stories/2023/08/social-media-algorithms-exploit-how-humans-learn-from-their-peers/ (“the misalignment between the objective of social media algorithms, designed to boost user engagement, and functions of human psychology can lead to increased polarization and misinformation”); Filippo Menczer, Facebook whistleblower Frances Haugen testified that the company’s algorithms are dangerous, The Conversation (Oct. 7, 2021), https://theconversation.com/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420 (“[Former Facebook product manager Frances Haugen] called Facebook’s algorithms dangerous, said Facebook executives were aware of the threat but put profits before people, and called on Congress to regulate the company.”); KC Ifeanyi, Inside the Good, Bad, and Very Ugly of Social Media Algorithms, Fast Company (June 24, 2022), https://www.fastcompany.com/90761087/inside-the-good-bad-and-very-ugly-of-social-media-algorithms (“studies have shown that high levels of social media use have been linked to increased depression and anxiety in both teens and adults”).