Profits Over Privacy: A Confirmation of Tech Giants’ Mass Surveillance and a Call for Social Media Accountability
Aysha Vear
In an effort to better understand the data collection and use practices of major social media and video streaming services (SMVSSs), the Federal Trade Commission issued orders to file Special Reports under Section 6(b) of the FTC Act[1] to nine companies in 2020.[2] The orders sought to understand how the companies collect, track, and use their consumers’ personal and demographic information; how they handle advertising and targeted advertising; whether they apply algorithms, data analytics, and artificial intelligence (AI) to consumer information; and how their practices impact children and teens.[3] The resulting 2024 report, “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” was four years in the making. Its key, though unsurprising, finding was that the targeted-advertising business model drove extensive data gathering and harmful behaviors, and that the companies failed to protect users, particularly children and teens.[4]
Data Practices and User Rights
The companies covered by the FTC report collected a large amount of data about consumers’ activity on their platforms and also gleaned information about consumers’ activity off the platforms, far exceeding user expectations.[5] The Commission found that a massive amount of data was collected or inferred about users, including demographic information, user metrics, and data about their interactions with the network.[6] With respect to privacy settings specifically, many companies did not collect any information at all about user changes or updates to their privacy settings on the SMVSSs.[7]
The information came from many sources as well. Some of it was input directly by SMVSS users themselves when creating a profile; some was passively gathered on or through engagement with the SMVSS; some was culled from other services provided by company affiliates or from other platforms; some was inferred through algorithms, data analytics, and AI; and some came from advertising trackers, advertisers, and data brokers. The data collected was used for many different purposes, including targeted advertising, AI, business functions such as optimization and research and development, enhancing and analyzing user engagement, and inferring or deducing other information about the user.[8] In addition, most companies deliberately tracked consumer shopping behaviors and interests.[9] Little transparency, if any, was provided about the targeting, optimization, and analysis of user data.
Transparency and Accountability
A disturbing trend was the companies’ use of data obtained from third parties on both users and non-users[10] and their general inability to identify exactly where that data came from. No single company was able to provide a comprehensive list of all the third-party entities with whom it shared information.[11] Most often the sharing was to facilitate the operation and function of the site, but information was sometimes shared with service providers and vendors for data analytics, a concept that was vaguely and inconsistently defined. When sharing with affiliates or branded entities, the companies had no additional contracts and instead relied on their own policies to protect privacy. Information shared with outside third parties, in contrast, was governed by standard contractual language that was not tailored to the circumstances and might not be sufficient to protect privacy. To make matters worse, no company reported conducting audits or any due diligence to ensure compliance,[12] nor did any have a formal vetting and approval process in place prior to sharing.[13] Efforts by the companies to limit the collection, use, disclosure, and retention of the information were also varied and inconsistent. The companies reported that they collected only what was necessary, but few implemented deidentification, anonymization, or pseudonymization, or gave users control over the management of their data, indicating minimal efforts to minimize collection or protect information.[14] When asked for actual minimization policies, companies provided either vague policies or none at all.
Despite having a business model built on maximizing the data collected to drive revenue, all of the companies surveyed claimed to have written data deletion and retention policies. However, only half could produce written policies specifically addressing retention and deletion; the rest supplied catchall policies, guidelines, and other written documents.[15] In other words, although companies claim to uphold data minimization, retention, and deletion principles, in practice they do not do so as consistently or comprehensively as consumers believe.
Section 230 and the FTC Report
Section 230 of the Communications Decency Act is the common vehicle for individuals suing an online platform for harm caused by third-party content. Jeff Kosseff, a scholar on the statute, claims that Section 230 “created the internet as we know it”[16] through its “Good Samaritan” subsection, which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[17] Section 230 essentially allows platforms to host user-generated content without being held liable for what users post. The underlying distinction is between publisher liability, under which a platform that acts as the publisher of third-party content can be held negligently liable for it, and distributor liability, under which a platform that simply distributed the content is liable only if it knew or had reason to know the information was defamatory.[18] In any case, the key question is that of editorial control.
“By preventing any online platform from being ‘treated as the publisher or speaker’ of third-party content,” one scholar notes, “it has enabled the business models of the technology giants that dominate the digital public sphere.”[19] Platforms have since been able to grow quickly, avoid litigation, and profit from the ever-growing data produced by user interaction with content and advertising. The complication with Section 230 is that it was originally created to combat pornography online, yet the liability protection it grants social media platforms “also shields social media companies from external input on their algorithms and platform design,”[20] since algorithms have been viewed as a form of content moderation. The language of Section 230 can be interpreted broadly to absolve a platform of all liability for third-party content, or narrowly to allow liability in specific contexts,[21] but some suggest that Section 230 immunity is still too broad.[22]
The FTC findings discussed above are symptoms of the Section 230 immunity enjoyed by large social media platforms. Because platforms could grow quickly and profit from data sharing while virtually unregulated, a gap has developed in accountability for data practices and the protection of consumer rights. Platforms can maximize the data they collect and profit from while remaining largely protected, since their services are free and rely on user-generated content and interaction. Furthermore, because of its age, Section 230 does not directly address transparency or privacy in an increasingly connected world. This heightens the need for regulation to ensure transparency from platforms about the data they collect and use, as algorithms and AI become ever more prevalent in daily life.
A New Wave of Section 230 Claims
Section 230 was enacted in 1996, but whether it fully absolves distributors of liability has not been squarely before the courts in years.[23] So far, it “has offered broad immunity to social media companies for the last quarter-century,” but there are calls from all branches of government to amend the scope of the shield,[24] and what such an amendment would look like remains uncertain. More recent legal debates around Section 230 have focused on whether platforms are liable when they promote illegal or harmful content, for example through algorithmic recommendations[25] or by verifying an official profile via verification badges.[26]
In Gonzalez v. Google LLC,[27] the Ninth Circuit held that a website is not liable for third-party content supplied by neutral algorithmic recommendation tools based on user inputs.[28] The family of California college student Nohemi Gonzalez sued Google, arguing that by promoting ISIS content to users through its algorithms on YouTube, it indirectly aided ISIS in carrying out the 2015 Paris attack that killed their daughter.[29] The claim was barred in the district and circuit courts, but the Supreme Court heard oral arguments in 2023. The inquiry “hinged on what it means for a legal claim to ‘treat[]’ a platform ‘as the publisher’ of third-party content.”[30] The family argued for a narrow reading of Section 230 focused on publisher liability, one that “seeks to hold the defendant for harms that go beyond mere transmission.”[31] The Court ultimately “punted” on the Section 230 issue in Gonzalez and held that the plaintiffs’ tort claim failed. But despite the wariness and confusion in allocating distributor liability to social media platforms and their algorithms, the question remains.
This reconceptualization of Section 230 in the modern age was renewed by litigation against the popular video service YouTube in California. In Wozniak v. YouTube,[32] Apple co-founder Steve Wozniak had an official YouTube channel that was compromised and later used to promote a Bitcoin scam. The Santa Clara County Superior Court dismissed most of Wozniak’s claims based on Section 230. But the case has a second chance: the California Court of Appeal, Sixth District, reversed the lower court’s ruling this year, holding that the claim that Google and YouTube “materially contributed” to the harm by issuing a verification badge to the compromised channel, thereby promoting the scam, could fall outside Section 230’s applicability.[33] The court maintained that “existing precedent holds that where a website operator either creates its own content or requires users to provide information and then disseminates it, thereby materially contributing to the development of the unlawful information, it may be considered responsible for that information, and thus . . . an ‘information content provider.’”[34] The plaintiffs were granted leave to amend the claim. In essence, YouTube may not be able to fully shield itself from liability under Section 230 of the Communications Decency Act.
While the future for Section 230 remains unclear, agencies like the FTC and courts across the country will need to deal with the constraints of the existing doctrine. The Gonzalez and Wozniak cases highlight a potential avenue for reform and recalibration of Section 230 and could necessitate greater accountability and transparency for large tech companies whose business is in consumer data. Without the liability shield of Section 230, “platforms would either be open to massive amount of litigation cost or would have to sharply curtail how much user content they host, thus leading to potentially lower user ‘engagement.’”[35]
Conclusion
Courts have shown that they will require plaintiffs to contort themselves into pretzels to hold big technology companies accountable for the harms they perpetuate while claiming to protect consumers through privacy policies and practices. Nevertheless, the findings of the FTC report, coupled with recent case law involving large social media platforms, underscore the need, from industry and citizenry alike, for guidance and regulation. Otherwise, as the FTC’s key findings make clear, businesses can (and are incentivized to) continue unchecked in amassing troves of personal information that they can sell to the highest bidder. As Section 230’s future hangs in limbo and the debate over content liability continues, the FTC report shows that a similar discussion surrounds platforms’ responsibility for data privacy with respect to the information consumers entrust to them daily. Platforms today are both moderators of the content they host and collectors of massive amounts of user data, the manipulation of which via algorithms has harmful and invasive effects. While the FTC report is a snapshot of the industry at a particular time, the pervasiveness of data collection it documents has only grown since the orders were issued. Emerging technology and AI continue to complicate the problem. Comprehensive reform, whether through an amendment to Section 230 or a federal privacy regulation, will necessitate change in the way platforms operate and the business models on which they rely.
[1] Press Release, Fed. Trade Comm’n, FTC Issues Orders to Nine Social Media and Video Streaming Services Seeking Data About How They Collect, Use, and Present Information (Dec. 14, 2020), https://www.ftc.gov/news-events/news/press-releases/2020/12/ftc-issues-orders-nine-social-media-video-streaming-services-seeking-data-about-how-they-collect-use. (“The FTC is issuing the orders under Section 6(b) of the FTC Act, which authorizes the Commission to conduct wide-ranging studies that do not have a specific law enforcement purpose.”).
[2] Staff Report, Fed. Trade Comm’n, A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services (Sept. 19, 2024) at 8-9 (“The companies included were Amazon.com, Inc., which owns the gaming platform Twitch; Facebook, Inc. (now Meta Platforms, Inc.); YouTube LLC; Twitter, Inc. (now X Corp.); Snap Inc.; ByteDance Ltd., which owns the video-sharing platform TikTok; Discord Inc.; Reddit, Inc.; and WhatsApp Inc.”).
[3] Id.
[4] Id.; Lena Cohen, FTC Report Confirms: Commercial Surveillance is Out of Control, Electronic Frontier Foundation, Sept. 26, 2024.
[5] Cohen, supra note 4.
[6] Staff Report, Fed. Trade Comm’n, supra note 2 at 17-18.
[7] Id. at 18.
[8] Id. at 19.
[9] Id. at 20.
[10] Id. at 23.
[11] Id. at 25.
[12] Id. at 28.
[13] Id. at 29.
[14] See id. at 30.
[15] See id. at 31.
[16] See Matthew F. Carlin, Real Harm to Real People: A Restorative Justice Theory for Social Media Accountability, 51 N. Ky. L. Rev. 145, 160 (2024).
[17] Id.
[18] See Alan Z. Rozenshtein, Interpreting the Ambiguities of Section 230, 41 Yale J. on Reg. Bull. 60, 63-64 (2024).
[19] Id. at 61.
[20] See Carlin, supra note 16 at 147.
[21] See Rozenshtein, supra note 18 at 63.
[22] See Carlin, supra note 16 at 161.
[23] See Rozenshtein, supra note 18 at 70-71.
[24] Carlin, supra note 16 at 179.
[25] Rozenshtein, supra note 18 at 71.
[26] Ethan Baron, Apple co-founder Steve Wozniak wins latest round in lawsuit vs. YouTube over Bitcoin scam, The Seattle Times (Mar. 21, 2024), https://www.seattletimes.com/business/apple-co-founder-steve-wozniak-wins-latest-round-in-lawsuit-vs-youtube-over-bitcoin-scam/.
[27] Gonzalez v. Google LLC, 2 F.4th 871, 892 (9th Cir. 2021).
[28] Id. at 895.
[29] Rozenshtein, supra note 18 at 72.
[30] Id.
[31] Id.
[32] Wozniak v. YouTube, LLC, 319 Cal. Rptr. 3d 597, 622 (Cal. Ct. App. 2024), as modified on denial of reh’g (Apr. 2, 2024).
[33] Baron, supra note 26.
[34] Wozniak, 319 Cal. Rptr. 3d at 624.
[35] Rozenshtein, supra note 18 at 80.