Profits Over Privacy: A Confirmation of Tech Giants’ Mass Surveillance and a Call for Social Media Accountability

Aysha Vear

 

In an effort to better understand the data collection and use practices of major social media and video streaming services (SMVSSs), the Federal Trade Commission issued orders to file Special Reports under Section 6(b) of the FTC Act[1] to nine companies in 2020.[2] The orders sought to understand how the companies collect, track, and use their consumers’ personal and demographic information; how they handle advertising and targeted advertising; whether they apply algorithms, data analytics, and artificial intelligence (AI) to consumer information; and how their practices impact children and teens.[3] Titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” the resulting 2024 report was four years in the making. A key but unsurprising finding was that the targeted-advertising business model drove extensive data gathering and harmful practices, and that companies failed to protect users, particularly children and teens.[4]

Data Practices and User Rights
Companies involved in the FTC report collected a large amount of data about consumers’ activity on their platforms and also gleaned information about consumers’ activity off the platforms, exceeding user expectations.[5] The Commission found that a massive amount of data was collected or inferred about users, including demographic information, user metrics, and data about their interactions with the network.[6] With respect to privacy settings specifically, many companies did not collect any information at all about user changes or updates to their privacy settings on the SMVSSs.[7]

The information also came from many places. Some information was directly input by the SMVSS user when creating a profile; passively gathered on or through engagement with the SMVSS; culled from other services provided by company affiliates or other platforms; inferred through algorithms, data analytics, and AI; or obtained from advertising trackers, advertisers, and data brokers. The data collected was used for many different purposes, including targeted advertising, AI, business purposes like optimization and research and development, enhancing and analyzing user engagement, and inferring or deducing other information about the user.[8] In addition, most companies deliberately tracked consumer shopping behaviors and interests.[9] Little transparency, if any, was provided on the targeting, optimization, and analysis of user data.

Continue reading

Anderson v. TikTok: a New Challenge to § 230 Immunity

John Blegen

 

In August 2024, the Third Circuit overturned a Pennsylvania district court’s decision dismissing a suit brought against TikTok by Tawainna Anderson.[1] Anderson sued on behalf of her deceased daughter Nylah, alleging products liability, negligence, and wrongful death claims after the ten-year-old died of self-asphyxiation, having watched numerous videos TikTok routed to her “For You” page.[2] The videos, created by third parties and then uploaded to TikTok, encouraged users to choke themselves with “belts, purse strings, or anything similar,” as part of a viral “blackout challenge.”[3] Nylah’s mother found her daughter asphyxiated in the back of a closet after the ten-year-old had tried to recreate one such video.[4]

The District Court for the Eastern District of Pennsylvania originally dismissed Anderson’s complaint on the grounds that TikTok was shielded from liability for content created by third parties under § 230 of the Communications Decency Act.[5] On appeal, the Third Circuit rejected this defense, holding that while § 230 may protect social media platforms such as TikTok from suit over content provided by third-party users, in this case it was TikTok’s own algorithm that was the subject of the lawsuit.[6] This follows a recent Supreme Court decision, Moody v. NetChoice, which held that the algorithms of social media platforms may themselves be “expressive product” protected under the First Amendment and therefore subject to greater legal scrutiny.[7] In the court’s words: “Because the information that forms the basis of Anderson’s lawsuit – TikTok’s recommendations via its FYP algorithm – is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”[8]

Since the August ruling, commentators have noted how impactful this case could be for internet content regulation and the social media industry at large.[9] David French, a legal commentator and New York Times columnist, wrote, “Nylah’s case could turn out to be one of the most significant in the history of the internet.”[10] Leah Plunkett, another legal scholar, spoke specifically to the ruling’s impact on companies’ legal counsel: “My best guess is that every platform that uses a recommendation algorithm that could plausibly count as expressive activity . . . woke up in their general counsel’s office and said, ‘Holy Moly.’”[11]

Continue reading

Privacy Needs Security, Security Needs Privacy

William O’Reilly

 

I. Introduction

Security Operations Centers (SOCs) at enterprises across the country are in need of professionals. They need professionals to fill the roles that already exist, and they need to add roles to deal with the changing regulatory landscape. For an enterprise, the best practice is an investment in “people, process, and technology.”[1] It is true that people are the most expensive part of an SOC.[2] However, the shortage exists not because enterprises around the US are skimping on labor; there simply are not enough trained professionals. The training to become a cybersecurity professional is neither easy nor cheap. Enterprises are endangered by this absence of professionals, and it may be worth it for them to shoulder the cost of education and certification in pursuit of their goal of self-preservation. One cost the enterprise will have to face in hiring professionals is the establishment of career potential and pay. There is also an ongoing cost for organizations that need to provide training to level up their employees over time.[4] Training also assists with retention of personnel, making it a necessary cost to the enterprise.[5] Finally, burgeoning privacy laws create burdens and liabilities that the SOC in its present form is only partially equipped to handle. Fortunately, over 20% of enterprises plan to increase their investment in cybersecurity post-breach.[6] That investment should include privacy professionals.

Potential employees face costs associated with education and skill development. The cost of training, education, and certifications can be a barrier for professionals entering the cybersecurity industry. No two SOCs will have the same composition or volume of work, but most SOC services demand that certain roles be filled by professionals with specific training. Legislation is also demanding those roles be filled.[7] Each of these professions has specific responsibilities, which require specific skills, and each of those skills can be represented through certifications.[8] Each of these certifications has a cost. Laying out this cost may illustrate one reason for the dearth of skilled professionals and may show an enterprise the value that a professional expects to get out of their investment.

Continue reading

Google’s New AI-Powered Customer Service Tools Spark Back-to-Back Class Action Lawsuits

Zion Mercado 

 

Google recently began rolling out “human-like generative AI powered” customer service tools to help companies enhance their customer service experience.[1] This new service is known as the “Cloud Contact Center AI,” and touts a full package of customer service-based features to help streamline customer service capabilities.[2] Companies that utilize the new service can create virtual customer service agents, access AI-generated insights providing feedback on customer service interactions, store and manage data on a specialized “Contact Center AI Platform,” and consult with Google’s team of experts on how to improve the AI-integrated systems.[3] However, one key feature that has recently come into controversy is the ability to generate real-time, AI-suggested responses to customer inquiries, which a live agent can then relay back to the customer.[4] This is known as the “Agent Assist” feature.

Agent Assist operates by “us[ing] machine learning technology to provide suggestions to . . . human agents when they are in a conversation with a customer.”[5] These suggestions are based on the company’s own data and conversations.[6] Functionally, when Agent Assist is in use, there are two parties to the conversation: the live customer service agent and the customer. The AI program listens in and generates responses in real time for the live customer service agent. Some have argued that this practice violates California’s wiretapping statute, alleging that the actions of Google’s AI program, which is nothing more than a complex computer program, are attributable to Google itself.[7] Those making this argument allege that Google, through its AI-integrated services, has been listening in on people’s conversations without their consent or knowledge.[8]
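To make the data flow underlying these allegations concrete, here is a minimal sketch of how an agent-assist pipeline could be structured. It is purely illustrative and does not use Google’s actual Cloud Contact Center AI API; the SuggestionEngine class, its observe and suggest_reply methods, and the sample dialogue are all invented for this example. The structural point is that the customer and the live agent are the only parties to the conversation, while a third component receives every customer utterance and returns suggested replies.

```python
# Hypothetical sketch of an "agent assist" flow.
# This is NOT Google's Cloud Contact Center AI API; every name below is
# invented for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SuggestionEngine:
    """Stands in for the AI service that 'listens' to the conversation."""

    transcript: List[str] = field(default_factory=list)

    def observe(self, speaker: str, utterance: str) -> None:
        # Every utterance is forwarded to the engine even though the engine
        # is not a party to the call -- the step plaintiffs characterize as
        # unauthorized interception.
        self.transcript.append(f"{speaker}: {utterance}")

    def suggest_reply(self) -> str:
        # A real system would run a language model over the transcript;
        # this toy version returns a canned suggestion.
        last = self.transcript[-1] if self.transcript else ""
        return f"Suggested reply to '{last}': I understand -- let me look into that."


def handle_call() -> None:
    engine = SuggestionEngine()

    # The customer speaks; the live agent hears it, and so does the engine.
    engine.observe("customer", "My bill is higher than last month.")

    # The engine drafts a response the human agent can read back verbatim.
    print(engine.suggest_reply())


if __name__ == "__main__":
    handle_call()
```

Whether that third component’s role amounts to a non-party “listening in” without consent is precisely the question the CIPA plaintiffs raise.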

The wiretapping statute in question is part of the California Invasion of Privacy Act (“CIPA”) and prohibits the intentional tapping, reading, or any other unauthorized connection, whether physical or otherwise, with any communication being transmitted via line, wire, cable, or instrument without the consent of all parties to the communication.[9] It is also unlawful under the statute to communicate any information so obtained or to aid another in obtaining information via prohibited means.[10]

In 2023, a class action lawsuit was filed against Google on behalf of Verizon customers who alleged that Google “used its Cloud Contact Center AI software as a service to wiretap, eavesdrop on, and record” calls made to Verizon’s customer service center.[11] In that case, District Court Judge Rita F. Lin granted Google’s motion to dismiss on the grounds that the relationship between Google and Verizon, and the use of the Cloud Contact Center AI service, fell squarely within a statutory exception to the wiretapping statute.[12] The wiretapping statute does contain an explicit exception for telephone companies and their agents, which is the exception upon which Judge Lin relied; however, that exception is limited to acts that “are for the purpose of construction, maintenance, conduct or operation of the services and facilities of the public utility or telephone company.”[13]

Continue reading

House Rules: Addressing Algorithmic Discrimination in Housing through State-Level Rulemaking

William Simpson

 

Introduction

As is the case for many federal agencies,[1] the Department of Housing and Urban Development (HUD) is intent on addressing the risk of algorithmic discrimination within its primary statutory domain—housing. But in the wake of Loper Bright,[2] which overturned Chevron[3] deference, and with it the general acquiescence of federal courts to agency interpretations of relevant statutes, HUD is forced to regulate AI and algorithmic decision-making in the housing context through guidance documents and other soft law mechanisms.[4] Such quasi-regulation impairs the efficacy of civil rights laws like the Fair Housing Act[5] (FHA) and subjects marginalized groups to continued, and perhaps increasingly insidious,[6] discrimination. With HUD crippled in its ability to effectuate meaningful AI regulation, states like Maine—which remains a Chevron state—must step up within their respective jurisdictions to ensure that algorithmic discrimination is mitigated in the housing sector.

 

A Brief Primer on Chevron and Loper Bright

In 1984, the Supreme Court held that where a “statute is silent or ambiguous with respect to a specific issue . . . a [federal] court may not substitute its own construction of [the statute] for a reasonable interpretation made by the administrator of an agency.”[7] In other words, where an agency interpretation of an ambiguous statute is reasonable, a court must defer to the agency. Proponents of Chevron deference have heralded the opinion for its placement of policy decisions in the hands of expert and politically accountable agencies,[8] whereas detractors deemed it a violation of the separation of powers doctrine.[9] In June 2024, the detractors won out.

“Chevron is overruled,” wrote Chief Justice John Roberts.[10] To wit, “courts need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”[11] Roberts rested his opinion on the separation of powers principle,[12] a textualist construction of § 706 of the Administrative Procedure Act,[13] a historical analysis,[14] the assurance of Skidmore deference,[15] and the fact that Chevron had been subject to numerous “refinements” over the years.[16]

It goes without saying that this jurisprudential U-turn has profound implications for HUD and the statutes it implements.[17] As a result of Chevron’s demise, “any rulemaking proposed by HUD . . . may be more vulnerable to lawsuits than in years past.”[18] Namely, HUD relies on the FHA to authorize its policies, which “broadly describes . . . prohibited discriminatory conduct,” and which HUD interprets “into enforceable directives to serve Congress’ stated goals.”[19] Without Chevron deference, HUD’s interpretations of the FHA are certain to be questioned, and significant barriers for Americans facing housing discrimination will arise.[20]

 

HUD’s Effort to Combat Algorithmic Discrimination in a Post-Chevron Paradigm

In apparent anticipation of such challenges to its interpretations, HUD has resorted to soft law mechanisms like guidance documents to combat algorithmic discrimination. Importantly, these informal mechanisms do not carry the force of law, and are therefore outside the scope of Chevron deference and unaffected by the Loper Bright decision.[21] Such documents include HUD’s “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,”[22] and “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms.”[23] The former pronouncement examines how housing providers and tenant screening services can evaluate rental applicants in a nondiscriminatory way—including by choosing relevant screening criteria, using accurate records, remaining transparent with applicants and allowing them to challenge decisions, and designing screening models for FHA compliance.[24] Of note, the document confirms that the FHA “applies to housing decisions regardless of what technology is used” and that “[b]oth housing providers and tenant screening companies have a responsibility to avoid using these technologies in a discriminatory manner.”[25]

The latter document, in turn, “addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence . . . to facilitate advertisement targeting and delivery” vis-à-vis housing-related transactions.[26] Like tenant screening services, algorithmic targeting and delivery of advertisements “risks violating the [FHA] when used for housing-related ads,” and can implicate both advertisers and ad platforms.[27] For example, liability may arise from using algorithmic tools to “segment and select potential audiences by [protected] category,” “deliver ads only to a specified ‘custom’ audience,” or “decide which ads are actually delivered to which consumers, and at what location, time, and price.”[28] The document recommends that advertisers use ad platforms that proactively mitigate discriminatory practices and that they “monitor outcomes of ad[] campaigns for housing-related ads.”

Indeed, “[w]hile the guidance represents an important step forward in safeguarding housing rights, it isn’t currently more than a suggestion to housing providers.”[29] Hence the dilemma facing regulators in this post-Chevron paradigm: issue a formal rule that will provide the intended protection but is prone to litigation, or deliver informal pronouncements that remain largely immune to challenge but fail to offer enforceable requirements against harmful practices.[30] As this administrative predicament persists, it is state governments, including Maine, that must fill the resulting void.

Continue reading

State Data Privacy & Security Law as a Tool for Protecting Legal Adult Use Cannabis Consumers and Industry Employees

By: Nicole Onderdonk

1. Introduction

The legalization of adult use cannabis[1] at the state level, its continued illegality at the federal level, and the patchwork of privacy regulations in the United States have generated interesting academic and practical questions around data privacy and security.[2] At risk are the consumers and employees participating in the legal recreational cannabis marketplace—particularly their personal information.[3] For these individuals, the risks of unwanted disclosure of their personal information and the potential adverse consequences associated with their participation in the industry vary significantly depending on the state in which they are located.[4] Further, while these are distinct risks, the unwanted disclosure of personal information held by cannabis market participants may significantly increase the degree and likelihood of an individual experiencing adverse employment-related consequences due to recreational cannabis use. Therefore, data privacy and security laws can and should be deployed by states as a tool to protect not only legal adult use cannabis consumers’ and employees’ personal information, but also their interests and rights more broadly related to their participation in the legal cannabis market.

Privacy law and cannabis law are both arenas where states are actively engaged in their roles in the federalist system as “laboratories of democracy.”[5] The state-by-state approaches to protecting consumer and employee data privacy and to legalizing recreational cannabis have taken many shapes and forms, akin to other areas of the law where federal law is absent or silent. This divergence may create problems and concerns,[6] but it also may reveal novel solutions. Regarding the personal data of recreational cannabis consumers and industry employees, the strongest solution that emerges from an analysis of the current state-by-state legal framework is a hybrid one—taking the most successful aspects of each state’s experimentation and deploying them to protect legal adult use cannabis market participants from collateral adverse consequences.

Continue reading

Privacy Concerns with Health Care Providers’ Use of Personal Devices for Medical Images

By: Deirdre Sullivan

Last year I had to go to urgent care for a second-degree burn on my chest after spilling boiling hot tea on myself. I was surprised when the provider took a photo of my burn, in a relatively sensitive area, with her own cell phone to upload to my medical file. Seeing my surprise, she assured me that this was done through a secure application and that the photo of my chest was not actually stored on her phone.

 

The following week, my primary care provider did the same thing to continue tracking the burn’s progress. I also expressed the same concerns, and she went further by showing me that the photo was not stored on her camera roll.

 

While I trusted these two female providers, I was still skeptical and imagined all the ways that this could go wrong for a patient. The practice of using personal devices for imaging is ripe for abuse, and this blog post will explore potential harms to patients as well as liability for health care providers.

 

Patients have a reasonable expectation that their images will not be shared beyond what is necessary to provide care, and it is without dispute that the practice of using personal devices to photograph patients violates this expectation. There is a tension here between the privacy interests of the patient being photographed and the business needs of the healthcare entity, which wants to reduce the cost of keeping devices on hand for providers while increasing access to devices for taking pictures, whether to document injuries in the medical file or to share them with other providers for a consult.

 

First, there are two different possibilities for how an image could be captured and stored on a provider’s cell phone. The provider could directly take the image without using a secure app, storing it on their phone for purposes of a consult with another provider, or the provider could deceptively take an image under the guise of using a secure app and then hide it from the patient. This could easily happen by a provider switching between a secure healthcare app and their own camera app to take a photo, then concealing it from the patient by showing them the last photo in an album rather than the last photo in their camera roll. A provider could even take a screenshot of a sensitive photo from within the secure app.

 

In either scenario it would be extremely difficult for the patient to catch the violation of their privacy. Most often these photos are not of faces, making them difficult to identify and track once they leave the provider’s phone, whether through intentional sharing or through the phone being stolen or hacked. Further, patients are at a disadvantage and may not know to worry about improper photos being taken, or that sensitive photos may be stored on their provider’s phone and distributed to other persons.

Continue reading

A Balancing Act: The State of Free Speech on Social Media for Public Officials

By: Raaid Bakridi

1. Introduction

Blocking someone on social media often seems inconsequential since it’s a digital medium and people do it every day.[1] However, the U.S. Supreme Court has an alternative view, especially when the person who commits the act is a public official. The Court held that, in some instances, public officials can be liable for First Amendment violations when they block anyone from their social media page. Writing for the majority, Justice Barrett adopted a two-prong test for instances involving public officials and their social media accounts because distinguishing between on- and off-the-job activity is frequently a “difficult [line] to draw”[3] and a “fact-intensive inquiry.”[4] The distinction, according to Justice Barrett, “turns on substance, not labels.”[5] But this isn’t the first time the Court has been asked to weigh in on cases where public officials block their critics on social media, cases which by nature involve possible First Amendment and public forum concerns.

 

2. Background

Former State Assemblyman Dov Hikind filed a lawsuit against Congresswoman Alexandria Ocasio-Cortez for blocking him on Twitter, now known as X. Hikind claimed that the Congresswoman violated his First Amendment rights by blocking him and other individuals critical of her. This raises concerns about politicians’ and public officials’ use of social media and its implications for free speech. Several lower courts have dealt with similar social media blocking issues, each applying a different approach and producing a split in authority among the federal courts of appeals. When confronted with the issue of blocking, the Second, Fourth, Fifth, Sixth, Eighth, and Ninth Circuits have all used variations of two tests: a totality of the circumstances approach or an appearance-focused approach.[10]

In 2021, the Supreme Court confronted a similar issue involving the then-sitting President of the United States, Donald Trump. A group of individuals, including the Knight First Amendment Institute, filed a lawsuit against the President,[11] alleging that their First Amendment rights were violated after they were blocked for criticizing his policies. The District Court agreed,[12] and the Second Circuit upheld the decision.[13] Following this, President Trump petitioned the Supreme Court for review.[14] After eleven consecutive conferences on the case, the Court vacated the judgment and sent the case back to the Second Circuit with instructions to dismiss it as moot.[15]

Although no majority opinion was offered, Justice Thomas wrote a detailed concurrence that essentially “highlights the principal legal difficulty that surrounds digital platforms—namely, that applying old doctrines to new digital platforms is rarely straightforward.”[16] Justice Thomas further noted that the case highlights two important facts: “[t]oday’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors … We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”[17] Justice Thomas concluded that the Trump case was not the right one in which to do so,[18] but that the Court will have to address constitutional constraints on privately owned digital platforms sooner or later.

Continue reading

Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had in the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to effectively govern these modern tools. With the implementation and widespread usage of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI law, scholars and professionals are limited to discussing different theories of liability that may be suitable for AI, such as strict liability and negligence law.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, falsely describing Professor Lemley’s research as a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father or for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. However, AI transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative way to account for liability. A guide of best practices may be helpful in directing AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Continue reading

The Application of Information Privacy Frameworks in Cybersecurity

By Dale Dunn

PDF LINK

INTRODUCTION

The frequency of cyberattacks is increasing exponentially, with human-driven ransomware attacks more than doubling in number between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to compromise legitimate accounts of their target’s employees or the accounts of their target’s third-party service providers’ employees.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combatting these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, in combination with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD will not be sufficient on its own to prevent threat actors from succeeding. The quantity and availability of personal information available today enables threat actors to efficiently bypass security measures.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section covers the FIPPs and PBD. The fourth section provides a case study in which a threat actor uses social engineering to conduct cyberattacks. Finally, the fifth section recommends measures companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper will show information privacy principles and methodologies that should be implemented to reduce the risk of cybersecurity attacks.

Continue reading