Rooting Around in the Dark: Agencies Refusing to Comply with Dent Motions

Emily Burns


Introduction 

The Freedom of Information Act (“FOIA”) is the principal mechanism that allows people to request records held by agencies within the federal government.[1] In the immigration context, one very common type of FOIA request is for an A-File, the record of every interaction between a noncitizen and an immigration-related federal agency.[2]

Obtaining an A-File allows noncitizens in immigration proceedings and their lawyers to access information crucial to defending against deportation or gaining immigration benefits, such as dates of entry into and exit from the United States, copies of past applications submitted to federal agencies, and statements made to U.S. officials.[3] To obtain an A-File, noncitizens must affirmatively request the file through FOIA from an agency such as United States Citizenship and Immigration Services (USCIS) or Immigration and Customs Enforcement (ICE).[4] However, one carve-out to this process exists, available only in the Ninth Circuit: Dent motions.[5] Dent motions arise from Dent v. Holder, in which the Ninth Circuit recognized that the government violated Sazar Dent’s right to due process when it required Mr. Dent to request his A-File through FOIA rather than summarily handing the file over to him when he requested it in a prior court proceeding.[6]


Honesty is the Best (Privacy) Policy: The Importance of Transparency in Disclosing Data Collection for AI Training

Alexandra Logan


Introduction

This past July, the Federal Trade Commission (“FTC”), the Department of Justice, and a number of international antitrust enforcers issued a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The Joint Statement explains that “[f]irms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy . . . it is important that consumers are informed . . . about when and how an AI application is employed in the products and services they purchase or use.” The FTC can investigate alleged unfair and deceptive acts or practices (“UDAP”) under Section 5 of the FTC Act.[2] Consumers are looking for more ways to limit companies’ ability to collect and use their data for AI training,[3] and companies should be vigilant in keeping their privacy policies thorough and up to date. Doing so helps them avoid making deceptive or misrepresentative claims about the data they collect and what they do with it. Recently, X and LinkedIn have come under fire from consumers over their data collection practices and their ambiguous representations and omissions about how they use consumer data.


Profits Over Privacy: A Confirmation of Tech Giants’ Mass Surveillance and a Call for Social Media Accountability

Aysha Vear


In an effort to better understand the data collection and use practices of major social media and video streaming services (SMVSSs), the Federal Trade Commission issued orders to file Special Reports under Section 6(b) of the FTC Act[1] to nine companies in 2020.[2] The orders sought to understand how the companies collect, track, and use their consumers’ personal and demographic information; how they handle advertising and targeted advertising; whether they apply algorithms, data analytics, and artificial intelligence (AI) to consumer information; and how their practices impact children and teens.[3] Titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” the 2024 report was four years in the making. A key but unsurprising finding was that the targeted-advertising business model was the catalyst for extensive data gathering and harmful behaviors, and that companies failed to protect users, particularly children and teens.[4]

Data Practices and User Rights
Companies covered by the FTC report collected a large amount of data about consumers’ activity on their platforms and also gleaned information about consumers’ activity off the platforms, exceeding user expectations.[5] The Commission found that a massive amount of data was collected or inferred about users, including demographic information, user metrics, and data about their interactions with the network.[6] As for privacy settings, many companies did not collect any information at all about user changes or updates to their privacy settings on the SMVSSs.[7]

The information came from many sources as well. Some was input directly by SMVSS users themselves when creating a profile; some was passively gathered on or through engagement with the SMVSS; some was culled from other services provided by company affiliates or from other platforms; some was inferred through algorithms, data analytics, and AI; and some came from advertising trackers, advertisers, and data brokers. The data was used for many different purposes, including targeted advertising, AI, business purposes like optimization and research and development, enhancing and analyzing user engagement, and inferring or deducing other information about the user.[8] In addition, most companies deliberately tracked consumer shopping behaviors and interests.[9] Companies provided little transparency, if any, into the targeting, optimization, and analysis of user data.


Anderson v. TikTok: A New Challenge to § 230 Immunity

John Blegen


In August 2024, the Third Circuit overturned a Pennsylvania district court’s decision granting summary judgment to TikTok, reviving a suit brought by Tawainna Anderson.[1] Anderson sued on behalf of her deceased daughter Nylah, alleging products liability, negligence, and wrongful death claims after the ten-year-old died of self-asphyxiation after watching numerous videos TikTok routed to her For You page.[2] The videos, created by third parties and then uploaded to TikTok, encouraged users to choke themselves with “belts, purse strings, or anything similar,” as part of a viral “blackout challenge.”[3] Nylah’s mother found her daughter asphyxiated in the back of a closet after the ten-year-old had tried to recreate one such video.[4]

The District Court for the Eastern District of Pennsylvania originally dismissed Anderson’s complaint on the ground that TikTok was shielded from liability for content created by third parties under § 230 of the Communications Decency Act.[5] But on appeal, the Third Circuit rejected this argument, holding that while § 230 may protect social media platforms such as TikTok from suit over content provided by third-party users, in this case it was TikTok’s own algorithm that was the subject of the lawsuit.[6] This follows a recent Supreme Court decision, Moody v. NetChoice, which held that the algorithms of social media platforms may themselves be “expressive product” protected under the First Amendment and therefore subject to greater legal scrutiny.[7] In the court’s words: “Because the information that forms the basis of Anderson’s lawsuit – TikTok’s recommendations via its FYP algorithm – is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”[8]

Since the August ruling, commentators have noted how impactful this case could be for internet content regulation and the social media industry at large.[9] David French, a legal commentator at the New York Times, wrote, “Nylah’s case could turn out to be one of the most significant in the history of the internet.”[10] Leah Plunket, another legal scholar, speaking specifically on the ruling’s impact on companies’ legal counsel, said: “My best guess is that every platform that uses a recommendation algorithm that could plausibly count as expressive activity . . . woke up in their general counsel’s office and said, ‘Holy Moly.’”[11]


Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had over the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely has no one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI law, scholars and professionals are limited to discussing different theories of liability that may be suitable for AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, portraying Professor Lemley’s research as a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father and for the defamatory information. Traditional liability law is long established, with statutes and regulations in place and ample case law to support the structure we have created for it. However, AI transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false, or that the publisher exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative way to account for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]


Digitizing the Fourth Amendment: Privacy in the Age of Big Data Policing

Written by Charles E. Volkwein

ABSTRACT

Today’s availability of massive data sets, inexpensive data storage, and sophisticated analytical software has transformed the capabilities of law enforcement and created new forms of “Big Data Policing.” While Big Data Policing may improve the administration of public safety, these methods endanger constitutional protections against warrantless searches and seizures. This Article explores the Fourth Amendment consequences of Big Data Policing in three parts. First, it provides an overview of Fourth Amendment jurisprudence and its evolution in light of new policing technologies. Next, the Article reviews the concept of “Big Data” and examines three forms of Big Data Policing: Predictive Policing Technology (PPT); data collected by third parties and purchased by law enforcement; and geofence warrants. Finally, the Article concludes with proposed solutions to rebalance the protections afforded by the Fourth Amendment against these new forms of policing.


Life’s Not Fair. Is Life Insurance?

The rapid adoption of artificial intelligence techniques by life insurers poses increased risks of discrimination, and yet, regulators are responding with a potentially unworkable state-by-state patchwork of regulations. Could professional standards provide a faster mechanism for a nationally uniform solution?

By Mark A. Sayre, Class of 2024

Introduction

Among the broad categories of insurance offered in the United States, individual life insurance is unique in a few key respects that make it an attractive candidate for the adoption of artificial intelligence (AI).[1] First, individual life insurance is a voluntary product, meaning that individuals are not required by law to purchase it in any scenario.[2] As a result, to attract policyholders, life insurers must convince customers not only to choose their company over competitors but also to choose their product over other products competing for a share of discretionary income (such as the newest gadget or a family vacation). Life insurers can, and do, argue that these competitive pressures provide natural constraints on the industry’s use of practices that the public might view as burdensome, unfair, or unethical, and that such constraints reduce the need for heavy-handed regulation.[3]
