AI, a Watchful Eye: The Less than Stellar Performance of AI Security and the Consequences Thereof

James Hotham

 

The use and abuse of widespread camera surveillance is not a novel fear; media has explored the concept for decades. A new threat has arisen, however, and it comes not from an oppressive government, a terrorist group, or a supreme artificial intelligence, but from private security providers. Several of these providers have begun building AI into their security cameras for threat detection.[1] The success of these threat-detection models, however, is dubious. In late October of this year, one such system, installed in a Baltimore school, detected an individual carrying a firearm.[2] Police arrived and identified the suspect as sixteen-year-old Taki Allen.[3] But after the police drew their weapons and handcuffed young Allen, they discovered the “firearm” was actually just an empty bag of Doritos.[4]

Although AI technology of this sophistication is relatively new, it has grown into a multimillion-dollar industry in just a few years. Yet despite years of development, mishaps like these still occur. This article explores how these systems work, why they malfunction, how consumers can guard against these malfunctions, and who may be liable when they occur.

Continue reading

The Collapse of Capability Theory: Ambriz, Popa, and the Future of Article III Standing in AI Privacy Cases

Caroline Aiello

 

Introduction

In February 2025, the Northern District of California denied Google’s motion to dismiss a class action alleging that Google’s artificial intelligence (“AI”) tools violated the California Invasion of Privacy Act (“CIPA”) by transcribing users’ phone calls.[1] The court in that case, Ambriz v. Google, ruled that Google’s technical “capability” to use customer call data to train its AI models was enough to state a claim under CIPA, regardless of whether Google actually exploited that data.[2] Six months later, the Ninth Circuit took the opposite approach: in Popa v. Microsoft, it held that routine website tracking did not constitute actual harm and dismissed the claims for lack of Article III standing before reaching the merits.[3]

These two decisions leave privacy law with incompatible standards. Ambriz asks what a technology could do with personal data and finds liability in that potential. Popa demands proof of what a technology actually did and requires concrete injury beyond the act itself. A collision between the two theories is inevitable. When a plaintiff sues an AI company under Ambriz’s capability theory, alleging that the defendant’s system has the technical ability to misuse data, and the defendant responds with a Popa-based standing challenge, courts will face an impossible choice. The capability to cause harm is not harm itself, and if capability cannot satisfy Article III’s concrete-injury requirement, then Ambriz’s approach becomes constitutionally unenforceable in federal court. While Popa has not technically overruled Ambriz, the Ninth Circuit will inevitably need to choose which standard to adopt.

Continue reading

The Privacy Parlay: How Data Mining and Targeted Ads Drive Gambling Addiction

Emily Weisser

 

I. Introduction

In the digital age, the gambler is not just the person placing the bets; they are also the data being wagered on. Every click, swipe, and deposit becomes part of a high-stakes game where the house rarely loses. Much like a parlay bet, where every leg must hit for the gambler to win, the modern gambling industry relies on data collection and targeted advertising to increase the number of returning customers, boosting its own profits while building a predictive framework that treats users as inputs rather than individuals. In this “privacy parlay,” the odds are overwhelmingly in favor of the house: the gambling operator.

The first leg of this parlay is the mining of consumer data, drawn from government-mandated identity verification information and voluntary interactions. Operators combine this data to build comprehensive behavioral profiles. The second leg involves monetizing this data through micro-targeted advertising, designed to exploit psychological vulnerabilities and nudge users toward repeated engagement. The third leg uses these insights to promote repeat play, conflating addiction with ordinary customer loyalty.

Despite the immense power of this system, the current regulatory landscape offers fragmented, inconsistent consumer protection, leaving critical gaps in oversight. This essay explores data-driven gambling in the post-Professional and Amateur Sports Protection Act (“PASPA”) era and argues that a unified federal framework is necessary to regulate the privacy parlay, ensuring that data-driven gambling operates transparently, ethically, and in a manner that protects consumers from exploitation.

Continue reading

Spoiled for Choice: AI Regulation Possibilities

William O’Reilly

 

I. Introduction

Americans want innovation, and they believe advancing AI benefits everyone.[1] One way to encourage innovation is to roll back regulations.[2] Unfortunately, hand in hand with these innovations come several harms likely to result from the inappropriate use of personal and proprietary data and from AI decision-making.[3] One option is to ignore this potential harm and halt regulation, encouraging the spread of personal information.[4] That option is not in the country’s best interest: the U.S. is already losing the innovation race in some respects, and innovation can still occur despite heavy regulation. Virginia, the latest state to pursue the “no regulation” strategy, provides a good microcosm of the challenges and advantages of this approach.[5] Virginia’s absence of regulation falls on a spectrum of legislation that illustrates the options states have to protect both rights and innovation. As this article discusses further, curbing AI regulation on companies will not advance innovation enough to justify the civil rights violations perpetuated by current AI use.

Continue reading

Rooting Around in the Dark: Agencies Refusing to Comply with Dent Motions

Emily Burns

 

Introduction 

The Freedom of Information Act (“FOIA”) is the principal mechanism by which people may request records held by agencies within the federal government.[1] In the immigration context, one of the most common FOIA requests is for an A-File, the record of every interaction between a noncitizen and an immigration-related federal agency.[2]

For people in immigration proceedings, obtaining an A-File gives noncitizens and their lawyers access to information crucial to defending against deportation or obtaining immigration benefits, such as dates of entry to and exit from the United States, copies of past applications submitted to federal agencies, and statements made to U.S. officials.[3] To obtain an A-File, noncitizens must affirmatively request it through FOIA from an agency such as United States Citizenship and Immigration Services (USCIS) or Immigration and Customs Enforcement (ICE).[4] However, one carve-out to this process exists, available only in the Ninth Circuit: Dent motions.[5] Dent motions arise from Dent v. Holder, in which the Ninth Circuit held that the government violated Sazar Dent’s due process rights when it required Mr. Dent to request his A-File through FOIA rather than summarily handing the file over to him when he requested it in a prior court proceeding.[6]

Continue reading