AI, a Watchful Eye: The Less than Stellar Performance of AI Security and the Consequences Thereof

James Hotham

 

The use and abuse of widespread camera surveillance is not a novel fear; media has explored the concept for decades. But a new threat has emerged, and it has not taken the form of an oppressive government, a terrorist group, or a supreme artificial intelligence. Rather, it comes from private security providers. Several of these providers have begun working AI into their security cameras for threat detection.[1] The success of these threat-detection models, however, is dubious. In late October of this year, one such system, installed at a Baltimore school, detected an individual carrying a firearm.[2] Police arrived and identified the suspect as sixteen-year-old Taki Allen.[3] Only after drawing their weapons and handcuffing young Allen did officers discover that the “firearm” was actually just an empty bag of Doritos.[4]

Although AI technology of this sophistication is relatively new, it has grown into a multimillion-dollar industry in just a few years. Yet despite years of development, mishaps like these still occur. This article will explore how these systems work, why they malfunction, how consumers can avoid these malfunctions, and who may be liable when they occur.


The Collapse of Capability Theory: Ambriz, Popa, and the Future of Article III Standing in AI Privacy Cases

Caroline Aiello

 

Introduction

In February 2025, the Northern District of California denied Google’s motion to dismiss a class action claiming that Google’s artificial intelligence (“AI”) tools violated the California Invasion of Privacy Act (“CIPA”) by transcribing users’ phone calls.[1] The court in that case, Ambriz v. Google, ruled that Google’s technical “capability” to use customer call data to train its AI models was enough to state a claim under CIPA, regardless of whether Google actually exploited that data.[2] Six months later, the Ninth Circuit took the opposite approach. In Popa v. Microsoft, it held that routine website tracking did not constitute actual harm and dismissed the claims for lack of Article III standing before reaching the merits.[3]

These two decisions leave privacy law with incompatible standards. Ambriz asks what a technology could do with personal data and finds liability in that potential. Popa demands proof of what a technology actually did and requires concrete injury beyond the challenged conduct itself. A collision between the two theories is inevitable. When a plaintiff sues an AI company under Ambriz’s capability theory, alleging that the defendant’s system has the technical ability to misuse data, and the defendant responds with a Popa-based standing challenge, courts will face an impossible choice. The capability to cause harm is not the same as harm itself, and if capability cannot satisfy Article III’s concrete-injury requirement, then Ambriz’s approach becomes constitutionally unenforceable in federal court. While Popa has not formally overruled Ambriz, the Ninth Circuit will eventually have to choose which standard governs.
