AI, a Watchful Eye: The Less than Stellar Performance of AI Security and the Consequences Thereof

James Hotham

The use and abuse of widespread camera surveillance is not a novel fear; media has explored the concept for decades. A new threat has arisen, however, and it has not taken the form of an oppressive government, a terrorist group, or a supreme artificial intelligence. Rather, it comes from private security providers, several of which have begun to build AI into their security cameras for threat detection.[1] The success of these threat detection models, though, is dubious. Just this year, in late October, one of these systems, installed in a Baltimore school, flagged an individual as carrying a firearm.[2] Police arrived and identified the suspect as sixteen-year-old Taki Allen.[3] Only after the police drew their weapons and handcuffed Allen did they discover the “firearm” was actually just an empty bag of Doritos.[4]

Although AI technology of this sophistication is relatively new, it has grown into a multimillion-dollar industry in just a few years. Yet despite years of development, mishaps like this one still occur. This article will explore how these systems work, why they malfunction, how consumers can guard against those failures, and who may be liable when they happen.

The Collapse of Capability Theory: Ambriz, Popa, and the Future of Article III Standing in AI Privacy Cases

Caroline Aiello

Introduction

In February 2025, the Northern District of California denied Google’s motion to dismiss a class action lawsuit claiming that Google’s artificial intelligence (“AI”) tools violated the California Invasion of Privacy Act (“CIPA”) by transcribing users’ phone calls.[1] The court in that case, Ambriz v. Google, ruled that Google’s technical “capability” to use customer call data to train its AI models was enough to state a claim under CIPA, regardless of whether Google actually exploited that data.[2] Six months later, the Ninth Circuit took the opposite approach: in Popa v. Microsoft, it held that routine website tracking did not constitute actual harm and dismissed the claims for lack of Article III standing without reaching the merits.[3]

These two decisions present privacy law with incompatible standards. Ambriz asks what a technology could do with personal data and finds liability in that potential. Popa demands proof of what a technology actually did and requires concrete injury beyond the challenged conduct itself. A collision between the two theories is inevitable. When a plaintiff sues an AI company under Ambriz’s capability theory, alleging that the defendant’s system has the technical ability to misuse data, and the defendant responds with a Popa-based standing challenge, courts will face an impossible choice. The capability to cause harm is not the same as harm itself, and if capability cannot satisfy Article III’s concrete-injury requirement, then Ambriz’s approach becomes constitutionally unenforceable in federal court. While Popa did not technically overrule Ambriz, the Ninth Circuit will inevitably need to choose which standard to adopt.

“Don’t Reinvent the Wheel, Just Realign It.”[1] How Lessons from the Belmont Report Can Help Govern the Use of AI in Research

Steven Hammerton

Background

Artificial intelligence (AI) is becoming increasingly integrated into many areas of life, including research. Legislation and regulation, however, lag behind. Years into the widespread adoption of AI, the United States is still without meaningful guardrails to address the ethical quandaries that stem from the technology’s use. Until there is comprehensive legislation, the burden of ensuring the ethical training, development, and use of AI will fall on its developers, deployers, and users, such as researchers and research participants. This article will explore three ethical issues associated with AI and how principles from the Belmont Report can guide researchers and other users in their pursuit of ethical AI.

AI Tracking in Small Town Maine?: Real Life Optimization and Our Expectation of Privacy

Viv Daniel

I. Introduction

Increasingly, the intangible world of the internet has been likened to physical space – the concept of the “digital town square,” the term “online space,” and the short-lived promise of the metaverse all come to mind – but recent developments raise the question: Are our physical spaces starting to resemble digital life?

This year, Old Town, Maine became the latest Bangor-area community to sign up for Placer.ai’s services through the Greater Bangor Recreation Economy for Rural Communities group, which is part of Eastern Maine Development Corporation.[1] The AI service collects location data from the smartphones of people moving in and out of these communities, alongside information about where these phones were immediately before and after moving through the monitored area.[2] The AI also collects personal data about the smartphone’s owner, including income level and other demographic information.[3]

In 2025, many Americans might expect that their movements from site to site online are being tracked, and their data collected along the way. Even in their physical lives, most Americans put up with a certain degree of tracking and data collection in the form of surveillance cameras, cell-site location information (CSLI), and the like.[4] Still, many people would likely be surprised to find that their local government (or that of their vacation destination) had contracted with a private company to track their movements and income. So why would a city or town sign up for such a tracking program?

Implications of New School Surveillance Methods on Student Data Privacy, National Security, Electronic Surveillance, and the Fourth Amendment

By Amanda Peskin, University of Maryland, Francis King Carey School of Law, Class of 2024

Abstract

Since the Covid-19 pandemic, schools have escalated their use of educational technology to improve students’ in-school and at-home learning. Although educational technology has many benefits for students, it has serious implications for their data privacy rights. Not only does using technology for educational purposes allow schools to surveil their students, but it also exposes students to data collection by the educational technology companies themselves. This paper discusses the legal background of surveilling and monitoring student activity, examines the implications surveillance has for technology, equity, and self-expression, and offers several policy-based improvements to better protect students’ data privacy.

“You Have the Right to Remain Silent(?)”: An Analysis of Courts’ Inconsistent Treatment of the Various Means to Unlock Phones in Relation to the Right Against Self-Incrimination

By Thomas E. DeMarco, University of Maryland Francis King Carey School of Law, Class of 2023[*]

Riley and Carpenter are the most recent examples of the Supreme Court confronting the challenges new technology poses to its existing privacy doctrines. But while the majority of decisions focus on Fourth Amendment questions of unreasonable searches, far less attention has been given to Fifth Amendment concerns. Specifically, how do the Fifth Amendment’s protections against self-incrimination translate to a suspect’s right to refuse to unlock a device so that law enforcement can search it and collect evidence from it? And how should courts distinguish among the various means of unlocking devices, from passcodes to facial scans?

The Double-Edged Promise of Cryptocurrency: How Innovation Creates New Vulnerabilities and How Government Oversight Can Reduce Crypto Crime

By Jason H. Meuse, University of Maine School of Law, Class of 2023

Abstract

The fallout from the FTX fraud scheme brought the dangers of crypto front and center. Not only did FTX perpetrate a massive fraud, but its collapse exposed the exchange to hacking, resulting in the theft of over $477 million in crypto assets. Such theft is not isolated to FTX; by October 2022, hackers had already stolen over $3 billion. In addition, new organizational structures and technologies in the crypto industry have introduced new vulnerabilities. Cryptocurrency exchanges, decentralized exchanges, and cross-chain bridges are prime targets for hackers seeking to steal and launder crypto assets. Part of the reason these technologies leave assets vulnerable is that they undermine a central premise of crypto: a currency system accountable to users within a closed ecosystem. While the industry has responded by raising its security standards and procedures, its anti-government attitude has inhibited the kind of cooperation with government that could make the crypto marketplace even more secure. Many firms are incorporated outside U.S. jurisdiction, lightening the compliance burden at the cost of security. Establishing industry security standards and cooperating with the government, however, can lead to higher security and greater consumer confidence.

Rethinking the Government’s Role in Private Sector Cybersecurity

By Devon H. Draker, University of Maine School of Law, Class of 2023[1]

Abstract

Cyber-attacks on the private sector, through the theft of trade secrets and ransomware attacks, threaten US interests at the federal level by undermining US economic competitiveness and funding groups with interests adverse to those of the US. The federal government can regulate cyberspace under the Commerce Clause, but the current cybersecurity regulatory landscape is ineffective in addressing these harms. It is ineffective because legislation either focuses on bad actors and punishes the proverbial “hacker,” an approach with no teeth due to limits on jurisdictional reach, or punishes the victim company in hopes of motivating the development of sufficient safeguards. The missing puzzle piece is “intelligence.” Intelligence, in military terms, is the process of combining information to create an actionable plan that anticipates what the enemy will do based on operational factors. Its utility in cyberspace is that it allows companies to anticipate not only when they may be attacked, based on trends in their sector, but also what methods would likely be used to carry out the attack. Cybersecurity intelligence could be achieved in two ways. The first approach involves integrating cybersecurity units from the United States Military into the private sector to collect information on attacks and provide intelligence to private sector companies based on that information gathering. This approach also allows the US Military to maintain its proficiency in the cyberspace domain, a rising concern for US military leaders. The second approach involves expanding the Cybersecurity and Infrastructure Security Agency’s (CISA) regulatory powers to enact mandatory reporting regulations for more than just “critical infrastructure.” Each approach has its own drawbacks, but both offer significant advantages over the current regulatory landscape.

Digitizing the Fourth Amendment: Privacy in the Age of Big Data Policing

Written by Charles E. Volkwein

ABSTRACT

Today’s availability of massive data sets, inexpensive data storage, and sophisticated analytical software has transformed the capabilities of law enforcement and created new forms of “Big Data Policing.” While Big Data Policing may improve the administration of public safety, these methods endanger constitutional protections against warrantless searches and seizures. This Article explores the Fourth Amendment consequences of Big Data Policing in three parts. First, it provides an overview of Fourth Amendment jurisprudence and its evolution in light of new policing technologies. Next, the Article reviews the concept of “Big Data” and examines three forms of Big Data Policing: Predictive Policing Technology (PPT); data collected by third parties and purchased by law enforcement; and geofence warrants. Finally, the Article concludes with proposed solutions to rebalance the protections afforded by the Fourth Amendment against these new forms of policing.

Say “Bonjour” to New Blanket Privacy Regulations?

The FTC Considers Tightening the Leash on the Commercial Data Free-for-All and Loose Data Security Practices in an Effort to Advance Toward a Framework More Akin to the GDPR

By Hannah Grace Babinski, class of 2024

On August 11, 2022, the Federal Trade Commission (FTC) issued an Advance Notice of Proposed Rulemaking (ANPR) concerning possible rules addressing “commercial surveillance” and “lax data security practices”[1] and established a public forum date of September 8, 2022.[2] The FTC’s specific objective in issuing this ANPR is to obtain public input concerning “whether [the FTC] should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.”[3]
