Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had over the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber puts it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI law, scholars and professionals are limited to discussing different theories of liability that may be suitable for AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, characterizing Professor Lemley’s research as a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. AI, however, transcends many of the boxes we have fit other technologies into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires showing that a reasonable person should have known the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative means of accounting for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not broken down into specifics, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Continue reading

Digitizing the Fourth Amendment: Privacy in the Age of Big Data Policing

Written by Charles E. Volkwein

ABSTRACT

Today’s availability of massive data sets, inexpensive data storage, and sophisticated analytical software has transformed the capabilities of law enforcement and created new forms of “Big Data Policing.” While Big Data Policing may improve the administration of public safety, these methods endanger constitutional protections against warrantless searches and seizures. This Article explores the Fourth Amendment consequences of Big Data Policing in three parts. First, it provides an overview of Fourth Amendment jurisprudence and its evolution in light of new policing technologies. Next, the Article reviews the concept of “Big Data” and examines three forms of Big Data Policing: Predictive Policing Technology (PPT); data collected by third parties and purchased by law enforcement; and geofence warrants. Finally, the Article concludes with proposed solutions to rebalance the protections afforded by the Fourth Amendment against these new forms of policing.

Continue reading

Life’s Not Fair. Is Life Insurance?

The rapid adoption of artificial intelligence techniques by life insurers poses increased risks of discrimination, yet regulators are responding with a potentially unworkable state-by-state patchwork of regulations. Could professional standards provide a faster mechanism for a nationally uniform solution?

By Mark A. Sayre, Class of 2024

Introduction

Among the broad categories of insurance offered in the United States, individual life insurance is unique in a few key respects that make it an attractive candidate for the adoption of artificial intelligence (AI).[1] First, individual life insurance is a voluntary product, meaning that individuals are not required by law to purchase it in any scenario.[2] As a result, to attract policyholders, life insurers must convince customers not only to choose their company over competitors but also to choose their product over other products competing for a share of discretionary income (such as the newest gadget or a family vacation). Life insurers can, and do, argue that these competitive pressures provide natural constraints on the industry’s use of practices the public might view as burdensome, unfair, or unethical, and that such constraints reduce the need for heavy-handed regulation.[3]

Continue reading