Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had in the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely has no one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more AI laws are promulgated, scholars and professionals are limited to discussing theories of liability that may be suitable for AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Professor Lemley, Stanford’s Director of Law, Science and Technology, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, falsely characterizing Professor Lemley’s research as a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. AI, however, transcends many of the boxes we have fit other technologies into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent to AI requires an alternative way of accounting for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Continue reading

The Application of Information Privacy Frameworks in Cybersecurity

By Dale Dunn

INTRODUCTION

The frequency of cyberattacks is increasing exponentially, with human-driven ransomware attacks more than doubling in number between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to penetrate the legitimate accounts of their target’s employees or the accounts of their target’s third-party service providers’ employees.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combatting these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, together with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD alone will not be sufficient to prevent threat actors from succeeding. The sheer quantity of personal information available today enables threat actors to bypass security measures efficiently.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section covers the FIPPs and PBD. The fourth section provides a case study in which a threat actor uses social engineering to conduct cyberattacks. Finally, the fifth section recommends measures companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper will show that information privacy principles and methodologies should be implemented to reduce the risk of cybersecurity attacks.

Continue reading

The Varying Scope of the Trade Secret Exception

By William J. O’Reilly

Introduction

Each of the three state data privacy acts taking effect in 2023 carves out an exception for data that can be considered a “trade secret.”[1] At first blush, any exception raises red flags, but this one may have a big enough impact to justify that trepidation. Many businesses could claim that collecting and making inferences from private data is their “trade,” making them exempt from a citizen seeking to exercise their rights. Further, data brokers, who should be the most limited by these laws, likely fit neatly into this exception. While the exact scope of the trade secret exception varies by state, past statutes and case law indicate that the exception will fulfill privacy advocates’ fears. However, this can also be an opportunity for courts to protect citizens’ rights by interpreting the exception narrowly, consistent with the respective legislature’s purpose. This narrow interpretation is necessary for the full protection of privacy rights.

Continue reading

The Hidden Kraken: Submarine Internet Cables and Privacy Protections

By Christopher Guay

1. Introduction

Beyond the existential dread associated with the greatest depths of the oceans rests one of the most important components of our modern civilization. No, it is not the eldritch horrors of the deep; it is the backbone of the internet. Submarine internet cables carry over “95 percent” of international communications traffic.[1] These cables are key to how our modern internet connects the world, allowing communications from one country to reach another. Instead of relying upon satellites or radio technology, physical fiber-optic lines connect the landmasses of the world. That is why someone in the United States can access a British or German website without any major difficulty. At their core, submarine internet cables allow enormous amounts of commerce and communication to occur almost instantaneously.[2] Ultimately, the regulatory structure in the United States offers both significant benefits and significant dangers for information privacy.

There are two major issues related to submarine internet cables: government use of data and corporate use of data. On the first issue, the United States has accessed and surveilled these cables.[3] On the second, no U.S. regulations appear to stop submarine cable operators from monetizing the information that passes through their cables. This results from the lack of a comprehensive set of privacy regulations similar to the European Union’s General Data Protection Regulation (GDPR)[4] or the California Consumer Privacy Act (CCPA/CPRA).[5] The absence of comprehensive privacy regulation allows companies and the government to collect vast amounts of data.[6] Advertising is big business, with a lot of money involved.[7] The global digital advertising industry generated an estimated $438 billion in revenue in 2021.[8]

Continue reading

Generative AI Algorithms: The Fine Line Between Speech and Section 230 Immunity

By Hannah G. Babinski

ABSTRACT

Russian-American writer and philosopher Ayn Rand once observed, “No speech is ever considered, but only the speaker. It’s so much easier to pass judgment on a man than on an idea.”[1] But what if the speaker is not a man, a woman, or a human at all? Concepts of speech and the identities of speakers have been focal points of various court cases and debates in recent years. The Supreme Court and various district courts have faced complex, first-of-their-kind questions concerning emerging technologies, namely algorithms and recommendations, and have contemplated whether their outputs constitute speech on behalf of an Internet service provider (“Internet platform”) that would not be covered by Section 230 of the Communications Decency Act (“Section 230”). In this piece, I will examine some of the issues arising from the questions posed by Justice Gorsuch in Gonzalez v. Google, LLC, namely whether generative AI algorithms and their respective outputs constitute speech that is not immunized under Section 230. I will provide an overview of the technology behind generative AI algorithms and then examine the statutory language and interpretation of Section 230, applying that language and interpretive case law to generative AI. Finally, I will draw comparisons between generative AI technology, human content creation, and foundational copyright law concepts to illustrate how generative AI technologies and algorithmic outputs are akin to unique, standalone products that extend beyond the protections of Section 230.

Continue reading

Digitizing the Fourth Amendment: Privacy in the Age of Big Data Policing

Written by Charles E. Volkwein

ABSTRACT

Today’s availability of massive data sets, inexpensive data storage, and sophisticated analytical software has transformed the capabilities of law enforcement and created new forms of “Big Data Policing.” While Big Data Policing may improve the administration of public safety, these methods endanger constitutional protections against warrantless searches and seizures. This Article explores the Fourth Amendment consequences of Big Data Policing in three parts. First, it provides an overview of Fourth Amendment jurisprudence and its evolution in light of new policing technologies. Next, the Article reviews the concept of “Big Data” and examines three forms of Big Data Policing: Predictive Policing Technology (PPT); data collected by third parties and purchased by law enforcement; and geofence warrants. Finally, the Article concludes with proposed solutions to rebalance the protections afforded by the Fourth Amendment against these new forms of policing.

Continue reading

Revenge Porn: The Result of a Lack of Privacy in an Internet-Based Society

Comment

By Shelbie Marie Mora, Class of 2023

I. Introduction

Nonconsensual pornography, also referred to as revenge porn, is “the distribution of sexual or pornographic images of individuals without their consent.”[1] Forty-six U.S. states, the District of Columbia, and the U.S. territory of Puerto Rico have adopted revenge porn laws. However, there is no federal law in place that prohibits revenge porn. Several countries around the world have adopted revenge porn statutes to protect individuals’ privacy rights and prevent emotional and financial harm. Revenge porn is primarily an issue for women, who are overwhelmingly its targets.[2] Victims whose intimate images are posted online without their consent can suffer major ramifications.

In this paper, I will discuss the rise of revenge porn websites, examine Texas’s and Vermont’s revenge porn statutes, review case law from each state, and analyze the detriments that the holdings pose to victims of revenge porn. I will next examine the revenge porn laws of Australia, Puerto Rico, and Canada and the penalties they impose on offenders. Lastly, I will assess a failed federal revenge porn bill in the United States, discuss where the U.S. falls short on federal legislation, and propose remedies to help protect the privacy of individuals. The United States falls short in revenge porn legislation and must pass a federal law to promote and protect the privacy of Americans and deter this crime.

Continue reading