Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who Is Responsible for Harm Flowing from AI?

Most people can readily recognize the immense impact technological developments have had over the past decade, affecting practically every sector. While the laws and regulations governing our society have lagged somewhat behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely has no one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber puts it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI law, scholars and professionals are limited to discussing theories of liability that may suit AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, asserting that Professor Lemley’s research was actually a misappropriation of trade secrets.[4] In both cases, it is unclear who would, or could, be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. AI, however, transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false, or that the publisher exhibited a reckless disregard for the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent to AI requires an alternative basis for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

The Application of Information Privacy Frameworks in Cybersecurity

By Dale Dunn

INTRODUCTION

The frequency of cyberattacks is increasing rapidly, with human-driven ransomware attacks more than doubling between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to penetrate legitimate accounts belonging to their target’s employees or to the employees of their target’s third-party service providers.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combating these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, together with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD will not be sufficient on its own to prevent threat actors from succeeding. The sheer quantity of personal information available today enables threat actors to bypass security measures efficiently.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section covers the FIPPs and PBD. The fourth section provides a case study in which a threat actor uses social engineering to conduct cyberattacks. Finally, the fifth section recommends measures companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper will show that information privacy principles and methodologies should be implemented to reduce the risk of cybersecurity attacks.

Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey

By Benjamin Clark

Introduction and Overview

Artificial intelligence has captivated the current interest of the general public and academics alike, bringing closer attention to previously underexplored aspects of these algorithms: how they have been implemented into critical infrastructure, how they can be secured through technical defensive measures, and how they can best be regulated to reduce the risk of harm. This paper will discuss vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them, before weighing how well the current regulatory frameworks proposed by the U.S. and other nations address the cybersecurity threats to these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers so that the algorithm learns to associate input features with the labels it should predict, which involves some degree of human intervention.[3] The presence of humans in this process is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and algorithms categorize an image under a diagnosis based on the image’s characteristics.[4] Deep learning is a subset of machine learning characterized by its “neural network” structure, in which input data passes through input, “hidden,” and output layers to identify patterns in the data.[5] Deep learning algorithms differ from other machine learning algorithms in that they require no human intervention after being trained; instead, they process unlabeled data, determining which input features matter most and creating their own labels.[6]
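The supervised-learning idea described above can be sketched in a few lines of code. The toy feature vectors, diagnosis labels, and nearest-centroid rule below are illustrative assumptions for exposition only, not drawn from any system the paper surveys; a real clinical model would be far more complex.

```python
# Minimal sketch of supervised learning: pre-labeled examples stand in
# for physician-set diagnostic markers, and the "model" simply learns
# one centroid (mean feature vector) per label, then predicts whichever
# label's centroid lies closest to a new input.

from math import dist

# Hypothetical pre-labeled training data: (feature vector, label).
training = [
    ((0.9, 0.8), "diagnosis_A"),
    ((1.0, 0.9), "diagnosis_A"),
    ((0.1, 0.2), "diagnosis_B"),
    ((0.2, 0.1), "diagnosis_B"),
]

def fit(samples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the input."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

model = fit(training)
print(predict(model, (0.85, 0.95)))  # → diagnosis_A
```

A deep learning system, by contrast, would receive the feature vectors without any labels and infer its own groupings from structure in the data, with no human labeling step after training.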

Balanced Scrutiny – The Necessity of Adopting a New Standard to Combat the Rising Harm of Invasive Technology

By Roosevelt S. Bishop, University of Maine School of Law, Class of 2023

ABSTRACT

The current First Amendment jurisprudence of strict scrutiny is wholly insufficient for fostering a healthy legal landscape for freedom of speech in cyberspace. Technology is outpacing legislative action to address the increasing harms that are prevalent in a society that practically lives online. Consequently, if we as a society are to effectively begin addressing the growing danger of the practically protected “expression” of Privacy Invaders, we need first to explore the possibility of a new tier of scrutiny; we need balance. This blueprint for balanced scrutiny will begin by highlighting the harms suffered unequally through the invasion of Intimate Privacy, a term originally coined by premier privacy scholar Danielle Keats Citron. It will then touch on the historical standing and flexibility of the First Amendment. After explaining how cyber harassment and the First Amendment intersect, this study will conclude by proposing a new standard of judicial review for addressing laws targeting cyber expression.

Rethinking the Government’s Role in Private Sector Cybersecurity

By Devon H. Draker, University of Maine School of Law, Class of 2023 [1]

Abstract

Cyber-attacks on the private sector, through the theft of trade secrets and ransomware attacks, threaten US interests at the federal level by undermining US economic competitiveness and funding groups with interests adverse to those of the US. The federal government can regulate cyberspace under the Commerce Clause, but the current cybersecurity regulatory landscape is ineffective in addressing these harms. It is ineffective because legislation is either bad-actor focused, punishing the proverbial “hacker” with rules that have no teeth due to limits on jurisdictional reach, or victim focused, punishing the victim company in hopes of motivating the development of sufficient safeguards. The missing puzzle piece in solving this issue is “intelligence.” Intelligence, in military terms, is the process of combining information to create an actionable plan that anticipates what the enemy will do based on operational factors. The utility of intelligence in cyberspace is that it gives companies the ability to anticipate not only when they may be attacked, based on trends in their sector, but also what methods would likely be used to carry out the attack. There are two ways that cybersecurity intelligence could be achieved. The first approach involves integrating cybersecurity units from the United States Military into the private sector to collect information on attacks and provide intelligence to private sector companies based on that information gathering. This approach also allows the US Military to maintain its proficiency in the cyberspace domain, a rising concern for US military leaders. The second approach involves expanding the Cybersecurity and Infrastructure Security Agency’s (CISA) regulatory powers to enact mandatory reporting regulations for more than just “critical infrastructure.” Each approach has its own drawbacks, but both offer significant advantages over the current regulatory landscape.
