Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had over the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability does not likely have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more AI-specific laws are promulgated, scholars and professionals are limited to discussing theories of liability that may be suitable for AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, asserting that Professor Lemley’s research was actually a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. AI, however, transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative basis for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not yet broken down into specifics, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

The Application of Information Privacy Frameworks in Cybersecurity

By Dale Dunn

INTRODUCTION

The frequency of cyberattacks is increasing exponentially, with human-driven ransomware attacks more than doubling in number between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to penetrate legitimate accounts of their target’s employees or the accounts of their target’s third-party service providers’ employees.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combatting these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, together with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD will not be sufficient on its own to prevent threat actors from succeeding. The sheer quantity and availability of personal information today enables threat actors to efficiently bypass security measures.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section describes the FIPPs and PBD. Section four provides a case study in which a threat actor uses social engineering to conduct cyberattacks. Finally, section five recommends measures companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper identifies information privacy principles and methodologies that should be implemented to reduce the risk of cybersecurity attacks.

Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey

By Benjamin Clark

Introduction and Overview

Artificial intelligence has captivated the current interest of the general public and academics alike, bringing closer attention to previously unexplored aspects of these algorithms, such as how they have been implemented into critical infrastructure, how they can be secured through technical defensive measures, and how they can best be regulated to reduce the risk of harm. This paper will discuss vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them, and will then weigh the merits of current regulatory frameworks proposed by the U.S. and other nations in terms of how they address the cybersecurity threats facing these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve themselves without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers to train algorithms to associate input features and best predict the labels for output, which involves some degree of human intervention.[3] The presence of humans in this process is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and algorithms are able to categorize an image as a diagnosis based on the image’s characteristics.[4] Deep learning, in turn, is a subset of machine learning characterized by its “neural network” structure, in which input data passes through an algorithm’s input, “hidden,” and output layers to identify patterns in the data.[5] Deep learning algorithms differ from other machine learning algorithms in that they require no human intervention after being trained; instead, deep learning algorithms process unlabeled data by determining what input is most important to create their own labels.[6]
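For readers less familiar with the mechanics, the following minimal sketch (not drawn from any system surveyed in this paper) illustrates the supervised-learning workflow described above: humans supply the labels, and the algorithm learns to predict those labels from input features. The data, the “image features,” and the choice of Python with scikit-learn are hypothetical stand-ins for the diagnostic-imaging systems discussed.

```python
# Minimal, hypothetical sketch of supervised machine learning (not a clinical system).
# Humans provide the labels; the model learns to map input features to those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in "image features" (e.g., lesion size, contrast) for 200 hypothetical scans.
features = rng.normal(size=(200, 2))

# Physician-assigned labels: 1 = diagnosis present, 0 = absent (synthetic here).
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# "Supervised" step: the model is fit to the human-provided labels.
model = LogisticRegression().fit(X_train, y_train)

# The trained model then predicts labels for new, unseen inputs.
print("held-out accuracy:", model.score(X_test, y_test))
```

A deep learning system, by contrast, would replace the single model above with a multi-layer neural network and, as noted in the paragraph above, could derive its own groupings from unlabeled data once trained.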

Surveilled in Broad Daylight: How Electronic Monitoring is Eroding Privacy Rights for Thousands of People in Criminal and Civil Immigration Proceedings

By Emily Burns   

What is electronic monitoring

Electronic monitoring is a digital surveillance mechanism that tracks a person’s movements and activities[1] by using radio transmitters, ankle monitors, or cellphone apps.[2] Governmental surveillance through electronic monitoring, used by every state in the U.S. and the Federal Government, functions as a nearly omnipresent force for people in two particular settings: criminal proceedings and civil immigration proceedings.[3]

In 2021, approximately 254,700 adults were subject to electronic monitoring in the United States, with 150,700 of them in the criminal system and 103,900 in the civil immigration system.[4] While people outside of these systems hold substantial privacy rights against unreasonable governmental searches and seizures of digital materials under Fourth Amendment jurisprudence, the rise of electronic monitoring forces people to “consent” to such monitoring in exchange for the ability to remain outside of a jail cell.[5]

Within the criminal context, this means that as a condition of supervision, such as parole or probation, certain defendants must consent to “continuous suspicion-less searches” of their electronics and data, including e-mail, texts, social media, and virtually any other information on their devices.[6]

In the civil immigration context, immigrants, such as asylum seekers, can face a similar “choice”: remain in detention or be released with electronic monitoring.[7] For immigrants in ICE detention on an immigration bond, this “choice” reads more like a plot device on an episode of Black Mirror than an effect of a chosen DHS policy. While people detained on bond in the criminal system are commonly allowed to be released when they pay at least 10 percent of the bond, ICE requires immigrants to pay the full amount of the bond, which is set by statute at a minimum of $1,500 and averages $9,274 nationally.[8] If the bond is not paid, immigrants can spend months or even years in ICE detention.[9] Because many bail bond companies view immigration bonds as carrying a higher risk of non-payment, companies either charge extremely high interest rates on the bond contracts that immigrants pay or, as in the case of the company Libre by Nexus, secure the bond by putting an ankle monitor on the bond seeker.[10] For people who must give up their bodily autonomy in order to be released from physical detention by “allowing” a private company to strap an ankle monitor to their body, paying for this indignity comes at a substantial economic cost that many cannot afford: Libre by Nexus charges $420 per month for using the ankle monitor, which is in addition to the actual repayment costs of the bond amount.[11][12]

Protecting the Biometric Data of Minor Students

By Devin Forbush

 

Introduction

At the beginning of this month, in considering topics to comment on and analyze, a glaring issue close to home presented itself. In a letter written on January 24, Jamie Selfridge, Principal of Caribou High School, notified parents and guardians of students of an “exciting new development” to be implemented at the school.[1] What is this exciting new development, you may ask? It is the mass collection of biometric data from the student body.[2] For context, biometric data collection is the process of identifying an individual’s biological, physical, or behavioral characteristics.[3] This can include the collection of “fingerprints, facial scans, iris scans, palm prints, and hand geometry.”[4]

Presented to parents as a way to enhance accuracy, streamline processes, improve security, and encourage accountability, the identiMetrics software to be deployed at Caribou High School should not be glanced over lightly.[5] While information about Caribou High School’s plan was limited at the time, aside from the Maine Wire website post and the letter sent to parents and guardians, a brief scan of the identiMetrics website reveals a cost-effective yet in-depth data collection software that gathers over 2 million data points on students every day, while touting that safety and security measures are implemented throughout.[6] Rather than analyze the identiMetrics software as a whole, this brief post will highlight the legal concerns around biometric data collection and make clear that the software Caribou High School sought to implement takes an opt-out approach to collection and forfeits students’ privacy and sensitive data for the purpose of educational efficiency.

Immediately, I started writing a brief blog post on this topic, recognizing the deep-seated privacy issues it raises for minors. Yet the American Civil Liberties Union of Maine beat me to the punch and, on February 13th, submitted a public records request relating to the biometric data collection planned at Caribou High School, citing its concerns.[7] The next day, Caribou High School signaled its intention to abandon the plan.[8] While I was ecstatic with this news, all the work that had been completed on this blog post appeared moot. Yet not all was lost, as upon further reflection, this topic raised important considerations. First, information privacy law and the issues related to it are happening in real time and are changing day to day. Second, this topic presents an opportunity to inform individuals in our small state of the nonexistent protections for the biometric data of minors and adults alike. Third, this reflection can set forth proposals that all academic institutions should embrace before they consider collecting highly sensitive information of minor students.

This brief commentary proposes that (1) academic institutions should not collect the biometric data of their students due to the gaps in legal protection within federal and state law; (2) if schools decide to proceed with biometric data collection, they must provide written notice to data subjects, parents, and legal guardians specifying (i) each biometric identifier being collected, (ii) the purpose of collection, (iii) the length of time that data will be used and stored, and (iv) the positive rights that parents, legal guardians, and data subjects maintain (e.g., their rights to deletion, to withdraw consent, to object to processing, and to portability and access); and (3) schools must obtain explicit consent, recorded in written or electronic form, acquired in a free and transparent manner.

The Varying Scope of the Trade Secret Exception

By William J. O’Reilly

 

Introduction

Each of the three state data privacy acts taking effect in 2023 carves out an exception for data that can be considered a “trade secret.”[1] At first blush any exception raises red flags, but this one may have a big enough impact to justify that trepidation. Many businesses could claim that collecting and making inferences about private data is their “trade,” making them exempt from a citizen seeking to exercise their rights. Further, data brokers—who should be the most limited by these laws—likely fit neatly into this exception. While the exact scope of the trade secret exception varies by state, past statutes and case law indicate the exception will fulfill privacy advocates’ fears. However, this is also an opportunity for courts to protect citizens’ rights by interpreting the exception narrowly, consistent with the respective legislature’s purpose. This narrow interpretation is necessary for the full protection of privacy rights.

The Hidden Kraken: Submarine Internet Cables and Privacy Protections

By Christopher Guay

  1. Introduction

Beyond the existential dread associated with the greatest depths of the oceans, there rests one of the most important components of our modern civilization. No, it is not the eldritch horrors of the deep; it is the backbone of the internet. Undersea cables carry over “95 percent” of international communications traffic[1] and are key to how our modern internet connects the world. These cables allow communications from one country to reach another. Instead of relying upon satellites or radio technology, physical fiber-optic lines connect the landmasses of the world. That is why someone in the United States can access a British or German website without any major difficulty. At their core, submarine internet cables allow enormous amounts of commerce and communications to occur almost instantaneously.[2] Ultimately, the regulatory structure in the United States offers both significant benefits and significant dangers on the issue of information privacy.

There are two major issues related to submarine internet cables: one concerns government use of data and the other corporate use of data. On the first issue, the United States has accessed and surveilled these submarine internet cables.[3] On the second issue, in the United States there do not appear to be any regulations stopping submarine cable operators from monetizing the information that passes through their cables. This results from the lack of a comprehensive set of privacy regulations similar to the European Union’s General Data Protection Regulation (GDPR)[4] or California’s Consumer Privacy Act (CCPA/CPRA).[5] The lack of comprehensive privacy regulations allows companies and the government to collect vast amounts of data.[6] Advertising is big business, with a lot of money involved.[7] The global digital advertising industry is estimated to have generated $438 billion in revenue in 2021.[8]

U.S. v. Google LLC: An overview of the landmark antitrust case and its impact on consumer privacy, A.I., and the future of the internet.

By William Simpson

 

I. Intro

The ongoing antitrust case against Google, alleging anticompetitive conduct relating to the company’s search engine, could in the near term result in a breakup of the company or, alternatively, indicate that existing antitrust law is ill-suited to address outsized market shares in the digital economy.[1] On a broader scale, this case could have major effects on consumer privacy, A.I., and the character of the internet going forward. The consequences could be, in a word, enormous.

 

II. Background

 

In October 2020, the Department of Justice (DOJ) filed a complaint against Google, alleging that Google violated the Sherman Antitrust Act[2] when it:

  • Entered into exclusivity agreements that forbid preinstallation of any competing search service;
  • Entered into tying arrangements that force preinstallation of its search applications in prime locations on mobile devices and make them undeletable;
  • Entered into long-term agreements with Apple that require Google to be the default general search engine on Apple’s popular Safari browser and other Apple search tools; and
  • Generally used monopoly profits to buy preferential treatment for its search engine on devices, web browsers, and other search access points, creating a continuous and self-reinforcing cycle of monopolization.[3]

The DOJ’s complaint concludes that such practices harm competition and consumers, inhibiting innovation where new companies cannot “develop, compete, and discipline Google’s behavior.”[4] In particular, the DOJ argues that Google’s conduct injures American consumers who are subject to Google’s “often-controversial privacy practices.”[5]

In response, Google disputes the DOJ’s argument, deeming the lawsuit “deeply flawed.”[6] “People use Google because they choose to,” says a Google spokesperson, “not because they’re forced to or because they can’t find alternatives.”[7] Challenging the DOJ’s claims, Google asserts that any deals it entered into are analogous to those a popular cereal brand would enter into for preferential aisle placement.[8]

Privacy in Virtual and Augmented Reality

Devin Forbush, Christopher Guay, & Maggie Shields

A. Introduction

In this paper, we set out the basics of augmented and virtual reality. First, we discuss how the technology works and how data is collected. Second, we analyze what privacy issues arise, and specifically comment on the gravity of privacy concerns that are not contemplated by current laws given the velocity and volume of data collected with this technology. Third, the final section of this paper analyzes how to mitigate these privacy concerns and what regulation of this technology would ideally look like. Over the past decade, the advent of augmented reality (AR), mixed reality (MR), and virtual reality (VR) has ushered in a new era of human-computer interactivity. Although the functions of each reality platform vary, the umbrella term “XR” will be used throughout to address concerns covering all of these emerging technologies.[1] The gaming community might have initially popularized XR, but now a broad range of industries and economic sectors seek to deploy these new technologies in a variety of contexts: education, healthcare, the workplace, and even fitness.[2]

B. Augmented and Virtual Reality Background

Augmented Reality is “an interface that layers digital content on a user’s visual plane.”[3] It works by overlaying certain images and objects within the user’s current environment.[4] This digital layering superimposes images and objects onto the user’s real-world environment.[5] Software developers create AR smartphone applications or products to be worn by users, such as headsets or AR glasses.[6] In contrast, Virtual Reality seeks to immerse users within an “interactive virtual environment.”[7] VR transports the user into a completely new digital environment, or reality, where users can interact, move, and behave as they would in the real world.[8] To enter VR, a user wears a head-mounted device (HMD) which displays a “three-dimensional computer-generated environment.”[9] Within that environment, the HMD uses a variety of sensors, cameras, and controls to track a user’s input and provide sights, sounds, and haptic responses.[10] Mixed reality offers a combination of virtual reality and augmented reality.[11] In function, mixed reality creates virtual objects superimposed on the real world that behave as if they were real objects.[12]

Blackstone’s Acquisition of Ancestry.com

By Zion Mercado

Blackstone is one of the largest investment firms in the world, boasting over $1 trillion in assets under management.[1] In December of 2020, Blackstone acquired Ancestry.com for a total enterprise value of $4.7 billion.[2] Ancestry is a genealogy service that compiles and stores DNA samples from customers and compares them to the DNA samples of individuals whose lineage can be traced back generations to certain parts of the world.[3] Within Ancestry’s privacy statement, Section 7 states that if Ancestry is acquired or transferred, it may share the personal information of its subscribers with the acquiring entity.[4] This provision was brought into controversy in Bridges v. Blackstone by a pair of plaintiffs representing a putative class consisting of anyone who had their DNA and personal information tested and compiled by Ancestry while residing in the State of Illinois.[5] The suit was brought under the Illinois Genetic Information Privacy Act (“GIPA”), which bars a person or company from “disclos[ing] the identity of any person upon whom a genetic test is performed or the results of a genetic test in a manner that permits identification of the subject of the test” without that person’s permission.[6] In addition to barring disclosure, GIPA may also bar compelled disclosure,[7] which would then create a cause of action under the act against third parties who compel an entity to disclose genetic information such as the information compiled by Ancestry. In Bridges, it is clear from the opinion that there was virtually no evidence that Blackstone in any way compelled Ancestry to disclose genetic information.[8] However, the language of the statute is unclear as to whether third parties who compel a holder of an individual’s genetic information to disclose it can be held liable under GIPA. What does seem clear from the Seventh Circuit’s reading of the statute is that when an entity acquires another entity that holds sensitive personal information or genetic data, the mere acquisition itself is not proof of compelled disclosure within the meaning of the act.[9]

The exact language of GIPA that pertains to potential third-party liability states that “[n]o person may disclose or be compelled to disclose [genetic information].”[10] In Bridges, Blackstone contended that the recipient of protected information could not be held liable under GIPA even if it compelled disclosure.[11] The plaintiffs, in their complaint, could not cite any conduct on the part of Blackstone that would satisfy federal pleading standards for stating a claim that Blackstone compelled Ancestry to disclose information covered under GIPA.[12] This led the judge to set aside the broader issue surrounding GIPA’s language raised by Blackstone’s argument that an entity that receives genetic information cannot be held liable even if it compels disclosure of such information.[13] This issue is, in essence, one of statutory interpretation. Blackstone would have courts interpret the language reading “no person may . . . be compelled to disclose” as only granting a cause of action against a defendant who discloses genetic information, but only “because they were ‘compelled’ to do so.”[14] However, such an instance is already covered by the first part of the phrase, “no person may disclose.”[15] Notably, the Bridges court did not address Blackstone’s interpretation of the statute because the claim failed on the merits; the judge writing the opinion did, however, cite a lack of precedent on the matter.[16] I believe that the Illinois legislature did not intend to write a redundancy into the statute, and a more protective reading would extend liability to a third party that compels disclosure of genetic information. The very meaning of the word “compel” is “to drive or urge forcefully or irresistibly” or “to cause to do or occur by overwhelming pressure.”[17] This is an act that we as people (and hopefully state legislators as well) would presumably want to limit, especially when what is being compelled is the disclosure of sensitive information, such as the results of a genetic test and the personal information that necessarily accompanies the test. Again, in the plaintiffs’ complaint, there was no evidence proffered indicating that Blackstone in any way compelled disclosure of genetic information from Ancestry.[18] However, if a case were to arise in which such an occurrence did happen, we should hope that courts do not side with Blackstone’s interpretation. Although I agree that merely acquiring an entity that holds genetic or other sensitive information should not give rise to liability, and that a mere recipient of such information should not be held liable when it does not compel the holder’s disclosure, an entity, especially an acquiring entity, should not be shielded from liability when it pressures another entity into disclosing the personal information of individuals who have not consented to such disclosure.

[1] Blackstone’s Second Quarter 2023 Supplemental Financial Data, Blackstone (Jul. 20, 2023), at 16, https://s23.q4cdn.com/714267708/files/doc_financials/2023/q2/Blackstone2Q23SupplementalFinancialData.pdf.

[2] Blackstone Completes Acquisition of Ancestry, Leading Online Family History Business, for $4.7 Billion, Blackstone (Dec. 4, 2020), https://www.blackstone.com/news/press/blackstone-completes-acquisition-of-ancestry-leading-online-family-history-business-for-4-7-billion/.

[3] Frequently Asked Questions, Ancestry.com, https://www.ancestry.com/c/dna/ancestry-dna-ethnicity-estimate-update?o_iid=110004&o_lid=110004&o_sch=Web+Property&_gl=1*ot1obs*_up*MQ..&gclid=5aadd61f926315a4ec29b2e4c0d617e8&gclsrc=3p.ds#accordion-ev4Faq (last visited Sep. 8, 2023).

[4] Privacy Statement, Ancestry.com (Jan. 26, 2023), https://www.ancestry.com/c/legal/privacystatement.

[5] Amended Class Action Complaint at 8, Bridges v. Blackstone, No. 21-cv-1091-DWD, 2022 LEXIS (S.D. Ill. Jul. 8, 2022), 2022 WL 2643968, at 2

[6] Ill. Comp. Stat. Ann. 410/30 (LexisNexis 2022).

[7] Id.

[8] See Bridges, 66 F.4th at 689-90.

[9] Id. (“we cannot plausibly infer that a run-of-the-mill corporate acquisition, without more alleged about that transaction, results in a compulsory disclosure”).

[10] 410/30 (LexisNexis 2022).

[11] Bridges, 66 F.4th at 689.

[12] Id. at 690.

[13] Id. at 689.

[14] Brief of the Defendant-Appellee at 41, Bridges v. Blackstone, 66 F.4th 687 (7th Cir. 2023) (No. 22-2486).

[15] 410/30 (LexisNexis 2022).

[16] Bridges, 66 F.4th at 689 (Scudder, C.J.) (explaining that “[t]he dearth of Illinois precedent examining GIPA makes this inquiry all the more challenging”).

[17] Compel, Merriam-Webster.com, https://www.merriam-webster.com/dictionary/compel (last visited Sep. 9, 2023).

[18] See supra note 11, at 690.