Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had over the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand the technology and produce the new standards and guidelines it requires. Stanford Law Fellow Tomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more AI-specific legislation is promulgated, scholars and professionals are limited to discussing theories of liability that may suit AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Professor Mark Lemley, Director of Stanford’s Program in Law, Science and Technology, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, falsely characterizing Professor Lemley’s research as a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father and for the defamatory statements. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. AI, however, transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative way to account for liability. A guide of best practices may be one helpful way to direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Another option for prescribing liability that avoids having to prove intent is strict liability. Touro Law Professor Gabriel Weil argues that some AI companies should face strict liability standards, shifting liability to the developers and encouraging responsible design.[8] This theory differs from standard negligence because a defendant can be liable without being at fault.[9] Under strict products liability theory, if a product causes foreseeable harm, the manufacturer can be liable for the damages regardless of intent or negligence.[10] Professor Weil notes that this standard would not be appropriate for all AI uses: “a chess-playing program, for instance, does not fit the strict liability requirement of ‘creating a foreseeable and highly significant risk of harm even when reasonable care is exercised.’”[11] Strict liability could be appropriate for AI systems, however, “if their developer ‘knew or should have known that the resulting system would pose a highly significant risk of physical harm, even if reasonable care is exercised in the training and deployment process.’”[12] As examples, a “system capable of synthesizing chemical or biological weapons . . . [or a] system that we know to be misaligned, or that has secret goals it hides from humans (which sounds like sci-fi but has already been created in lab settings), might qualify too.”[13]

Applying strict liability to these forms of AI would put developers on the hook for any damage caused by their systems. This may incentivize companies to use extra caution and implement safety measures that prevent such harms and reduce incidents.[14] The downside of requiring extensive regulation is the potential to slow new technology and progress. If the benefits of AI technology outweigh its costs, then slowing its development can cause harm of its own.

AI seemingly requires additional and/or new regulations to shape it, but traditional products liability rules and standards should still apply. Privacy professionals Brenda Leong and Jey Kumarasamy suggest that “at a minimum, vendors should explicitly document clients’ exact specifications and subsequent cooperative input to the AI system and include all relevant contractual controls, disclaimers and specific assignment of liability agreements.”[15] While traditional software and products liability laws still apply, they will likely not be a sufficient match to the risks around AI, particularly in high-impact applications such as finance, health care, housing and education. Courts are increasingly willing to hold vendors accountable to similar standards as their enterprise customers for harms to end users, and the U.S. Federal Trade Commission has explicitly warned against behaviors, such as exaggerated marketing claims of accurate or unbiased results, that could trigger FTC enforcement as part of its overall vigilant view toward AI regulation.[16]

2. Current State of AI Law and Regulation

“The hope that AI can be harnessed to help foster fairness and efficiency extends to the work of government too.”[17] The AI Leadership to Enable Accountable Deployment (AI LEAD) Act was introduced in the Senate in 2023 and is in the process of being amended.[18] The act would establish a Chief AI Officers Council and chief AI officer positions within agencies.[19] The “government cannot govern AI if [they] don’t understand [it],”[20] says Stanford Professor Daniel Ho. The AI LEAD Act would create requirements to “help ensure the government is able to properly use and govern the technology.”[21]

We currently have a fairly expansive Executive Order governing AI.[22] On October 30, 2023, President Biden issued EO 14110, calling for the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”[23] This EO establishes numerous standards for AI safety and security, along with provisions focused on protecting Americans’ privacy.[24] One such requirement is that “developers of the most powerful AI systems must notify the federal government when training the model, and must share the results of all red-team safety tests . . . ensur[ing] AI systems are safe, secure, and trustworthy before companies make them public.”[25] The Order also promulgates guidelines for agencies’ use of AI to address how irresponsible use of AI can further discrimination, bias, and other injustices.[26]

Following on the heels of EO 14110, CISA created a five-step plan to “promote the beneficial uses of AI to enhance cybersecurity capabilities, ensure AI systems are protected from cyber-based threats, and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”[27] The Federal Trade Commission (FTC) claims that “many of the decisions concerning the use and implementation of AI in the consumer context can be regulated by Section 5(a) of the FTC Act, which provides that ‘unfair or deceptive acts or practices in or affecting commerce . . . are . . . declared unlawful.’”[28] Therefore, to the extent that AI companies represent or warrant things about their products, liability could stem from untrue or deceptive representations.[29]

3. AI Parameters Moving Forward

AI’s usage will only continue to grow, and professionals are trying to harness its power and the positive results it can usher in. One example is Stanford Law Professor David Engstrom’s research: he is leading “a multiyear project to advise courts on ‘high-volume’ dockets, including debt, eviction, and family cases [where] technology will be a pivotal part, as will examining how courts can leverage AI.”[30] Currently, the U.S. is relying on numerous sectoral, self-regulatory approaches to AI, but the landscape continues to develop in a dynamic manner, with “a sweeping White House executive order, private sector commitments around cutting-edge frontier models, regulatory guidance”[31] and numerous agency guidelines, best practices, and rules. The legislative and regulatory focus is “on the allegedly improper use of protected data (for example, personal or copyrighted data) to develop models and improve products and services.”[32] Congress and the courts should try to balance the progress this ever-changing technology makes possible with encouraging precautions, protecting privacy, applying and enforcing products liability law, and ensuring there are appropriate recourse options.

[1] Tomas Weber, Artificial Intelligence and the Law, Stanford L. Sch. (Dec. 5, 2023), https://law.stanford.edu/stanford-lawyer/articles/artificial-intelligence-and-the-law/.

[2] See id.

[3] Id.

[4] Id.

[5] Defamation, Legal Information Institute, Cornell Law School, https://www.law.cornell.edu/wex/defamation.

[6] Weber, supra note 1.  

[7] Artificial Intelligence, Cybersecurity & Infrastructure Sec. Agency, https://www.cisa.gov/ai#:~:text=As%20noted%20in%20the%20landmark,for%20critical%20infrastructure%20security%20and (last visited Mar. 10, 2024).

[8] Dylan Matthews, Can the Courts Save us From Dangerous AI?, Vox (Feb. 7, 2024), https://www.vox.com/future-perfect/2024/2/7/24062374/ai-openai-anthropic-deepmind-legal-liability-gabriel-weil.

[9] Id.

[10] Strict Liability, Legal Information Institute, Cornell Law School, https://www.law.cornell.edu/wex/strict_liability.

[11] Matthews, supra note 8.

[12] Id.

[13] Id.

[14] See id.

[15] Brenda Leong & Jey Kumarasamy, Third-Party Liability and Product Liability for AI Systems, IAPP (July 26, 2023), https://iapp.org/news/a/third-party-liability-and-product-liability-for-ai-systems/.

[16] Id.

[17] Weber, supra note 1.

[18] Id.

[19] Id.

[20] Id.

[21] Id.

[22] Exec. Order 14,110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Nov. 1, 2023).

[23] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[24] Id.

[25] Id.

[26] See id.

[27] Artificial Intelligence, supra note 7.

[28] Ryan E. Long, Artificial Intelligence Liability: The Rules Are Changing, LSE Business Review (Aug. 16, 2021), https://blogs.lse.ac.uk/businessreview/2021/08/16/artificial-intelligence-liability-the-rules-are-changing/.

[29] See id.

[30] Weber, supra note 1.

[31] Artificial Intelligence Review and Outlook – 2024, Gibson Dunn (Feb. 8, 2024), https://www.gibsondunn.com/artificial-intelligence-review-and-outlook-2024/.

[32] Id.