LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy

Steve Hammerton

 

I. Introduction

There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted one.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapons systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human-in-the-loop. A more exacting analysis of the language, however, reveals that it requires only “human judgment over the use of force,” which appears to refer to broad parameters of lethality, such as when and where a system will be deployed, but not against whom. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement on the “appropriate levels of human judgment,” the lack of distinction in targeting conflicts with the two core jus in bello principles of distinction and proportionality.[6]

At the same time, LAWS may offer a comparative advantage over human trigger pullers. The Canadian think tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce unintended errors or deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by increased use of autonomous and AI-assisted weaponry, though it is too early to say whether these weapons have identifiably reduced unintended civilian casualties.[9] Given the accelerating shift to LAWS and other AI-assisted weapons, an outright ban seems unrealistic. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and rigorous review, such as a risk-benefit analysis, that recognizes the inherent dangers of using LAWS while appreciating their potential for harm reduction.

The Growing Dependency on AI in Academia

By: Raaid Bakridi CIPP/US

I. Introduction

In the 21st century, Artificial Intelligence (“AI”) has become an integral part of daily life. From virtual assistants like Siri and Alexa to the machine learning algorithms powering recommendation systems,[1] AI is undeniably everywhere[2] and increasingly normalized. As U.S. Vice President JD Vance puts it, AI presents an “extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”[3]

AI has also made significant strides in education and academia, offering tools that assist students with research, outlining, essay writing, and even solving complex mathematical and technical problems.[4] However, this convenience comes at a cost. An analysis of AI tutors highlights their potential to enhance education while also raising concerns about overreliance on technology.[5] Rather than using AI as a supplement, many students rely on it to complete their work for them while still receiving credit, which poses challenges to academic integrity and the role of AI in learning.[6] This growing dependence raises concerns about its impact on creativity, critical thinking, overall academic performance, and long-term career prospects. Students are becoming more dependent on AI for their schoolwork, and the dangers of this dependency carry significant implications for their futures.[7] If students continue to let AI think for them, the future of our nation will face extreme challenges.

A.I., Facial Recognition, and the New Frontier of Housing Inequality

By: Caroline Aiello

 

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident and has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy when it forcibly installed advanced surveillance systems in her neighborhood, denied her access to information about the systems, and jailed her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To meet their security obligations at these housing complexes, resource-strapped landlords are adopting “landlord tech.”[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and internationally evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

It Will Take a Village to Ensure an Authentic Future for Generation Beta

By: Susan-Caitlyn Seavey

 

Introduction

One of the many glaring issues that future generations will face is the decline in frequency of in-person human interactions. Today’s technology, especially artificial intelligence (AI), offers unparalleled tools that can be used for the betterment and progression of humanity. For example, new customer service bots, called “conversational agents,” are responding to customer inquiries with efficient, personalized, and human-like responses, “reshaping how we engage with [ ] companies, [and] creating a world where efficiency meets empathy–or at least an impressively convincing facsimile of it.”[1] AI software also provides efficiency for individuals through multitasking functions, auto-generated answers to questions, and draft responses to texts and emails, saving the user valuable time. However, this technology can also create unrealistic standards and attractive environments that isolate individuals from their reality. Around the globe, AI technology is becoming more normalized and ubiquitous, with software like co-pilot in the workplace and AI robots as companions, friends, and romantic partners at home. This rapid development is “particularly concerning given its novelness, the speed and autonomy at which the technology can operate, and the frequent opacity even to developers of AI systems about how inputs and outputs may be used or exposed.”[2] We face the challenge of balancing the benefits of this technology’s efficiency and progress against the risk of being fully consumed by it, at the cost of the youngest members of society.

This powerful technology should be used to embrace reality and continue striving for a better world, one that actually exists off of a screen. Jennifer Marsnik summarized this challenge well by contemplating how society can “maintain authenticity, human intelligence and personal connection in a landscape increasingly dominated by algorithms, data and automation.”[3] Young minds are the most susceptible to the unrealistic standards and depictions AI can create. Considering the difficulty even adults can have in determining whether a visual is real or generated by AI, younger generations, with their still-developing minds, will grow up in a landscape of not always knowing what is authentic and what is not. If society fails to provide safeguards and implement protections around children and their use of this ever-progressing technology, future generations could end up stuck in a perpetual cycle of unrealistic expectations and disappointment in the real world, prompting more isolation and leading to the degradation of communities. Preserving authentic relationships and interactions with the real world will require a village: Congress must support new and developing legislation for online safety for children; companies should adopt management frameworks and clearinghouse functions to ensure transparency and accountability in consumer engagement; and parents, teachers, and community leaders must work together to encourage social-emotional learning and in-person interactions among children and teens at home, at school, and in their communities.

Google’s New AI-Powered Customer Service Tools Spark Back-to-Back Class Action Lawsuits

Zion Mercado 

 

Google recently began rolling out “human-like generative AI powered” customer service tools to help companies enhance their customer service experience.[1] This new service is known as the “Cloud Contact Center AI,” and it touts a full package of features designed to streamline customer service capabilities.[2] Companies that utilize the new service can create virtual customer service agents, access AI-generated insights providing feedback on customer service interactions, store and manage data on a specialized “Contact Center AI Platform,” and consult with Google’s team of experts on how to improve the AI-integrated systems.[3] However, one key feature that has recently come into controversy is the ability for Google’s business customers to utilize real-time AI-generated responses to customer inquiries, which a live agent can then relay back to the customer.[4] This is known as the “Agent Assist” feature.

Agent Assist operates by “us[ing] machine learning technology to provide suggestions to . . . human agents when they are in a conversation with a customer.”[5] These suggestions are based on the company’s own data and conversations.[6] Functionally, when Agent Assist is in use, there are two parties to the conversation: the live customer service agent and the customer. The AI program listens in and generates responses in real time for the live agent. Some have argued that this violates California’s wiretapping statute, alleging that the actions of Google’s AI program, which is nothing more than a complex computer program, are attributable to Google itself.[7] On that theory, Google, through its AI-integrated services, has been listening in on people’s conversations without their consent or knowledge.[8]
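To make that structure concrete, the sketch below is a purely hypothetical model of the flow the complaints describe, not Google’s actual Agent Assist API; the Turn class and suggest_reply function are invented stand-ins. It shows the two human parties to the call, with a third software component receiving each utterance and returning a draft reply in real time for the human agent to relay or discard.

```python
# Purely illustrative sketch of the three-party structure described above -- it does
# NOT use Google's actual Agent Assist API. All names here (Turn, suggest_reply) are
# hypothetical stand-ins for an ML suggestion service.
from dataclasses import dataclass


@dataclass
class Turn:
    speaker: str  # "customer" or "agent"
    text: str


def suggest_reply(conversation: list[Turn]) -> str:
    """Hypothetical suggestion engine: it consumes the transcript so far --
    the 'listening in' the plaintiffs allege -- and returns a draft reply."""
    last = conversation[-1].text.lower()
    if "refund" in last:
        return "I'm sorry about that. May I have your order number to start the refund?"
    return "Thanks for the details. Let me look into that for you."


conversation: list[Turn] = []
conversation.append(Turn("customer", "Hi, I need a refund for my last order."))

# The model sees the customer's words and drafts a response in real time;
# the live human agent decides whether to relay it to the customer.
draft = suggest_reply(conversation)
print("Suggested to agent:", draft)
```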

The wiretapping statute in question is a part of the California Invasion of Privacy Act (“CIPA”), and prohibits the intentional tapping, reading, or any other unauthorized connection, whether physically or otherwise, with any communication being transmitted via line, wire, cable, or instrument without the consent of all parties to the communication.[9] It is also unlawful under the statute to communicate any information so obtained or to aid another in obtaining information via prohibited means.[10]

In 2023, a class action lawsuit was filed against Google on behalf of Verizon customers who alleged that Google “used its Cloud Contact Center AI software as a service to wiretap, eavesdrop on, and record” calls made to Verizon’s customer service center.[11] In that case, District Court Judge Rita F. Lin granted Google’s motion to dismiss on the grounds that the relationship between Google and Verizon, and the use of the Cloud Contact Center AI service, fell squarely within a statutory exception to the wiretapping statute.[12] The wiretapping statute does contain an explicit exception for telephone companies and their agents, which is the exception upon which Judge Lin relied; however, that exception is limited to acts that “are for the purpose of construction, maintenance, conduct or operation of the services and facilities of the public utility or telephone company.”[13]

Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey

By Benjamin Clark

Introduction and Overview

Artificial intelligence has captured the interest of the general public and academics alike, bringing closer attention to previously unexplored aspects of these algorithms, such as how they have been implemented into critical infrastructure, how they can be secured through technical defensive measures, and how they can best be regulated to reduce the risk of harm. This paper will discuss vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them, and will then weigh the merits of the regulatory frameworks currently proposed by the U.S. and other nations for addressing the cybersecurity threats to these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve themselves without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers so that the algorithm learns to associate input features with the correct output labels, which involves some degree of human intervention.[3] The presence of humans in this process is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and the algorithm categorizes an image under a diagnosis based on the image’s characteristics.[4] Deep learning, in turn, is a subset of machine learning characterized by its “neural network” structure, in which input data passes through input, “hidden,” and output layers to identify patterns in the data.[5] Deep learning algorithms differ from other machine learning algorithms in that they require no human intervention after being trained; instead, deep learning algorithms process unlabeled data and determine which input features are most important in order to create their own labels.[6]
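To ground the supervised-learning workflow described above, here is a minimal toy sketch assuming scikit-learn and NumPy; the features and labels are synthetic stand-ins for image-derived characteristics and physician-assigned diagnoses, not code or data from any clinical system discussed in this paper. The point it illustrates is that a human supplies the labels, and the model learns to map features onto them; a deep learning system of the kind described next would instead infer structure from unlabeled inputs.

```python
# Minimal, illustrative sketch of the "supervised" workflow described above,
# assuming scikit-learn and NumPy. Features and labels are synthetic stand-ins
# for image-derived characteristics and physician-assigned diagnoses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical image-derived features (e.g., lesion size and density) ...
X = rng.normal(size=(200, 2))
# ... and pre-assigned labels (0 = benign, 1 = malignant). Supplying these
# labels is the human intervention that makes the learning "supervised."
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns to associate input features with the provided labels ...
clf = LogisticRegression().fit(X_train, y_train)

# ... and is then evaluated on held-out cases it has never seen.
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```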

Generative AI Algorithms: The Fine Line Between Speech and Section 230 Immunity

 By Hannah G. Babinski

ABSTRACT

Russian-American writer and philosopher Ayn Rand once observed, “No speech is ever considered, but only the speaker. It’s so much easier to pass judgment on a man than on an idea.”[1] But what if the speaker is not a man, a woman, or a human at all? Concepts of speech and the identities of speakers have been the focal points of various court cases and debates in recent years. The Supreme Court and various district courts have faced complex, first-of-their-kind questions concerning emerging technologies, namely algorithms and recommendations, and have contemplated whether their outputs constitute speech on behalf of an Internet service provider (“Internet platform”) that would not be covered by Section 230 of the Communications Decency Act (“Section 230”). In this piece, I will examine some of the issues arising from the questions posed by Justice Gorsuch in Gonzalez v. Google, LLC, namely whether generative AI algorithms and their outputs constitute speech that is not immunized under Section 230. I will provide an overview of the technology behind generative AI algorithms and then examine the statutory language and interpretation of Section 230, applying that language and interpretive case law to generative AI. Finally, I will provide demonstrative comparisons between generative AI technology, human content creation, and foundational Copyright Law concepts to illustrate how generative AI technologies and algorithmic outputs are akin to unique, standalone products that extend beyond the protections of Section 230.

 

Life’s Not Fair. Is Life Insurance?

The rapid adoption of artificial intelligence techniques by life insurers poses increased risks of discrimination, and yet, regulators are responding with a potentially unworkable state-by-state patchwork of regulations. Could professional standards provide a faster mechanism for a nationally uniform solution?

By Mark A. Sayre, Class of 2024

Introduction

Among the broad categories of insurance offered in the United States, individual life insurance is unique in a few key respects that make it an attractive candidate for the adoption of artificial intelligence (AI).[1] First, individual life insurance is a voluntary product, meaning that individuals are not required by law to purchase it in any scenario.[2] As a result, in order to attract policyholders, life insurers must convince customers not only to choose their company over other companies but also to choose their product over other products that might compete for a share of discretionary income (such as the newest gadget or a family vacation). Life insurers can, and do, argue that these competitive pressures provide natural constraints on the industry’s use of practices that the public might view as burdensome, unfair, or unethical, and that such constraints reduce the need for heavy-handed regulation.[3]
