LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy

Steve Hammerton

 

I. Introduction

There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapon systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human in the loop. A more exacting reading, however, reveals that it requires only “human judgment over the use of force,” which appears to reach broad questions of lethality, such as when and where a system will be deployed, but not against whom it will be used. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement of the “appropriate levels of human judgment,” the absence of human judgment in targeting conflicts with the two core jus in bello principles: distinction and proportionality.[6]

At the same time, LAWS may offer a comparative advantage over human trigger pullers. The Canadian think tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce both unintended errors and deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by increased use of autonomous and AI-assisted weaponry, though it is too early to say whether these weapons have identifiably reduced unintended civilian casualties.[9] Given the accelerating shift to LAWS and other AI-assisted weapons, an outright ban seems unrealistic. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and rigorous review, such as a risk-benefit analysis, that recognizes the inherent dangers of using LAWS but appreciates the potential for harm reduction.

The Growing Dependency on AI in Academia

By: Raaid Bakridi CIPP/US

I. Introduction

In the 21st century, Artificial Intelligence (“AI”) has become an integral part of daily life. From virtual assistants like Siri and Alexa to the machine learning algorithms powering recommendation systems,[1] AI is undeniably everywhere,[2] and it is increasingly normalized. As U.S. Vice President JD Vance puts it, AI presents an “extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”[3]

AI has also made significant strides in education and academia, offering tools that assist students with research, outlining, essay writing, and even solving complex mathematical and technical problems.[4] However, this convenience comes at a cost. An analysis of AI tutors highlights their potential to enhance education while also raising concerns about overreliance on technology.[5] Rather than using AI as a supplement, many students rely on it to complete their work for them while still receiving credit, which poses challenges to academic integrity and the role of AI in learning.[6] This growing dependence raises concerns about its impact on creativity, critical thinking, overall academic performance, and long-term career prospects. Students are becoming more dependent on AI for their schoolwork, and the dangers of that dependency carry significant implications for their futures.[7] If students continue to let AI think for them, the future of our nation will face extreme challenges.

A.I., Facial Recognition, and the New Frontier of Housing Inequality

By: Caroline Aiello

 

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident who has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy when it forcibly installed advanced surveillance systems in her neighborhood, denied her access to information about the systems, and jailed her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To maintain security at these housing complexes, resource-strapped landlords are adopting “landlord tech” to meet their security obligations.[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and internationally evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

It Will Take a Village to Ensure an Authentic Future for Generation Beta

By: Susan-Caitlyn Seavey

 

Introduction

One of the many glaring issues that future generations will face is the declining frequency of in-person human interaction. Today’s technology, especially artificial intelligence (AI), offers unparalleled tools that can be used for the betterment and progression of humanity. For example, new customer service bots, called “conversational agents,” are responding to customer inquiries with efficient, personalized, and human-like responses, “reshaping how we engage with [ ] companies, [and] creating a world where efficiency meets empathy–or at least an impressively convincing facsimile of it.”[1] AI software also provides efficiency for individuals through multitasking functions, auto-generated answers to questions, and draft responses to texts and emails, saving the user valuable time. However, this technology can also create unrealistic standards and attractive environments that isolate individuals from their reality. Around the globe, AI technology is becoming more normalized and ubiquitous, with software like Copilot in the workplace and AI robots as companions, friends, and romantic partners at home. The rapid development is “particularly concerning given its novelness, the speed and autonomy at which the technology can operate, and the frequent opacity even to developers of AI systems about how inputs and outputs may be used or exposed.”[2] We face the challenge of balancing the benefits of this technology’s efficiency and progress against the risk of being fully consumed by it, at the cost of our youngest members of society.

This powerful technology should be used to embrace reality and continue striving for a better world; one that actually exists off of a screen. Jennifer Marsnik summarized this challenge well by contemplating how society can “maintain authenticity, human intelligence and personal connection in a landscape increasingly dominated by algorithms, data and automation.”[3] Young minds are the most susceptible to the unrealistic standards and depictions AI can create. Considering the difficulty even adults can have in determining whether a visual is real or generated by AI, the younger generations, with their still-developing minds, will grow up in a landscape of not always knowing what is authentic and what is not. If society fails to provide safeguards and implement protections around children and their use of this ever-progressing technology, future generations could end up stuck in a perpetual cycle of unrealistic expectations and disappointment in the real world, prompting more isolation and leading to the degradation of communities. Preserving authentic relationships and interactions with the real world will require a village: Congress must support new and developing legislation for online safety for children; companies should adopt management frameworks and clearinghouse functions to ensure transparency and accountability in consumer engagement; and parents, teachers, and community leaders must work together to encourage social-emotional learning and in-person interactions in children and teens at home, at school, and in their communities.

Privacy in Death: Conserving your Power in Legacy

Gabriel Siwady-Kattan

 

Introduction

Throughout our lives, we store everything online. This means that a person can not only keep physical assets in a bank, but can also hold digital assets online for access and distribution. Who should be able to access those assets when we die? The IRS defines a digital asset as “a digital representation of value recorded on a cryptographically secure distributed ledger or similar technology” and names as examples convertible virtual currency and cryptocurrency, stablecoins, and Non-Fungible Tokens (NFTs).[1] The IRS further elaborates that “[i]f a particular asset has characteristics of a digital asset, [then] it’s treated as one for federal income tax purposes.”[2] Beyond digital assets with a financial component, however, there are also images, videos, digital documents, and electronically stored music. These could be held by any person, and in our modern age, most people have an account where their digital information is stored, whether with Apple, Google, Facebook, or Instagram. The existence of digital assets has raised many issues, including how to handle the distribution of digital assets at the time of death.

To deal with this issue, the Uniform Law Commission (ULC) drafted the Uniform Fiduciary Access to Digital Assets Act (hereinafter referred to as the Digital Assets Act).[3] This Act essentially treated digital assets as it would any other kind of traditional property a person held at the time of their death.[4] This meant that an executor had nearly unsupervised power to access, manage, and distribute a decedent’s digital assets.[5] Under the Digital Assets Act, an executor had the same access to digital assets as the owner had at the time of their death.[6]

Naturally, this “open-access approach” could raise personal privacy concerns. What if, in the process of getting a decedent’s affairs in order, an executor came across communications with a third party? What if that communication shed light on an unknown aspect of the deceased’s life? What if that communication was meant to remain confidential? And what about that third party’s identity?

On top of these personal privacy concerns, the Digital Assets Act’s provisions ran contrary to some tech companies’ terms of use agreements. Tech companies have their own ways of managing the content on their platforms, and they often control or limit the agency a user might have over their own communications. To this end, tech companies almost always require users to agree to a terms of use agreement, which typically includes provisions on how and with whom data may be shared.

Rooting Around in the Dark: Agencies Refusing to Comply with Dent Motions

Emily Burns

 

Introduction 

The Freedom of Information Act (“FOIA”) is the principal mechanism that allows people to request records held by agencies within the Federal government.[1] In the immigration context, a very common type of FOIA record request is for an A-file, which is a record of every interaction between a non-citizen and an immigration related federal agency.[2]

For people in immigration proceedings, obtaining an A-File allows noncitizens and their lawyers to access information crucial to defending against deportation or gaining immigration benefits, such as entry and exit dates from the United States, copies of past applications submitted to federal agencies, or statements made to U.S. officials.[3] To obtain an A-File, noncitizens must affirmatively request the file through FOIA from an agency such as United States Citizenship and Immigration Services (USCIS) or Immigration and Customs Enforcement (ICE).[4] However, one carve-out to this process exists, available only in the Ninth Circuit: Dent motions.[5] Dent motions arise from Dent v. Holder, in which the Ninth Circuit held that the government violated Sazar Dent’s right to due process when it required him to request his A-File through FOIA rather than summarily handing the file over when he requested it in a prior court proceeding.[6]

Honesty is the Best (Privacy) Policy: The Importance of Transparency in Disclosing Data Collection for AI Training

Alexandra Logan

 

Introduction

This past July, the Federal Trade Commission (“FTC”), the Department of Justice, and a number of international antitrust enforcers issued a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The Joint Statement explains that “[f]irms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy . . . it is important that consumers are informed . . . about when and how an AI application is employed in the products and services they purchase or use.” Alleged unfair and deceptive acts or practices (“UDAP”) can be investigated by the FTC under Section 5 of the FTC Act.[2] Consumers are looking for more ways to limit companies’ ability to collect and use their data for AI training,[3] and companies should be vigilant in keeping their privacy policies up to date and thorough; doing so helps them avoid deceptive or misrepresentative claims about the data they collect and what they do with it. Recently, X and LinkedIn have come under fire from consumers over their data collection practices and their ambiguous representations and omissions about how they use consumer data.

Profits Over Privacy: A Confirmation of Tech Giants’ Mass Surveillance and a Call for Social Media Accountability

Aysha Vear

 

In an effort to better understand the data collection and use practices of major social media and video streaming services (SMVSSs), the Federal Trade Commission issued orders to file Special Reports under Section 6(b) of the FTC Act[1] to nine companies in 2020.[2] The orders sought to understand how the companies collect, track, and use their consumers’ personal and demographic information; how they handle advertising and targeted advertising; whether they apply algorithms, data analytics, and artificial intelligence (AI) to consumer information; and how their practices impact children and teens.[3] Titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” the resulting 2024 report was four years in the making. A key but unsurprising finding was that the targeted advertising business model was the catalyst for extensive data gathering and harmful practices, and that companies failed to protect users, particularly children and teens.[4]

Data Practices and User Rights
Companies involved in the FTC report collected a large amount of data about consumers’ activity on their platforms and also gleaned information about consumers’ activity off the platforms, in ways that exceeded user expectations.[5] The Commission found that a massive amount of data was collected or inferred about users, including demographic information, user metrics, and data about their interactions with the network.[6] Notably, many companies did not collect any information at all about user changes or updates to their privacy settings on the SMVSSs.[7]

The information came from many places as well. Some information on users collected by the companies was directly input by the SMVSS user themselves when creating a profile; passively gathered from information on or through engagement with the SMVSS; culled from other services provided by company affiliates or other platforms; inferred from algorithms, data analytics, and AI; or from advertising trackers, advertisers, and data brokers. Data collected was used for many different purposes including for targeted advertising, AI, business purposes like optimization and research and development, to enhance and analyze user engagement, and to infer or deduce other information about the user.[8] In addition, most companies deliberately tracked consumer shopping behaviors and interests.[9] Little transparency, if any, was provided on the targeting, optimization, and analysis of user data.

Anderson v. TikTok: A New Challenge to § 230 Immunity

John Blegen

 

In August 2024, the Third Circuit overturned a Pennsylvania district court’s decision granting summary judgment to TikTok, reviving a suit brought by Tawainna Anderson.[1] Anderson sued on behalf of her deceased daughter, Nylah, alleging products liability, negligence, and wrongful death claims after the ten-year-old died of self-asphyxiation after watching numerous videos TikTok routed to her “For You” page.[2] The videos, created by third parties and then uploaded to TikTok, encouraged users to choke themselves with “belts, purse strings, or anything similar,” as part of a viral “blackout challenge.”[3] Nylah’s mother found her daughter asphyxiated in the back of a closet after the ten-year-old had tried to recreate one such video.[4]

The District Court for the Eastern District of Pennsylvania originally dismissed Anderson’s complaint on the grounds that TikTok was shielded from liability for content created by third parties under § 230 of the Communications Decency Act.[5] On appeal, however, the Third Circuit rejected this defense, holding that while § 230 may protect social media platforms such as TikTok from suit over content provided by third-party users, in this case it was TikTok’s own algorithm that was the subject of the lawsuit.[6] The ruling follows a recent Supreme Court decision, Moody v. NetChoice, which held that the algorithms of social media platforms may themselves be “expressive product” protected under the First Amendment and, therefore, subject to greater legal scrutiny.[7] In the court’s words: “Because the information that forms the basis of Anderson’s lawsuit – TikTok’s recommendations via its FYP algorithm – is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”[8]

Since the August ruling, commentators have noted how impactful this case could be for internet content regulation and the social media industry at large.[9] David French, a legal commentator at the New York Times, wrote, “Nylah’s case could turn out to be one of the most significant in the history of the internet.”[10] Leah Plunkett, another legal scholar, speaking specifically on the ruling’s impact on companies’ legal counsel, said: “My best guess is that every platform that uses a recommendation algorithm that could plausibly count as expressive activity . . . woke up in their general counsel’s office and said, ‘Holy Moly.’”[11]

Privacy Needs Security, Security Needs Privacy

William O’Reilly

 

I. Introduction

Security Operations Centers (SOCs) for enterprises across the country are in need of professionals. They need professionals to fill the roles that already exist, and they need to add roles to deal with the changing regulatory landscape. For an enterprise, the best practice is an investment in “people, process, and technology.”[1] It is true that people are the most expensive part of an SOC.[2] However, the reason there is a shortage is not that enterprises around the U.S. are skimping on labor; there simply are not enough trained professionals. The training to become a cybersecurity professional is neither easy nor cheap. Enterprises are endangered by this absence of professionals, and it may be worth it for them to shoulder the cost of education and certification in pursuit of their goal of self-preservation. One cost the enterprise will have to face in hiring professionals is the establishment of career potential and pay. There is also an ongoing cost for organizations that need to provide training to level up their employees over time.[4] Training also assists with retention of personnel, making it a necessary cost to the enterprise.[5] Finally, burgeoning privacy laws create burdens and liabilities that the SOC in its present form is only partially equipped to deal with. Fortunately, over 20% of enterprises plan to increase their investment in cybersecurity post-breach.[6] That investment should include privacy professionals.

Potential employees face costs associated with education and skill development. The cost of training, education, and certifications can be a barrier to professionals entering the cybersecurity industry. No SOC will have the same composition or volume, but most SOC services demand that certain roles be filled by professionals with specific training. Legislation is also demanding those roles be filled.[7] Each of these professions has specific responsibilities, which require specific skills, and each of those skills can be represented through certifications.[8] Each of these certifications has a cost. Laying out these costs may illustrate one reason for the dearth of skilled professionals and may show an enterprise the value that a professional expects to get out of their investment.
