Spoiled for Choice: AI Regulation Possibilities

William O’Reilly

 

I. Introduction

Americans want innovation, and they believe advancing AI benefits everyone.[1] One proposed way to encourage this is to roll back regulations.[2] Unfortunately, these innovations come part and parcel with several harms likely to result from the inappropriate use of personal and proprietary data and from AI decision-making.[3] One option is to ignore this potential harm and halt regulation so that personal information can spread freely.[4] This option is not in the best interest of the country, because the U.S. is already losing the innovation race in some respects, and innovation can still occur despite heavy regulation. Virginia is the latest state to pursue the “no regulation” strategy, and it provides a good microcosm of the challenges and advantages of this approach.[5] Virginia’s absence of regulation falls on a spectrum of legislation that demonstrates options for states to protect both rights and innovation. As this article discusses further, curbing AI regulation on companies will not advance innovation enough to justify the civil rights violations perpetuated by current AI use.

Continue reading

Privacy and Free Speech in the Age of the Ever-Present Border

Viv Daniel

 

I. Introduction and Legal Background

On his first day in office, President Trump signed Executive Order 14161 (EO 14161), titled “Protecting the United States from Foreign Terrorists and Other National Security and Public Safety Threats.”[1] The Order, as the name might suggest, directs executive agencies to coordinate to enhance screening of foreign nationals coming to, or living within, the United States.[2] The Order instructs these agencies to ensure that non-citizens “are vetted and screened to the maximum degree possible.”[3]

To enforce the provisions of the Order, U.S. Citizenship and Immigration Services (USCIS) has put forward a proposed rule, with comments open until May 5th, to require non-citizens to disclose all of their social media usernames when filling out forms to access immigration benefits.[4] USCIS says it will then use this information to enhance identity verification, vet and screen for national security, and conduct generalized immigration inspections under its purview.[5]

This is not the first time something like this has happened. In 2019, under the previous Trump administration, visa applicants were required to register all recent social media accounts with the government as part of the application,[6] a rule that was upheld when a district judge for the District of Columbia dismissed a case challenging it.[7]

President Trump grounds EO 14161 in his executive authority under the Immigration and Nationality Act (INA).[8] The Act, passed in 1952, was heavily amended in 1996 by the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA), which retroactively made the immigration consequences of certain conduct harsher.[9] Although terrorism as such was not implicated in the Act, the update to the INA was partially motivated by a need to respond to the 1993 World Trade Center bombing, and violent and conspiratorial conduct that could constitute terrorism was covered by the Act.[10]

Although IIRIRA drastically expanded the number of deportable immigrants in the U.S. overnight, subjecting many non-citizens to removal proceedings over minor infractions committed decades ago,[11] the Act did not go so far as to explicitly punish non-citizens for their speech.[12] The executive authority now claimed under the Act to monitor social media, however, aligns with a troubling trend that may change this norm.

Continue reading

LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy

Steve Hammerton

 

I. Introduction

There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapons systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human in the loop. A more exacting analysis of the language, however, reveals that it requires only “human judgment over the use of force,” which seems to refer to broad parameters of lethality, such as when and where a system will be deployed, but not against whom. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement on the “appropriate levels of human judgment,” the lack of distinction in targeting conflicts with the two core jus in bello principles: distinction and proportionality.[6]

At the same time, LAWS may offer a comparative advantage over human trigger pullers. The Canadian think tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce unintended errors and deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by increased use of autonomous and AI-assisted weaponry, though it is too early to say whether these weapons have identifiably reduced unintended civilian casualties.[9] Given the increasing shift to LAWS and other AI-assisted weapons, an outright ban seems unrealistic. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and rigorous review, such as a risk-benefit analysis, that recognizes the inherent dangers of using LAWS while appreciating their potential for harm reduction.

Continue reading

The Growing Dependency on AI in Academia

By: Raaid Bakridi, CIPP/US

I. Introduction

In the 21st century, Artificial Intelligence (“AI”) has become an integral part of daily life. From virtual assistants like Siri and Alexa to machine learning algorithms powering recommendation systems,[1] AI is undeniably everywhere,[2] and it is increasingly normalized in daily life. As U.S. Vice President JD Vance puts it, AI presents an “extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”[3]

AI has also made significant strides in education and academia, offering tools that assist students with research, outlining, essay writing, and even solving complex mathematical and technical problems.[4] However, this convenience comes at a cost. An analysis of AI tutors highlights their potential to enhance education while also raising concerns about overreliance on technology.[5] Rather than using AI as a supplement, many students rely on it to complete their work for them while still receiving credit, which poses challenges to academic integrity and the role of AI in learning.[6] This growing dependence raises concerns about its impact on creativity, critical thinking, overall academic performance, and long-term career prospects. Students are becoming more dependent on AI for their schoolwork, and the dangers of this dependency carry significant implications for their future.[7] If students continue to let AI think for them, the future of our nation will face extreme challenges.

Continue reading

A.I., Facial Recognition, and the New Frontier of Housing Inequality

By: Caroline Aiello

 

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident and has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy when it forcibly installed advanced surveillance systems in her neighborhood, denied her access to information about the systems, and jailed her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To meet their security obligations at these housing complexes, resource-strapped landlords are adopting “landlord tech.”[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and internationally evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

Continue reading

It Will Take a Village to Ensure an Authentic Future for Generation Beta

By: Susan-Caitlyn Seavey

 

Introduction

One of the many glaring issues that future generations will face is the decline in the frequency of in-person human interaction. Today’s technology, especially artificial intelligence (AI), offers unparalleled tools that can be used for the betterment and progression of humanity. For example, new customer service bots, called “conversational agents,” are responding to customer inquiries with efficient, personalized, and human-like responses, “reshaping how we engage with [ ] companies, [and] creating a world where efficiency meets empathy–or at least an impressively convincing facsimile of it.”[1] AI software is also providing efficiency for individuals through multitasking functions, auto-generated answers to questions, and draft responses to texts and emails, saving the user valuable time. However, this technology can also create unrealistic standards and attractive environments that isolate individuals from their reality. Around the globe, AI technology is becoming more normalized and ubiquitous, with software like co-pilot in the workplace and AI robots as companions, friends, and romantic partners at home. The rapid development is “particularly concerning given its novelness, the speed and autonomy at which the technology can operate, and the frequent opacity even to developers of AI systems about how inputs and outputs may be used or exposed.”[2] We face the challenge of balancing the benefits of efficiency and progression of this technology against the risk of being fully consumed by it, at the cost of the youngest members of society.

This powerful technology should be used to embrace reality and continue striving for a better world: one that actually exists off of a screen. Jennifer Marsnik summarized this challenge well by contemplating how society can “maintain authenticity, human intelligence and personal connection in a landscape increasingly dominated by algorithms, data and automation.”[3] Young minds are the most susceptible to the unrealistic standards and depictions AI can create. Considering the difficulty even adults can have in determining whether a visual is real or generated by AI, the young generations, with their still-developing minds, will grow up in a landscape where they do not always know what is authentic and what is not. If society fails to provide safeguards and implement protections around children and their use of ever-progressing technology, future generations could end up stuck in a perpetual cycle of unrealistic expectations and disappointment in the real world, prompting more isolation and leading to the degradation of communities. Preserving authentic relationships and interactions with the real world will require a village: Congress must support new and developing legislation for online safety for children; companies should adopt management frameworks and clearinghouse functions to ensure transparency and accountability in consumer engagement; and parents, teachers, and community leaders must work together to encourage social-emotional learning and in-person interactions among children and teens at home, at school, and in their communities.

Continue reading

Rooting Around in the Dark: Agencies Refusing to Comply with Dent Motions

Emily Burns

 

Introduction 

The Freedom of Information Act (“FOIA”) is the principal mechanism that allows people to request records held by agencies within the federal government.[1] In the immigration context, a very common type of FOIA request is for an A-File, a record of every interaction between a non-citizen and an immigration-related federal agency.[2]

For people in immigration proceedings, obtaining an A-File allows non-citizens and their lawyers to access information crucial to defending against deportation or gaining immigration benefits, such as entry and exit dates from the United States, copies of past applications submitted to federal agencies, or statements made to U.S. officials.[3] To obtain an A-File, non-citizens must affirmatively request the file through FOIA from an agency such as United States Citizenship and Immigration Services (USCIS) or Immigration and Customs Enforcement (ICE).[4] However, one carve-out to this process exists, available only in the Ninth Circuit: Dent motions.[5] Dent motions arise from Dent v. Holder, in which the Ninth Circuit recognized that the government violated Sazar Dent’s right to due process when it required Mr. Dent to request his A-File through FOIA rather than simply handing the file over to him when he requested it in a prior court proceeding.[6]

Continue reading

Honesty is the Best (Privacy) Policy: The Importance of Transparency in Disclosing Data Collection for AI Training

Alexandra Logan

 

Introduction

This past July, the Federal Trade Commission (“FTC”), the Department of Justice, and a number of international antitrust enforcers issued a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The Joint Statement explains that “[f]irms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy . . . it is important that consumers are informed . . . about when and how an AI application is employed in the products and services they purchase or use.” The FTC can investigate alleged unfair and deceptive acts or practices (“UDAP”) under Section 5 of the FTC Act.[2] Consumers are looking for more ways to limit companies’ ability to collect and use their data for AI training purposes,[3] and companies should be vigilant in keeping their privacy policies up to date and thorough. Doing so helps them avoid making deceptive or misleading claims about the data they collect or what they do with it. Recently, X and LinkedIn have come under fire from consumers because of their data collection practices and their ambiguous representations and omissions about how they use consumer data.

Continue reading

Profits Over Privacy: A Confirmation of Tech Giants’ Mass Surveillance and a Call for Social Media Accountability

Aysha Vear

 

In an effort to better understand the data collection and use practices of major social media and video streaming services (SMVSSs), the Federal Trade Commission issued orders under Section 6(b) of the FTC Act[1] requiring nine companies to file Special Reports in 2020.[2] The orders sought to understand how the companies collect, track, and use their consumers’ personal and demographic information; how they handle advertising and targeted advertising; whether they apply algorithms, data analytics, and artificial intelligence (AI) to consumer information; and how their practices impact children and teens.[3] The resulting 2024 report, titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” was four years in the making. A key but unsurprising finding was that the targeted advertising business model was the catalyst for extensive data gathering and harmful behaviors, and that companies failed to protect users, particularly children and teens.[4]

Data Practices and User Rights

Companies involved in the FTC report collected a large amount of data about consumers’ activity on their platforms and also gleaned information about consumers’ activity off the platforms, exceeding user expectations.[5] The Commission found that a massive amount of data was collected or inferred about users, including demographic information, user metrics, and data about their interactions with the network.[6] Many companies did not collect any information at all about users’ changes or updates to their privacy settings on the SMVSSs.[7]

The information also came from many sources. Some of it was input directly by the SMVSS user when creating a profile; some was passively gathered through engagement with the SMVSS; some was culled from other services provided by company affiliates or from other platforms; some was inferred through algorithms, data analytics, and AI; and some came from advertising trackers, advertisers, and data brokers. The collected data was used for many different purposes, including targeted advertising, AI, business functions like optimization and research and development, enhancing and analyzing user engagement, and inferring or deducing other information about the user.[8] In addition, most companies deliberately tracked consumer shopping behaviors and interests.[9] Little transparency, if any, was provided about the targeting, optimization, and analysis of user data.

Continue reading

Anderson v. TikTok: A New Challenge to § 230 Immunity

John Blegen

 

In August 2024, the Third Circuit overturned a Pennsylvania district court decision that had granted summary judgment to TikTok and quashed a suit brought by Tawainna Anderson.[1] Anderson sued on behalf of her deceased daughter, Nylah, alleging products liability, negligence, and wrongful death claims after the ten-year-old died of self-asphyxiation after watching numerous videos TikTok routed to her For You page.[2] The videos, created by third parties and then uploaded to TikTok, encouraged users to choke themselves with “belts, purse strings, or anything similar,” as part of a viral “blackout challenge.”[3] Nylah’s mother found her daughter asphyxiated in the back of a closet after the ten-year-old had tried to recreate one such video.[4]

The District Court for the Eastern District of Pennsylvania originally dismissed Anderson’s complaint on the ground that TikTok was shielded from liability for content created by third parties under § 230 of the Communications Decency Act.[5] On appeal, however, the Third Circuit rejected this defense, holding that while § 230 may protect social media platforms such as TikTok from suit over content provided by third-party users, in this case it was TikTok’s own algorithm that was the subject of the lawsuit.[6] This follows a recent Supreme Court decision, Moody v. NetChoice, which held that the algorithms of social media platforms may themselves be “expressive product” protected under the First Amendment and therefore subject to greater legal scrutiny.[7] In the court’s words: “Because the information that forms the basis of Anderson’s lawsuit – TikTok’s recommendations via its FYP algorithm – is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”[8]

Since the August ruling, commentators have noted how impactful this case could be for internet content regulation and the social media industry at large.[9] David French, a legal commentator at the New York Times, wrote, “Nylah’s case could turn out to be one of the most significant in the history of the internet.”[10] Leah Plunkett, another legal scholar, spoke specifically to the ruling’s impact on companies’ legal counsel: “My best guess is that every platform that uses a recommendation algorithm that could plausibly count as expressive activity . . . woke up in their general counsel’s office and said, ‘Holy Moly.’”[11]

Continue reading