The Privacy Parlay: How Data Mining and Targeted Ads Drive Gambling Addiction

Emily Weisser

I. Introduction

In the digital age, the gambler is not just the person placing the bets; they are also the data being wagered on. Every click, swipe, and deposit becomes part of a high-stakes game where the house rarely loses. Much like a parlay bet, where every leg must hit for the gambler to win, the modern gambling industry relies on data collection and targeted advertising to increase the number of returning customers, boosting its own profits while building a predictive framework that treats users as inputs rather than individuals. In this “privacy parlay,” the odds are overwhelmingly in favor of the house: the gambling operator.

The first leg of this parlay is the mining of consumer data, drawn from government-mandated identity verification information and voluntary interactions. Operators combine this data to build comprehensive behavioral profiles. The second leg involves monetizing this data through micro-targeted advertising, designed to exploit psychological vulnerabilities and nudge users toward repeated engagement. The third leg uses these insights to promote repeat play, conflating addiction with ordinary customer loyalty.

Despite the immense power of this system, the current regulatory landscape offers fragmented, inconsistent protection for consumers, leaving critical gaps in oversight. This essay explores data-driven gambling in the post-Professional and Amateur Sports Protection Act (“PASPA”) era and discusses the argument that a unified federal framework is necessary to regulate the privacy parlay, ensuring that data-driven gambling operates transparently, ethically, and in a manner that protects consumers from exploitation.

Continue reading

“Don’t Reinvent the Wheel, Just Realign It.”[1] How Lessons from the Belmont Report Can Help Govern the Use of AI in Research

Steven Hammerton

Background

Artificial intelligence (AI) is becoming increasingly integrated into many areas of life, including research. However, legislation and regulation lag behind. Years into the widespread adoption of AI, the United States is still without meaningful guardrails to address the ethical quandaries that stem from its use. Until there is comprehensive legislation, the burden of ensuring the ethical training, development, and usage of AI will fall on the developers, deployers, and users of AI, such as researchers and research participants. This article will explore three different ethical issues associated with AI and how principles from the Belmont Report can guide researchers and other users of AI in their pursuit of ethical AI.

Continue reading

AI Tracking in Small Town Maine?: Real Life Optimization and Our Expectation of Privacy

Viv Daniel

I. Introduction

Increasingly, the intangible world of the internet has been likened to physical space – the concept of the “digital town square,” the term “online space,” and the short-lived promise of the metaverse all come to mind – but recent developments raise the question: Are our physical spaces starting to resemble digital life?

This year, Old Town, Maine, became the latest Bangor-area community to sign up for Placer.ai’s services through the Greater Bangor Recreation Economy for Rural Communities group, which is part of Eastern Maine Development Corporation.[1] The AI service collects location data from the smartphones of people moving in and out of these communities, alongside information about where those phones were immediately before and after moving through the monitored area.[2] The AI also collects personal data about each smartphone’s owner, including income level and other demographic information.[3]

In 2025, many Americans might expect that their movements from site to site online are being tracked, and their data collected along the way. Even in their physical lives, most Americans put up with a certain degree of tracking and data collection in the form of surveillance cameras, cell-site location information (CSLI), and the like.[4] Still, many people would likely be surprised to find that their local government (or that of their vacation destination) had contracted with a private company to track their movements and income. So, why would a city or town sign up for such a tracking program?

Continue reading

Put the Katz Back in the Bag: Restoring Privacy Rights in the Digital Age

Tommy Scherrer

The word “privacy” appears nowhere in the Constitution, yet the Supreme Court has recognized that a constitutional right to privacy emerges from certain “penumbras, formed by emanations” of guarantees in the Bill of Rights.[1] Of these guarantees, that of the Fourth Amendment provides the clearest architecture for a right to privacy by recognizing the individual citizen’s dominion over their “persons, houses, papers, and effects,” and requiring the government to justify any intrusion.[2] This article argues for a restoration of the American privacy regime to this original foundation: enforceable boundaries that empower individuals to control access to their lives.

I. Introduction

The Court complicated the foundations of American privacy rights in Katz v. United States when it reimagined privacy rights as a matter of “reasonable expectations.”[3] That formulation was intended to liberalize the Fourth Amendment and extend its protections beyond physical trespass. However, by grounding privacy rights in what a small group of lawyers believes society recognizes as “reasonable,” the Court detached protection from the concrete boundaries of the Constitution and created an ambiguous standard. As we journey further into the 21st century, and state and private surveillance become normalized as necessary to a secure society, our general expectation of privacy is shrinking rapidly, and our rights are shrinking with it.

The text of the Constitution protects citizens through their persons, houses, papers, and effects—real places and things that anchor enforceable boundaries. Katz inverted that logic by replacing hardline rules with shifting baselines and mistaking trust for consent to surveillance. In the decades that followed, this logic hardened into the third-party doctrine, which holds that any information shared with others loses constitutional protection.[4] The consequences of this doctrine are especially harsh in today’s world, when nearly all personal information flows through third parties. If privacy rights are to remain a foundation of democratic life, they must be grounded in enforceable boundaries. Because today’s data and the inferences drawn from it can reach further into private life than any physical trespass, the protections of the Fourth Amendment must be interpreted with that reality in mind.

Continue reading

Data Sovereignty in the Age of Digital Nationalism: The Case of TikTok and the Global Fragmentation of the Internet

Aysha Vear

I. Introduction

Social media has significantly changed the ways in which individuals receive and exchange information. As these applications and platforms have become part of citizens’ everyday lives and been further incorporated into their daily interactions, social media regulation has become a clear focal point of legal and political discourse. Today, there is growing concern about American citizens’ data with respect to Chinese influence and intrusion. Consequently, the House of Representatives introduced a bill in 2024 to mitigate these fears: H.R. 7521 would force ByteDance, the Chinese parent company of the social media platform TikTok, to divest its ownership or face a broad federal ban.[1]

TikTok is centered on short videos uploaded by users, who are able to create, share, and interact with networks of content,[2] and it has quickly become one of the most popular apps in the United States.[3] It is “a mass marketplace of trends and ideas and has become a popular news source for young people,”[4] with sixty-two percent of eighteen- to twenty-nine-year-olds saying that they use the app,[5] which reached a billion users in 2021.[6] The app got its start in the U.S. as “Musical.ly” but was acquired by the Chinese company ByteDance in 2018 and rebranded as TikTok.[7] ByteDance is headquartered in Beijing, and it launched “Douyin,” the Chinese equivalent of TikTok, in 2016, prior to the “Musical.ly” acquisition. It is this affiliation with China and the Chinese app that raised concern among United States government officials, and this case represents a growing trend of national governments asserting greater control over digital platforms and the content their citizens consume.

This highlights a growing trend of countries treating data governance as a national security issue. Data sovereignty refers to “a state’s sovereign power to regulate not only cross-border flow of data through uses of internet filtering technologies and data localization mandates, but also speech activities . . . and access to technologies.”[8] Governments are introducing laws to prevent foreign control over citizen data, such as China’s Data Security Law and India’s data localization mandates. Because these laws have different aims and approaches to governance, as well as shifting priorities, they have increased geopolitical competition among the U.S., China, and the EU. While data sovereignty is a necessary framework for global internet governance, its implementation must balance security concerns with the need to prevent a fragmentation of the internet as we know it. More countries are scrambling to control the flow of data in and out of their national borders, and, as such, “the rise in data localization policies has been a contributing factor in declining internet freedom.”[9] This paper will explore the different approaches of the United States, China, and the European Union to controlling cross-border data flows. Next, looking through the specific lens of the TikTok forced divestiture and actions against other Chinese entities, it will examine the growing trend of data sovereignty and attempt to find the balance between national security and digital openness. Finally, the paper will suggest possible solutions to the growing need for better collaboration in the digital sphere.

Continue reading

Spoiled for Choice: AI Regulation Possibilities

William O’Reilly

I. Introduction

Americans want innovation, and they believe advancing AI benefits everyone.[1] One proposed way to encourage innovation is to roll back regulations.[2] Unfortunately, part and parcel with these innovations are several harms likely to result from the inappropriate use of personal and proprietary data and from AI decision-making.[3] One option is to ignore this potential harm and halt regulation, encouraging the freer spread of personal information.[4] This option is not in the best interest of the country: the U.S. is already losing the innovation race in some respects, and innovation can still occur despite heavy regulation. Virginia is the latest state to pursue the “no regulation” strategy, and it provides a good microcosm of the challenges and advantages of this approach.[5] Virginia’s absence of regulation falls on a spectrum of legislation that illustrates the options states have to protect rights and innovation. As this article discusses further, curbing AI regulation on companies will not advance innovation enough to justify the civil rights violations perpetuated by current AI use.

Continue reading

Privacy and Free Speech in the Age of the Ever-Present Border

Viv Daniel

I. Introduction and Legal Background

On his first day in office, President Trump signed Executive Order 14161 (EO 14161), titled “Protecting the United States from Foreign Terrorists and Other National Security and Public Safety Threats.”[1] The Order, as the name might suggest, directs executive agencies to coordinate to enhance screening for foreign nationals coming to, or living within, the United States.[2] The Order instructs these agencies to ensure that non-citizens “are vetted and screened to the maximum degree possible.”[3]

To enforce the provisions of the Order, U.S. Citizenship and Immigration Services (USCIS) has put forward a proposed rule, with comments open until May 5th, to require non-citizens to disclose all of their social media usernames when filling out forms to access immigration benefits.[4] USCIS says it will then use this information to enhance identity verification, vet and screen for national security, and conduct generalized immigration inspections under its purview.[5]

This is not the first time something like this has happened. In 2019, under the previous Trump administration, visa applicants were required to register all recent social media accounts with the government as part of their applications,[6] a rule which was upheld when a district judge for the District of Columbia dismissed a case challenging it.[7]

President Trump grounds EO 14161 in his executive authority under the Immigration and Nationality Act (INA).[8] The Act, passed in 1952, was heavily amended in 1996 by the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA), which retroactively made the immigration consequences of certain conduct harsher.[9] Although terrorism as such was not implicated in the act, the update to the INA was partially motivated by a need to respond to the 1993 World Trade Center bombing, and violent and conspiratorial conduct that could constitute terrorism was covered by the act.[10]

Although IIRIRA drastically expanded the number of deportable immigrants in the U.S. overnight, subjecting many non-citizens to removal proceedings over minor infractions committed decades earlier,[11] the act did not go so far as to explicitly punish non-citizens for their free speech.[12] The executive authority now claimed under the Act to monitor social media, however, aligns with a troubling trend that may change this norm.

Continue reading

LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy

Steve Hammerton

I. Introduction

There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapons systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human in the loop. However, a more exacting analysis of the language reveals that it requires only “human judgment over the use of force,” which seems to refer to broad parameters of lethality, such as when and where force will be deployed, but not against whom. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement on the “appropriate levels of human judgment,” the lack of distinction in targeting conflicts with the two core jus in bello principles: distinction and proportionality.[6]

At the same time, LAWS may offer a comparative advantage over human trigger pullers. Canadian think-tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce unintended errors or deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by an increased use of autonomous and AI-assisted weaponry, though it is too early to say whether the use of these weapons has identifiably reduced unintended civilian casualties.[9] With the increasing shift to LAWS and other AI-assisted weapons, it seems unrealistic to expect an outright ban. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and complex review, such as a risk-benefit analysis, that recognizes the inherent dangers of using LAWS but appreciates the potential for harm reduction.

Continue reading

The Growing Dependency on AI in Academia

By: Raaid Bakridi, CIPP/US

I. Introduction

In the 21st century, Artificial Intelligence (“AI”) has become an integral part of daily life. From virtual assistants like Siri and Alexa to machine learning algorithms powering recommendation systems,[1] AI is undeniably everywhere,[2] and it is increasingly becoming normalized. As U.S. Vice President JD Vance puts it, AI presents an “extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”[3]

AI has also made significant strides in education and academia, offering tools that assist students with research, outlining, essay writing, and even solving complex mathematical and technical problems.[4] However, this convenience comes at a cost. An analysis of AI tutors highlights their potential to enhance education while also raising concerns about overreliance on technology.[5] Rather than using AI as a supplement, many students rely on it to complete their work for them while still receiving credit, which poses challenges to academic integrity and the role of AI in learning.[6] This growing dependence raises concerns about its impact on creativity, critical thinking, overall academic performance, and long-term career prospects. Students are becoming more dependent on AI for their schoolwork, and the potential dangers of this dependency raise significant concerns and carry serious implications for their futures.[7] If students continue to let AI think for them, the future of our nation will face extreme challenges.

Continue reading

A.I., Facial Recognition, and the New Frontier of Housing Inequality

By: Caroline Aiello

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident and has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy when it forcibly installed advanced surveillance systems in her neighborhood, denied her access to information about the systems, and jailed her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To meet their security obligations at these housing complexes, resource-strapped landlords are adopting “landlord tech.”[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and internationally evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

Continue reading