Data Sovereignty in the Age of Digital Nationalism: The Case of TikTok and the Global Fragmentation of the Internet

Aysha Vear

 

I. Introduction

Social media has significantly changed the ways in which individuals receive and exchange information. As these applications and platforms have become part of citizens’ everyday lives and ever more deeply woven into their daily interactions, social media regulation has become a clear focal point of legal and political discourse. Today there is growing concern about Chinese influence over, and intrusion into, American citizens’ data. Consequently, the House of Representatives introduced a bill in 2024 to address these fears. H.R. 7521 would force ByteDance, the Chinese parent company that controls the social media platform TikTok, to divest or face a broad federal ban.[1]

TikTok is centered on short videos created and uploaded by users, who can share and interact with networks of content,[2] and it has quickly become one of the most popular apps in the United States.[3] It is “a mass marketplace of trends and ideas and has become a popular news source for young people,”[4] with sixty-two percent of eighteen- to twenty-nine-year-olds saying that they use the app,[5] which reached a billion users in 2021.[6] The app got its start in the U.S. as “Musical.ly” but was acquired by the Chinese company ByteDance in 2018 and rebranded as TikTok.[7] ByteDance is headquartered in Beijing, and it launched “Douyin,” the Chinese equivalent of TikTok, in 2016, prior to the “Musical.ly” acquisition. It is this affiliation with China and the Chinese app that raised concern among United States government officials, and the case represents a growing trend of national governments asserting greater control over digital platforms and the content their citizens consume.

The TikTok case highlights a growing trend of countries treating data governance as a national security issue. Data sovereignty refers to “a state’s sovereign power to regulate not only cross-border flow of data through uses of internet filtering technologies and data localization mandates, but also speech activities . . . and access to technologies.”[8] Governments are introducing laws to prevent foreign control over citizen data, such as China’s Data Security Law and India’s data localization requirements. Because these laws have different aims and approaches to governance, as well as shifting priorities, they have intensified geopolitical competition among the U.S., China, and the EU. While data sovereignty is a necessary framework for global internet governance, its implementation must balance security concerns against the need to prevent fragmentation of the internet as we know it. More countries are scrambling to control the flow of data in and out of their national borders, and “the rise in data localization policies has been a contributing factor in declining internet freedom.”[9] This paper explores the different approaches of the United States, China, and the European Union to controlling cross-border data flows. Next, looking through the specific lens of the TikTok forced divestiture and attacks on other Chinese entities, it examines the growing trend of data sovereignty and attempts to find the balance between national security and digital openness. Finally, the paper suggests possible solutions to the growing need for better collaboration in the digital sphere.

Continue reading

Spoiled for Choice: AI Regulation Possibilities

William O’Reilly

 

I. Introduction

Americans want innovation, and they believe advancing AI benefits everyone.[1] One solution to encourage this is to roll back regulations.[2] Unfortunately, part and parcel with these innovations are several harms likely to result from the inappropriate use of personal and proprietary data and from AI decision-making.[3] One option is to ignore this potential harm and halt regulation, encouraging the spread of personal information.[4] That option is not in the country’s best interest: the U.S. is already losing the innovation race in some respects, and innovation can still occur despite heavy regulation. Virginia is the latest state to pursue the “no regulation” strategy, and it provides a good microcosm for highlighting the challenges and advantages of this approach.[5] Virginia’s absence of regulation falls on a spectrum of legislation that demonstrates options for states to protect both rights and innovation. As this article discusses further, curbing AI regulation on companies will not advance innovation enough to justify the civil rights violations perpetuated by current AI use.

Continue reading

Privacy and Free Speech in the Age of the Ever-Present Border

Viv Daniel

 

I. Introduction and Legal Background

On his first day in office, President Trump signed Executive Order 14161 (EO 14161), titled “Protecting the United States from Foreign Terrorists and Other National Security and Public Safety Threats.”[1] The Order, as the name might suggest, directs executive agencies to coordinate to enhance screening of foreign nationals coming to, or living within, the United States.[2] The Order instructs these agencies to ensure that non-citizens “are vetted and screened to the maximum degree possible.”[3]

To enforce the provisions of the Order, U.S. Citizenship and Immigration Services (USCIS) has put forward a proposed rule, with comments open until May 5th, to require non-citizens to disclose all of their social media usernames when filling out forms to access immigration benefits.[4] USCIS says it will then use this information to enhance identity verification, vet and screen for national security, and conduct generalized immigration inspections under its purview.[5]

This is not the first time something like this has happened. In 2019, under the first Trump administration, visa applicants were required to register all recent social media accounts with the government as part of the application,[6] a rule that was upheld when a district judge for the District of Columbia dismissed a case challenging it.[7]

President Trump grounds EO 14161 in his executive authority under the Immigration and Nationality Act (INA).[8] The Act, passed in 1952, was heavily amended in 1996 by the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA), which retroactively made the immigration consequences of certain conduct harsher.[9] Although terrorism as such was not implicated in the Act, the update to the INA was partially motivated by a need to respond to the 1993 World Trade Center bombing, and violent and conspiratorial conduct that could constitute terrorism was covered by the Act.[10]

Although IIRIRA drastically expanded the number of deportable immigrants in the U.S. overnight, subjecting many non-citizens to removal proceedings over minor infractions committed decades earlier,[11] the Act did not go so far as to explicitly punish non-citizens for their free speech.[12] The executive authority now claimed under the Act to monitor social media, however, aligns with a troubling trend that may change this norm.

Continue reading

LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy

Steve Hammerton

 

I. Introduction

There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapons systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human in the loop. A more exacting analysis of the language, however, reveals that it requires only “human judgment over the use of force,” which seems to refer to broad themes of lethality, such as when and where a system will be deployed, but not against whom. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement on the “appropriate levels of human judgment,” the absence of human judgment in target selection conflicts with the two core jus in bello principles, distinction and proportionality.[6]

At the same time, LAWS may offer a comparative advantage over human trigger pullers. The Canadian think tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce unintended errors or deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by increased use of autonomous and AI-assisted weaponry, though it is too early to say whether these weapons have identifiably reduced unintended civilian casualties.[9] Given the increasing shift toward LAWS and other AI-assisted weapons, an outright ban seems unrealistic. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and rigorous review, such as a risk-benefit analysis, that recognizes the inherent dangers of using LAWS while appreciating their potential for harm reduction.

Continue reading

The Growing Dependency on AI in Academia

By: Raaid Bakridi, CIPP/US

I. Introduction

In the 21st century, Artificial Intelligence (“AI”) has become an integral part of daily life. From virtual assistants like Siri and Alexa to the machine learning algorithms powering recommendation systems,[1] AI is undeniably everywhere,[2] and it is becoming increasingly normalized. As U.S. Vice President JD Vance puts it, AI presents an “extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”[3]

AI has also made significant strides in education and academia, offering tools that assist students with research, outlining, essay writing, and even solving complex mathematical and technical problems.[4] However, this convenience comes at a cost. An analysis of AI tutors highlights their potential to enhance education while also raising concerns about overreliance on technology.[5] Rather than using AI as a supplement, many students rely on it to complete their work for them while still receiving credit, which poses challenges to academic integrity and the role of AI in learning.[6] Students are becoming more dependent on AI for their schoolwork, and this growing dependence raises significant concerns about its impact on creativity, critical thinking, overall academic performance, and long-term career prospects.[7] If students continue to let AI think for them, the future of our nation will face extreme challenges.

Continue reading

A.I., Facial Recognition, and the New Frontier of Housing Inequality

By: Caroline Aiello

 

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident and has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy when it forcibly installed advanced surveillance systems in her neighborhood, denied her access to information about the systems, and jailed her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To meet their security obligations at these housing complexes, resource-strapped landlords are adopting “landlord tech.”[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and internationally evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

Continue reading

It Will Take a Village to Ensure an Authentic Future for Generation Beta

By: Susan-Caitlyn Seavey

 

Introduction

One of the many glaring issues that future generations will face is the declining frequency of in-person human interaction. Today’s technology, especially artificial intelligence (AI), offers unparalleled tools that can be used for the betterment and progression of humanity. For example, new customer service bots, called “conversational agents,” are responding to customer inquiries with efficient, personalized, and human-like responses, “reshaping how we engage with [ ] companies, [and] creating a world where efficiency meets empathy–or at least an impressively convincing facsimile of it.”[1] AI software is also providing efficiency for individuals through multitasking functions, auto-generated answers to questions, and draft responses to texts and emails, saving the user valuable time. However, this technology can also create unrealistic standards and attractive environments that isolate individuals from their reality. Around the globe, AI technology is becoming more normalized and ubiquitous, with software like Copilot in the workplace and AI robots as companions, friends, and romantic partners at home. The rapid development is “particularly concerning given its novelness, the speed and autonomy at which the technology can operate, and the frequent opacity even to developers of AI systems about how inputs and outputs may be used or exposed.”[2] We face the challenge of balancing the benefits of this technology’s efficiency and progress against the risk of being fully consumed by it, a cost that would fall hardest on the youngest members of society.

This powerful technology should be used to embrace reality and continue striving for a better world, one that actually exists off of a screen. Jennifer Marsnik summarized this challenge well by contemplating how society can “maintain authenticity, human intelligence and personal connection in a landscape increasingly dominated by algorithms, data and automation.”[3] Young minds are the most susceptible to the unrealistic standards and depictions AI can create. Considering the difficulty even adults can have in determining whether a visual is real or AI-generated, the youngest generations, with their still-developing minds, will grow up in a landscape where they cannot always know what is authentic and what is not. If society fails to provide safeguards and implement protections around children and their use of ever-progressing technology, future generations could end up stuck in a perpetual cycle of unrealistic expectations and disappointment in the real world, prompting more isolation and leading to the degradation of communities. Preserving authentic relationships and interactions with the real world will require a village: Congress must support new and developing legislation on online safety for children; companies should adopt management frameworks and clearinghouse functions to ensure transparency and accountability in consumer engagement; and parents, teachers, and community leaders must work together to encourage social-emotional learning and in-person interaction among children and teens at home, at school, and in their communities.

Continue reading

Privacy in Death: Conserving your Power in Legacy

Gabriel Siwady-Kattan

 

Introduction

Throughout our lives, we store everything online. A person can not only keep physical assets in a bank but also hold digital assets online for access and distribution. Who should be able to access those assets when we die? The IRS defines a digital asset as “a digital representation of value recorded on a cryptographically secure distributed ledger or similar technology” and names as examples convertible virtual currency and cryptocurrency, stablecoins, and Non-Fungible Tokens (NFTs).[1] The IRS further elaborates that “[i]f a particular asset has characteristics of a digital asset, [then] it’s treated as one for federal income tax purposes.”[2] Beyond digital assets with a financial component, however, there are also images, videos, digital documents, and electronically stored music. These could be held by any person, and in our modern age most people have an account where their digital information is stored, whether with Apple, Google, Facebook, or Instagram. The existence of digital assets has raised many issues, including how to handle their distribution at the time of death.

To deal with this issue, the Uniform Law Commission (ULC) drafted the Uniform Fiduciary Access to Digital Assets Act (hereinafter the Digital Assets Act).[3] The Act essentially treated digital assets as it would any other kind of traditional property a person held at the time of death.[4] This meant that an executor had nearly unsupervised power to access, manage, and distribute a decedent’s digital assets.[5] Under the Digital Assets Act, an executor had the same access to digital assets as the owner had at the time of death.[6]

Naturally, this “open-access approach” could raise personal privacy concerns. What if, in the process of getting a decedent’s affairs in order, an executor came across a communication with a third party? What if that communication shed light on an unknown aspect of the deceased’s life? What if that communication was meant to remain confidential? And what about that third party’s identity?

On top of these personal privacy concerns, the Digital Assets Act’s provisions ran contrary to some tech companies’ terms of use agreements. Tech companies have their own ways of managing the content on their platforms, and they often control or limit the agency a user or consumer has over their own communications. To this end, tech companies almost always require users to agree to a terms of use agreement, which typically includes provisions on how and with whom data may be shared.

Continue reading

Rooting Around in the Dark: Agencies Refusing to Comply with Dent Motions

Emily Burns

 

Introduction 

The Freedom of Information Act (“FOIA”) is the principal mechanism that allows people to request records held by agencies within the federal government.[1] In the immigration context, a very common type of FOIA request is for an A-File, which is a record of every interaction between a non-citizen and an immigration-related federal agency.[2]

For people in immigration proceedings, obtaining an A-File allows non-citizens and their lawyers to access information crucial to defending against deportation or gaining immigration benefits, such as entry and exit dates from the United States, copies of past applications submitted to federal agencies, or statements made to U.S. officials.[3] To obtain an A-File, non-citizens must affirmatively request the file through FOIA from an agency such as United States Citizenship and Immigration Services (USCIS) or Immigration and Customs Enforcement (ICE).[4] However, one carve-out to this process exists, available only in the Ninth Circuit: Dent motions.[5] Dent motions arise from Dent v. Holder, in which the Ninth Circuit recognized that the government violated Sazar Dent’s right to due process when it required Mr. Dent to request his A-File through FOIA rather than simply handing the file over to him when he requested it in a prior court proceeding.[6]

Continue reading

Honesty is the Best (Privacy) Policy: The Importance of Transparency in Disclosing Data Collection for AI Training

Alexandra Logan

 

Introduction

This past July, the Federal Trade Commission (“FTC”), the Department of Justice, and a number of international antitrust enforcers issued a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The Joint Statement explains that “[f]irms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy . . . it is important that consumers are informed . . . about when and how an AI application is employed in the products and services they purchase or use.” Alleged unfair or deceptive acts or practices (“UDAP”) can be investigated by the FTC under Section 5 of the FTC Act.[2] Consumers are looking for more ways to limit companies’ ability to collect and use their data for AI training,[3] and companies should be vigilant in keeping their privacy policies up to date and thorough. Current, accurate policies help companies avoid making deceptive or misrepresentative claims about the data they collect or what they do with it. Recently, X and LinkedIn have come under fire from consumers over their data collection practices and their ambiguous representations and omissions about how they use consumer data.

Continue reading