Implications of New School Surveillance Methods on Student Data Privacy, National Security, Electronic Surveillance, and the Fourth Amendment

By Amanda Peskin, University of Maryland Francis King Carey School of Law, Class of 2024

Abstract

Since the COVID-19 pandemic, schools have escalated their use of educational technology to improve students’ in-school and at-home learning. Although educational technology has many benefits for students, it has serious implications for students’ data privacy rights. Not only does using technology for educational practices allow schools to surveil their students, but it also exposes students to data collection by the educational technology companies themselves. This paper discusses the legal background of surveilling and monitoring student activity, examines the implications surveillance has for technology, equity, and self-expression, and offers several policy-based improvements to better protect students’ data privacy.

Balanced Scrutiny – The Necessity of Adopting a New Standard to Combat the Rising Harm of Invasive Technology

By Roosevelt S. Bishop, University of Maine School of Law, Class of 2023

Abstract

The current First Amendment jurisprudence of strict scrutiny is wholly insufficient to foster a healthy legal landscape for the freedom of speech in cyberspace. Technology is outpacing legislative action to address the increasing harms that are prevalent in a society that practically lives online. Consequently, if we, as a society, are to effectively begin addressing the growing danger of the practically protected “expression” of Privacy Invaders, we need to first explore the possibility of a new tier of scrutiny; we need balance. This blueprint for balanced scrutiny will begin by highlighting the harms suffered unequally through the invasion of Intimate Privacy, a term coined by premier privacy scholar Danielle Keats Citron. It will then touch on the historical standing and flexibility of the First Amendment. After examining how cyber harassment and the First Amendment intersect, this study will conclude by proposing a new standard of judicial review to be utilized when addressing laws targeting cyber expression.

Section 230 and Radicalization Scapegoating

By Hannah G. Babinski, Class of 2024

Standing as one of the few provisions of the Communications Decency Act of 1996 yet to be invalidated by the Court as unconstitutional, 47 U.S.C. § 230 (“Section 230”) has repeatedly been at the center of controversy since its enactment. As the modern world becomes ever more dependent on online, electronic communication, such controversy is only likely to grow. Section 230 insulates interactive computer services—think social media websites, message boards, and any other website that enables third-party users to upload a post, text, video, or other medium of expression—from liability stemming from content uploaded by third-party users, even where the interactive computer service engages in good-faith content moderation. In this regard, the provision effectively classifies the third parties, and not the host website, as the speakers or publishers of content.

Though Section 230 has been instrumental in the development of the internet at large, preventing needless and substantial litigation and establishing a sense of accountability for individual users in tort, its limited language has resulted in several issues of interpretation concerning the line between which actions, specifically content moderation, constitute speech on behalf of the interactive computer service provider and which do not. Over the course of the last five years, courts have examined in particular whether algorithms created by and incorporated into host websites are speech and, thus, unprotected by Section 230.

In Force v. Facebook, Inc., the Court of Appeals for the Second Circuit addressed the question of algorithms as speech in the context of a Facebook algorithm that directed radicalized content and pages openly maintained by and associated with Hamas, a Palestinian radical Islamist terrorist organization, to the personalized newsfeeds of several individuals, who then went on to attack five Americans in Israel between 2014 and 2016.[1]

Though the majority opinion ultimately concluded that the algorithm was protected by Section 230 immunity, Chief Judge Katzmann dissented with a well-written and thorough argument against applying Section 230 immunity to such a case. Though I reserve judgment on whether I agree or disagree with the dissent in Force v. Facebook, Inc., Katzmann articulates the key concern with Section 230 as it applies to social media as a whole, stating:

By surfacing ideas that were previously deemed too radical to take seriously, social media mainstreams them, which studies show makes people “much more open” to those concepts. . . . The sites are not entirely to blame, of course—they would not have such success without humans willing to generate and to view extreme content. Providers are also tweaking the algorithms to reduce their pull toward hate speech and other inflammatory material. . . . Yet the dangers of social media, in its current form, are palpable.[2]

This statement goes to the heart of the controversy surrounding not only algorithms, but exposure to harmful or radicalizing content on the internet generally, a problem exacerbated by the advent and use of social media platforms. With the expansive and uninhibited nature of the internet ecosystem, and with social media websites enabling and even facilitating connections between individuals with a proclivity for indoctrination and individuals disseminating radicalized content, absent the traditional restrictions of time, language, or national borders, it is only natural that greater radicalization has resulted. Does this mean that we, as a society, should hinder communication in order to prevent radicalization?

Proponents of dismantling Section 230 and casting the onus on interactive computer service providers to engage in more rigorous substantive moderation efforts would answer that question in the affirmative. However, rather than waging war on the proverbial middleman and laying blame on communication outlets, we should instead concentrate our efforts on the question, acknowledged by Katzmann, of why humans seem more willing to generate and consume extremist content in the modern age. We, as a society, should take responsibility for the increase in radicalized content and vulnerabilities that are resulting in higher individual susceptibility to radicalization, tackling what inspires the speaker as opposed to the tool of speech.

According to findings of the Central Intelligence Agency (“CIA”), affirmed by the Federal Bureau of Investigation (“FBI”), certain vulnerabilities are almost always present in violent extremists, regardless of ideology or affiliation; these vulnerabilities include “feeling alone or lacking meaning and purpose in life, being emotionally upset after a stressful event, disagreeing with government policy, not feeling valued or appreciated by society, believing they have limited chances to succeed, [and] feeling hatred toward certain types of people.”[3] As these vulnerabilities are perpetuated by repeated societal failures, the number of susceptible individuals will continue to climb.

What’s more, these predispositions are not novel to the age of social media. Undoubtedly, throughout history, we have seen the proliferation of dangerous cults and ideological organizations that radicalize traditional beliefs, targeting the dejected and the isolated in society. For example, political organizations like the National Socialist German Workers’ Party, more infamously known as the Nazi Party; Christianity-based cults and hate organizations like the Peoples Temple, the Children of God, the Branch Davidians, and the Ku Klux Klan; and Buddhist-inspired terrorist groups like Aum Shinrikyo have four things in common: (1) they radicalized impressionable individuals, many of whom experienced some of the vulnerabilities cited above; (2) they brought abuse, harm, or death to their members; (3) they facilitated and encouraged abuse, harm, or death to nonmembers; and (4) they reached popularity and obtained their initial members without the help of algorithmic recommendations and social media exposure.

The point is that social media is not to blame for radicalization. Facebook and YouTube’s code-based algorithms that serve to connect individuals with similar interests on social networking sites or organize content based on individualized past video consumption are not to blame for terrorism. We are.

[1] Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

[2] Id.

[3] Cathy Cassata, Why Do People Become Extremists?, Healthline (updated Sept. 18, 2017), https://www.healthline.com/health-news/why-do-people-become-extremists (last visited Feb. 26, 2023).

“You Have the Right to Remain Silent(?)”: An Analysis of Courts’ Inconsistent Treatment of the Various Means to Unlock Phones in Relation to the Right Against Self-Incrimination

By Thomas E. DeMarco, University of Maryland Francis King Carey School of Law, Class of 2023[*]

Riley and Carpenter are the most recent examples of the Supreme Court confronting the new challenges technology presents to its existing doctrines surrounding privacy issues. But while the majority of decisions focus on Fourth Amendment concerns regarding questions of unreasonable searches, far less attention has been given to Fifth Amendment concerns. Specifically, how do the Fifth Amendment’s protections against self-incrimination translate to a suspect’s right to refuse to unlock their device so that law enforcement can search it and collect evidence from it? Additionally, how do courts distinguish among the various means of unlocking devices, from passcodes to facial scans?

The Double-Edged Promise of Cryptocurrency: How Innovation Creates New Vulnerabilities and How Government Oversight Can Reduce Crypto Crime

By Jason H. Meuse, University of Maine School of Law, Class of 2023

Abstract

The fallout from the FTX fraud scheme brought the dangers of crypto front and center. Not only did FTX perpetrate a massive fraud, but its fall exposed the cryptocurrency exchange to hacking, resulting in the theft of over $477 million in crypto assets. Such theft is not isolated to FTX; by October 2022, hackers had already stolen over $3 billion in crypto assets. In addition, new organizational structures and technologies in the crypto industry have introduced new vulnerabilities. Cryptocurrency exchanges, decentralized exchanges, and cross-chain bridges are prime targets for hackers to both steal and launder crypto assets. Part of the reason these technologies leave assets vulnerable is that they undermine a central premise of crypto: a currency system accountable to users within a closed ecosystem. While the industry has responded by raising its security standards and procedures, its anti-government attitude has inhibited the cooperation with government that could make the crypto marketplace even more secure. Many firms are incorporated outside of U.S. jurisdiction, lightening the compliance burden at the cost of security. However, establishing industry security standards and cooperating with the government can lead to higher security and greater consumer confidence.

Rethinking the Government’s Role in Private Sector Cybersecurity

By Devon H. Draker, University of Maine School of Law, Class of 2023[1]

Abstract

Cyber-attacks on the private sector through the theft of trade secrets and ransomware attacks threaten U.S. interests at the federal level by undermining U.S. economic competitiveness and funding groups with interests adverse to those of the United States. The federal government can regulate cyberspace under the Commerce Clause, but the current cybersecurity regulatory landscape is ineffective in addressing these harms. It is ineffective because legislation either focuses on bad actors and punishes the proverbial “hacker,” an approach that has no teeth due to limits on jurisdictional reach, or attempts to punish the victim-company in hopes of motivating the development of sufficient safeguards. The missing puzzle piece in solving this issue is “intelligence.” Intelligence, in military terms, is the process of combining information to create an actionable plan that anticipates what the enemy will do based on operational factors. The utility of intelligence in cyberspace is that it allows companies to anticipate not only when they may be attacked, based on trends in their sector, but also what methods would likely be used to carry out the attack. There are two ways that cybersecurity intelligence could be achieved. The first approach involves integrating cybersecurity units from the United States military into the private sector to collect information on attacks and provide intelligence to private-sector companies based on that information gathering. This approach would also allow the U.S. military to maintain its proficiency in the cyberspace domain, a rising concern for U.S. military leaders. The second approach involves expanding the Cybersecurity and Infrastructure Security Agency’s (CISA) regulatory powers to enact mandatory reporting regulations covering more than just “critical infrastructure.” Each approach has its own drawbacks, but both offer significant advantages over the current regulatory landscape.

Digitizing the Fourth Amendment: Privacy in the Age of Big Data Policing

Written by Charles E. Volkwein

Abstract

Today’s availability of massive data sets, inexpensive data storage, and sophisticated analytical software has transformed the capabilities of law enforcement and created new forms of “Big Data Policing.” While Big Data Policing may improve the administration of public safety, these methods endanger constitutional protections against warrantless searches and seizures. This Article explores the Fourth Amendment consequences of Big Data Policing in three parts. First, it provides an overview of Fourth Amendment jurisprudence and its evolution in light of new policing technologies. Next, the Article reviews the concept of “Big Data” and examines three forms of Big Data Policing: Predictive Policing Technology (PPT); data collected by third parties and purchased by law enforcement; and geofence warrants. Finally, the Article concludes with proposed solutions to rebalance the protections afforded by the Fourth Amendment against these new forms of policing.

Say “Bonjour” to New Blanket Privacy Regulations?

The FTC Considers Tightening the Leash on the Commercial Data Free-for-All and Loose Data Security Practices in an Effort to Advance Toward a Framework More Akin to the GDPR

By Hannah Grace Babinski, Class of 2024

On August 11, 2022, the Federal Trade Commission (FTC) issued an Advance Notice of Proposed Rulemaking (ANPR) concerning possible rulemaking surrounding “commercial surveillance” and “lax data security practices”[1] and established a public forum date of September 8, 2022.[2] The FTC’s specific objective for issuing this ANPR is to obtain public input concerning “whether [the FTC] should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.”[3]

Revenge Porn: The Result of a Lack of Privacy in an Internet-Based Society

Comment

By Shelbie Marie Mora, Class of 2023

I. Introduction

Nonconsensual pornography, also referred to as revenge porn, is “the distribution of sexual or pornographic images of individuals without their consent.”[1] Forty-six U.S. states, the District of Columbia, and the U.S. territory of Puerto Rico have adopted revenge porn laws. However, no federal law prohibits revenge porn. Several countries around the world have chosen to adopt revenge porn statutes to protect individuals’ privacy rights and prevent emotional and financial harm. Revenge porn is primarily an issue for women, who are overwhelmingly its targets.[2] Victims who have had their intimate images posted online without their consent can suffer major ramifications.

In this paper, I will discuss the rise of revenge porn websites, examine Texas and Vermont’s revenge porn statutes, review case law from each state, and analyze the detriments that the holdings pose to victims of revenge porn. I will next examine Australia, Puerto Rico, and Canada’s revenge porn laws and the penalties they impose on offenders. Lastly, I will assess a failed proposed federal revenge porn law, discuss where the U.S. falls short on federal legislation, and propose remedies to help protect the privacy of individuals. The United States must pass a federal revenge porn law to promote and protect the privacy of Americans and deter this crime.

Life’s Not Fair. Is Life Insurance?

The rapid adoption of artificial intelligence techniques by life insurers poses increased risks of discrimination, and yet, regulators are responding with a potentially unworkable state-by-state patchwork of regulations. Could professional standards provide a faster mechanism for a nationally uniform solution?

By Mark A. Sayre, Class of 2024

Introduction

Among the broad categories of insurance offered in the United States, individual life insurance is unique in a few key respects that make it an attractive candidate for the adoption of artificial intelligence (AI).[1] First, individual life insurance is a voluntary product, meaning that individuals are not required by law to purchase it in any scenario.[2] As a result, in order to attract policyholders, life insurers must convince customers not only to choose their company over other companies but also to choose their product over other products that might compete for a share of discretionary income (such as the newest gadget or a family vacation). Life insurers can, and do, argue that these competitive pressures provide natural constraints on the industry’s use of practices that the public might view as burdensome, unfair, or unethical, and that such constraints reduce the need for heavy-handed regulation.[3]
