The Varying Scope of the Trade Secret Exception

By William J. O’Reilly

Introduction

            Each of the three state data privacy acts taking effect in 2023 carves out an exception for data that can be considered a “trade secret.”[1] At first blush, any exception raises red flags, but this one may have a big enough impact to justify that trepidation. Many businesses could claim that collecting and making inferences about private data is their “trade,” exempting them from citizens seeking to exercise their rights. Further, Data Brokers—who should be the most limited by these laws—likely fit neatly into this exception. While the exact scope of the trade secret exception varies by state, past statutes and case law indicate that it will fulfill privacy advocates’ fears. However, this is also an opportunity for judiciaries to protect citizens’ rights by interpreting the exception narrowly, consistent with each legislature’s purpose. That narrow interpretation is necessary for the full protection of privacy rights.

The Hidden Kraken: Submarine Internet Cables and Privacy Protections

By Christopher Guay

  1. Introduction

Beyond the existential dread associated with the greatest depths of the oceans, there rests one of the most important components of our modern civilization. No, it’s not the eldritch horrors of the deep; it’s the backbone of the internet. Submarine internet cables carry over “95 percent” of international communications traffic.[1] These cables are key to how our modern internet connects the world, allowing communications from one country to reach another. Instead of relying upon satellites or radio technology, physical fiber-optic lines connect the landmasses of the world. That is why someone in the United States can access a British or German website without any major difficulty. At their core, submarine internet cables allow enormous amounts of commerce and communication to occur almost instantaneously.[2] Ultimately, the regulatory structure in the United States offers both significant benefits and significant dangers on the issue of information privacy.

There are two major issues related to submarine internet cables: one concerns government use of data, and the other corporate use of data. On the first issue, the United States has accessed and surveilled these submarine internet cables.[3] On the second issue, there do not appear to be any U.S. regulations stopping submarine cable operators from monetizing the information that passes through their cables. This results from the lack of a comprehensive set of privacy regulations similar to the European Union’s General Data Protection Regulation (GDPR)[4] or the California Consumer Privacy Act (CCPA/CPRA).[5] The lack of comprehensive privacy regulations allows companies and the government to collect vast amounts of data.[6] Advertising is big business, with a lot of money involved.[7] The global digital advertising industry was estimated to generate $438 billion in revenue in 2021.[8]

U.S. v. Google LLC: An overview of the landmark antitrust case and its impact on consumer privacy, A.I., and the future of the internet.

By William Simpson

I. Intro

The ongoing antitrust case against Google, alleging anticompetitive conduct relating to the company’s search engine, could in the near term result in a breakup of the company or, alternatively, indicate that existing antitrust law is ill-suited to address outsize market shares in the digital economy.[1] On a broader scale, this case could have major effects on consumer privacy, A.I., and the character of the internet going forward. The consequences could be, in a word, enormous.

II. Background

In October 2020, the Department of Justice (DOJ) filed a complaint against Google, alleging that Google violated the Sherman Antitrust Act[2] when it:

  • Entered into exclusivity agreements that forbid preinstallation of any competing search service;
  • Entered into tying arrangements that force preinstallation of its search applications in prime locations on mobile devices and make them undeletable;
  • Entered into long-term agreements with Apple that require Google to be the default general search engine on Apple’s popular Safari browser and other Apple search tools; and
  • Generally used monopoly profits to buy preferential treatment for its search engine on devices, web browsers, and other search access points, creating a continuous and self-reinforcing cycle of monopolization.[3]

The DOJ’s complaint concludes that such practices harm competition and consumers, inhibiting innovation because new companies cannot “develop, compete, and discipline Google’s behavior.”[4] In particular, the DOJ argues that Google’s conduct injures American consumers, who are subject to Google’s “often-controversial privacy practices.”[5]

In response, Google disputes the DOJ’s argument, deeming the lawsuit “deeply flawed.”[6] “People use Google because they choose to,” says a Google spokesperson, “not because they’re forced to or because they can’t find alternatives.”[7] Challenging the DOJ’s claims, Google asserts that any deals it entered into are analogous to those a popular cereal brand would strike for preferential aisle placement.[8]

Privacy in Virtual and Augmented Reality

By Devin Forbush, Christopher Guay, & Maggie Shields

A. Introduction

            In this paper, we set out the basics of Augmented and Virtual Reality.  First, we discuss how the technology works and how data is collected.  Second, we analyze what privacy issues arise, and specifically comment on the gravity of privacy concerns that are not contemplated by current laws, given the velocity and volume of data collected with this technology.  Third, the final section of this paper analyzes how to mitigate these privacy concerns and what regulation of this technology would ideally look like.  Over the past decade, the advent of augmented reality (AR), mixed reality (MR), and virtual reality (VR) has ushered in a new era of human-computer interactivity.  Although the functions of each reality platform vary, the “umbrella term” XR will be used to address concerns covering all areas of these emerging technologies.[1]  The gaming community might have initially popularized XR, but now, broad industries and economic sectors seek to deploy these new technologies in a variety of contexts: education, healthcare, the workplace, and even fitness.[2]

B. Augmented and Virtual Reality Background

Augmented Reality is “an interface that layers digital content on a user’s visual plane.”[3]  It works by overlaying certain images and objects within the user’s current environment.[4]  AR uses digital layering to superimpose images and objects onto the user’s real-world environment.[5]  Software developers create AR smartphone applications or products to be worn by users, such as headsets or AR glasses.[6]  In contrast, Virtual Reality seeks to immerse users within an “interactive virtual environment.”[7]  VR seeks to transport the user into a completely new digital environment, or reality, where users can interact with and move within their surroundings, behaving as they would in the real world.[8]  To enter VR, a user wears a head-mounted device (HMD) which displays a “three-dimensional computer-generated environment.”[9]  Within the environment created, the HMD uses a variety of sensors, cameras, and controls to track a user’s input and provide sights, sounds, and haptic responses.[10]  Mixed reality offers a combination of virtual reality and augmented reality.[11]  In function, mixed reality creates virtual objects superimposed on the real world that behave as if they were real objects.[12]

Blackstone’s Acquisition of Ancestry.com

By Zion Mercado

Blackstone is one of the largest investment firms in the world, boasting over $1 trillion in assets under management.[1] In December of 2020, Blackstone acquired Ancestry.com for a total enterprise value of $4.7 billion.[2] Ancestry is a genealogy service that compiles and stores DNA samples from customers and compares them to the DNA samples of individuals whose lineage can be traced back generations to certain parts of the world.[3] Within Ancestry’s privacy statement, Section 7 states that if Ancestry is acquired or transferred, it may share the personal information of its subscribers with the acquiring entity.[4] This provision was brought into controversy in Bridges v. Blackstone by a pair of plaintiffs representing a putative class consisting of anyone who had their DNA and personal information tested and compiled by Ancestry while residing in the State of Illinois.[5] The suit was brought under the Illinois Genetic Information Privacy Act (“GIPA”), which bars a person or company from “disclos[ing] the identity of any person upon whom a genetic test is performed or the results of a genetic test in a manner that permits identification of the subject of the test” without that person’s permission.[6] In addition to barring disclosure, GIPA may also bar compelled disclosure,[7] which would create a cause of action under the act against third parties who compel an entity to disclose genetic information such as the information compiled by Ancestry. In Bridges, it is clear from the opinion that there was virtually no evidence that Blackstone in any way compelled Ancestry to disclose genetic information.[8] However, the language of the statute seems unclear as to whether third parties who compel a holder of an individual’s genetic information to disclose it can be held liable under GIPA. What does seem clear from the Seventh Circuit’s reading of the statute is that when an entity acquires another entity that holds sensitive personal information or genetic data, the mere acquisition itself is not proof of compelled disclosure within the meaning of the act.[9]

The exact language of GIPA pertaining to potential third-party liability states that “[n]o person may disclose or be compelled to disclose [genetic information].”[10] In Bridges, Blackstone contended that the recipient of protected information could not be held liable under GIPA even if it compelled disclosure.[11] The plaintiffs, in their complaint, could not cite any conduct on Blackstone’s part that would satisfy federal pleading standards for stating a claim that Blackstone compelled Ancestry to disclose information covered under GIPA.[12] This led the judge to set aside the broader issue, raised by Blackstone’s argument, of whether an entity that receives genetic information can be held liable when it compels disclosure of such information.[13] This issue is, in essence, one of statutory interpretation. Blackstone would have courts interpret the language reading “no person may . . . be compelled to disclose” as granting a cause of action only against a defendant who disclosed genetic information “because they were ‘compelled’ to do so.”[14] However, such an instance is already covered by the first part of the phrase: “no person may disclose.”[15] Notably, the Bridges court did not address Blackstone’s interpretation of the statute because the claim failed on the merits; the judge writing the opinion did, however, cite a lack of precedent on the matter.[16] I believe that the Illinois legislature did not intend to write a redundancy into the statute, and that a more protective reading would extend liability to a third party who compels disclosure of genetic information. The very meaning of the word “compel” is “to drive or urge forcefully or irresistibly” or “to cause to do or occur by overwhelming pressure.”[17] This is an act that we as people (and hopefully state legislators as well) would presumably want to limit, especially when what is being compelled is the disclosure of sensitive information, such as the results of a genetic test and the personal information that necessarily accompanies the test. Again, in the plaintiffs’ complaint, there was no evidence proffered indicating that Blackstone in any way compelled disclosure of genetic information from Ancestry.[18] However, if a case were to arise in which such compulsion did occur, we should hope that courts do not side with Blackstone’s interpretation. Although I agree that merely acquiring an entity that holds genetic or other sensitive information should not give rise to liability, and that a mere recipient of such information should not be held liable when it did not compel the holder’s disclosure, an entity, especially an acquiring entity, should not be shielded from liability when it pressures another entity into disclosing the personal information of individuals who have not consented to such disclosure.

[1] Blackstone’s Second Quarter 2023 Supplemental Financial Data, Blackstone (July 20, 2023), at 16, https://s23.q4cdn.com/714267708/files/doc_financials/2023/q2/Blackstone2Q23SupplementalFinancialData.pdf.

[2] Blackstone Completes Acquisition of Ancestry, Leading Online Family History Business, for $4.7 Billion, Blackstone (Dec. 4, 2020), https://www.blackstone.com/news/press/blackstone-completes-acquisition-of-ancestry-leading-online-family-history-business-for-4-7-billion/.

[3] Frequently Asked Questions, Ancestry.com, https://www.ancestry.com/c/dna/ancestry-dna-ethnicity-estimate-update?o_iid=110004&o_lid=110004&o_sch=Web+Property&_gl=1*ot1obs*_up*MQ..&gclid=5aadd61f926315a4ec29b2e4c0d617e8&gclsrc=3p.ds#accordion-ev4Faq (last visited Sept. 8, 2023).

[4] Privacy Statement, Ancestry.com (Jan. 26, 2023), https://www.ancestry.com/c/legal/privacystatement.

[5] Amended Class Action Complaint at 8, Bridges v. Blackstone, No. 21-cv-1091-DWD (S.D. Ill. July 8, 2022), 2022 WL 2643968, at *2.

[6] 410 Ill. Comp. Stat. Ann. 513/30 (LexisNexis 2022).

[7] Id.

[8] See Bridges v. Blackstone, 66 F.4th 687, 689-90 (7th Cir. 2023).

[9] Id. (“[W]e cannot plausibly infer that a run-of-the-mill corporate acquisition, without more alleged about that transaction, results in a compulsory disclosure.”).

[10] 410 Ill. Comp. Stat. Ann. 513/30 (LexisNexis 2022).

[11] Bridges, 66 F.4th at 689.

[12] Id. at 690.

[13] Id. at 689.

[14] Brief of the Defendant-Appellee at 41, Bridges v. Blackstone, 66 F.4th 687 (7th Cir. 2023) (No. 22-2486).

[15] 410 Ill. Comp. Stat. Ann. 513/30 (LexisNexis 2022).

[16] Bridges, 66 F.4th at 689 (Scudder, C.J.) (explaining that “[t]he dearth of Illinois precedent examining GIPA makes this inquiry all the more challenging”).

[17] Compel, Merriam-Webster.com, https://www.merriam-webster.com/dictionary/compel (last visited Sep. 9, 2023).

[18] See Bridges, 66 F.4th at 690.

Generative AI Algorithms: The Fine Line Between Speech and Section 230 Immunity

 By Hannah G. Babinski

ABSTRACT

Russian-American writer and philosopher Ayn Rand once observed, “No speech is ever considered, but only the speaker. It’s so much easier to pass judgment on a man than on an idea.”[1] But what if the speaker is not a man, a woman, or a human at all? Concepts of speech and the identities of speakers have been focal points of various court cases and debates in recent years. The Supreme Court and various district courts have faced complex, first-of-their-kind questions concerning emerging technologies, namely algorithms and recommendations, and have contemplated whether their outputs constitute speech on behalf of an Internet service provider (“Internet platform”) that would not be covered by Section 230 of the Communications Decency Act (“Section 230”).  In this piece, I will examine some of the issues arising from the questions posed by Justice Gorsuch in Gonzalez v. Google, LLC, namely whether generative AI algorithms and their respective outputs constitute speech that is not immunized under Section 230. I will provide an overview of the technology behind generative AI algorithms and then examine the statutory language and interpretation of Section 230, applying that language and interpretive case law to generative AI. Finally, I will provide demonstrative comparisons between generative AI technology and human content creation and foundational Copyright Law concepts to illustrate how generative AI technologies and algorithmic outputs are akin to unique, standalone products that extend beyond the protections of Section 230.

Adding Insult to Injury: How Article III Standing Minimizes Privacy Harms to Victims and Undermines Legislative Authority

By Kristin Hebert, Nicole Onderdonk, Mark A. Sayre, and Deirdre Sullivan

ABSTRACT

            Victims of data breaches and other privacy harms have frequently encountered significant challenges when attempting to pursue relief in the federal courts. Under Article III standing doctrine, plaintiffs must be able to show a concrete and imminent risk of injury. This standard has proved especially challenging for victims of privacy harms, for whom the harm may be difficult to define or may not yet have occurred (for example, in the case of a data breach where the stolen data has not yet been used). The Supreme Court’s recent decision in TransUnion appears on its face to erect an even higher barrier for victims of privacy harms seeking relief. In this article, the authors provide background on Article III standing doctrine and its applicability to cases involving privacy harms. Next, the recent TransUnion decision is discussed in detail, along with an overview of the evidence that TransUnion has failed to resolve the ongoing circuit splits in this area. Finally, the authors propose a test from the Second Circuit as a standard that may be able to resolve the ongoing split and support increased access to the courts for victims of privacy harms.

Implications of New School Surveillance Methods on Student Data Privacy, National Security, Electronic Surveillance, and the Fourth Amendment

By Amanda Peskin, University of Maryland, Francis King Carey School of Law, Class of 2024

Abstract

Since the Covid-19 pandemic, schools have escalated their use of educational technology to improve students’ in-school and at-home learning. Although educational technology has many benefits for students, it has serious implications for students’ data privacy rights. Not only does using technology for educational practices allow schools to surveil their students, but it also exposes students to data collection by the educational technology companies themselves. This paper discusses the legal background of surveilling and monitoring student activity, examines the implications such surveillance has for technology, equity, and self-expression, and offers several policy-based improvements to better protect students’ data privacy.

Balanced Scrutiny – The Necessity of Adopting a New Standard to Combat the Rising Harm of Invasive Technology

By Roosevelt S. Bishop, University of Maine School of Law, Class of 2023

ABSTRACT

The current First Amendment jurisprudence of strict scrutiny is wholly insufficient to foster a healthy legal landscape for the freedom of speech in cyberspace. Technology is outpacing legislative action to address the increasing harms that are prevalent in a society that practically lives online. Consequently, if we, as a society, are to effectively begin addressing the growing danger of the practically protected “expression” of Privacy Invaders, we need to first explore the possibility of a new tier of scrutiny; we need balance. This blueprint for balanced scrutiny will begin by highlighting the harms suffered unequally through the invasion of Intimate Privacy, a term originally coined by premier privacy scholar Danielle Keats Citron. It will then touch on the historical standing and flexibility of the First Amendment. After explaining how cyber harassment and the First Amendment intersect, this study will conclude by proposing a new standard of judicial review to be utilized when addressing laws targeting cyber expression.

Section 230 and Radicalization Scapegoating

By Hannah G. Babinski, Class of 2024

Standing as one of the few provisions of the Communications Decency Act of 1996 yet to be invalidated by the Court as unconstitutional, 47 U.S.C. § 230 (“Section 230”) has repeatedly been at the center of controversy since its enactment. As the modern world continues to become further dependent on online, electronic communication, such controversy is likely to only grow. Section 230 insulates interactive computer services—think social media websites, chat-boards, and any other website that enables a third-party user of the website to upload a post, text, video, or other medium of expression—from liability stemming from content uploaded to the website by third-party users, even where interactive computer services engage in good-faith content moderation. In this regard, the provision effectively serves to classify the third parties, and not the host website, as the speakers or publishers of content.

Though Section 230 has been instrumental in the development of the internet at large, by preventing needless and substantial litigation and establishing a sense of accountability for individual users in tort generally, the limited language of Section 230 has resulted in several issues of interpretation concerning the line between what actions, specifically content moderation, constitute speech on behalf of the interactive computer service provider and what actions do not. Over the course of the last five years, courts have examined in particular whether algorithms created by and incorporated into the host websites are speech and, thus, unprotected by Section 230.

In Force v. Facebook, Inc., the Court of Appeals for the Second Circuit addressed the question of algorithms as speech in the context of a Facebook algorithm that directed radicalized content and pages openly maintained by and associated with Hamas, a Palestinian radical Islamist terrorist organization, to the personalized newsfeeds of several individuals, who then went on to attack five Americans in Israel between 2014 and 2016.[1]

Though the majority opinion ultimately concluded that the algorithm was protected by Section 230 immunity, Chief Judge Katzmann dissented with a well-written and thorough argument against applying Section 230 immunity in such a case. Though I reserve my opinion on whether I agree or disagree with the dissent in Force v. Facebook, Inc., Katzmann articulates the key concern with Section 230 as it applies to social media as a whole, stating:

By surfacing ideas that were previously deemed too radical to take seriously, social media mainstreams them, which studies show makes people “much more open” to those concepts. . . . The sites are not entirely to blame, of course—they would not have such success without humans willing to generate and to view extreme content. Providers are also tweaking the algorithms to reduce their pull toward hate speech and other inflammatory material. . . . Yet the dangers of social media, in its current form, are palpable.[2]

This statement goes to the heart of the controversy surrounding not only algorithms but also exposure to harmful or radicalizing content on the internet generally, a problem exacerbated by the advent and use of social media platforms. With the expansive and uninhibited nature of the internet ecosystem, and with social media websites enabling and even facilitating connections between individuals with a proclivity for indoctrination and individuals disseminating radicalized content, absent the traditional restrictions of time, language, or national borders, it is only natural that greater radicalization has resulted. Does this mean that we, as a society, should hinder communication in order to prevent radicalization?

Proponents of dismantling Section 230 and casting the onus on interactive computer service providers to engage in more rigorous substantive moderation efforts would answer that question in the affirmative. However, rather than waging war on the proverbial middleman and laying blame on communication outlets, we should instead concentrate our efforts on the question, acknowledged by Katzmann, of why humans seem more willing to generate and consume extremist content in the modern age. We, as a society, should take responsibility for the increase in radicalized content and vulnerabilities that are resulting in higher individual susceptibility to radicalization, tackling what inspires the speaker as opposed to the tool of speech.

According to findings of the Central Intelligence Agency (“CIA”), affirmed by the Federal Bureau of Investigation (“FBI”), certain vulnerabilities are almost always present in any violent extremist, regardless of ideology or affiliation; these vulnerabilities include “feeling alone or lacking meaning and purpose in life, being emotionally upset after a stressful event, disagreeing with government policy, not feeling valued or appreciated by society, believing they have limited chances to succeed, [and] feeling hatred toward certain types of people.”[3] As these vulnerabilities are perpetuated by repeated societal and social failures, the number of susceptible individuals will continue to climb.

What’s more, these predispositions are not novel to the age of social media. Undoubtedly, throughout history, we have seen the proliferation of dangerous cults and ideological organizations that radicalize traditional beliefs, targeting the dejected and the isolated in society. For example, political organizations like the National Socialist German Workers’ Party, more infamously known as the Nazi Party; Christianity-based cults and hate organizations like the Peoples Temple, Children of God, Branch Davidians, and the Ku Klux Klan; and Buddhist-inspired terrorism groups like Aum Shinrikyo have four things in common: 1) they radicalized impressionable individuals, many of whom experienced some of the vulnerabilities cited above; 2) they brought abuse, harm, or death to members; 3) they facilitated and encouraged abuse, harm, or death to nonmembers; and 4) they reached popularity and obtained their initial members without the help of algorithmic recommendations and social media exposure.

The point is that social media is not to blame for radicalization. Facebook and YouTube’s code-based algorithms that serve to connect individuals with similar interests on social networking sites or organize content based on individualized past video consumption are not to blame for terrorism. We are.

[1] Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

[2] Id. (Katzmann, C.J., dissenting).

[3] Cathy Cassasta, Why Do People Become Extremists?, Healthline (updated Sept. 18, 2017), https://www.healthline.com/health-news/why-do-people-become-extremists (last visited Feb. 26, 2023).