The Varying Scope of the Trade Secret Exception

By William J. O’Reilly

 

Introduction

            Each of the three state data privacy acts taking effect in 2023 carves out an exception for data that can be considered a “trade secret.”[1] At first blush any exception raises red flags, but this one may have a big enough impact to justify that trepidation. Many businesses could claim that collecting and making inferences about private data is their “trade,” exempting them from citizens seeking to exercise their rights. Further, data brokers—who should be the most constrained by these laws—likely fit neatly within this exception. While the exact scope of the trade secret exception varies by state, past statutes and case law indicate that the exception will fulfill privacy advocates’ fears. However, this is also an opportunity for the judiciary to protect citizens’ rights by interpreting the exception narrowly, consistent with each legislature’s purpose. That narrow interpretation is necessary for the full protection of privacy rights.

U.S. v. Google LLC: An overview of the landmark antitrust case and its impact on consumer privacy, A.I., and the future of the internet.

By William Simpson

 

I. Intro

The ongoing antitrust case against Google, alleging anticompetitive conduct relating to the company’s search engine, could in the near term result in a breakup of the company or, alternatively, indicate that existing antitrust law is ill-suited to address outsize market shares in the digital economy.[1] On a broader scale, this case could have major effects on consumer privacy, A.I., and the character of the internet going forward. The consequences could be, in a word, enormous.

 

II. Background

 

In October 2020, the Department of Justice (DOJ) filed a complaint against Google, alleging that Google violated the Sherman Antitrust Act[2] when it:

  • Entered into exclusivity agreements that forbid preinstallation of any competing search service;
  • Entered into tying arrangements that force preinstallation of its search applications in prime locations on mobile devices and make them undeletable;
  • Entered into long-term agreements with Apple that require Google to be the default general search engine on Apple’s popular Safari browser and other Apple search tools; and
  • Generally used monopoly profits to buy preferential treatment for its search engine on devices, web browsers, and other search access points, creating a continuous and self-reinforcing cycle of monopolization.[3]

The DOJ’s complaint concludes that such practices harm competition and consumers, inhibiting innovation where new companies cannot “develop, compete, and discipline Google’s behavior.”[4] In particular, the DOJ argues that Google’s conduct injures American consumers who are subject to Google’s “often-controversial privacy practices.”[5]

In response, Google refutes the DOJ’s argument, deeming the lawsuit “deeply flawed.”[6] “People use Google because they choose to,” says a Google spokesperson, “not because they’re forced to or because they can’t find alternatives.”[7] Challenging the DOJ’s claims, Google asserts that any deals that it entered into are analogous to those a popular cereal brand would enter into for preferential aisle placement.[8]

Section 230 and Radicalization Scapegoating

By Hannah G. Babinski, Class of 2024

Standing as one of the few provisions of the Communications Decency Act of 1996 yet to be invalidated by the Court as unconstitutional, 47 U.S.C. § 230 (“Section 230”) has repeatedly been at the center of controversy since its enactment. As the modern world grows ever more dependent on online, electronic communication, that controversy is likely only to grow. Section 230 insulates interactive computer services—think social media websites, message boards, and any other website that enables a third-party user to upload a post, text, video, or other medium of expression—from liability stemming from content uploaded by third-party users, even where the interactive computer service engages in good-faith content moderation. In this regard, the provision effectively classifies the third parties, and not the host website, as the speakers or publishers of content.

Though Section 230 has been instrumental in the development of the internet at large, by preventing needless and substantial litigation and establishing a sense of accountability for individual users in tort generally, the limited language of Section 230 has resulted in several issues of interpretation concerning the line between what actions, specifically content moderation, constitute speech on behalf of the interactive computer service provider and what actions do not. Over the course of the last five years, courts have examined in particular whether algorithms created by and incorporated into the host websites are speech and, thus, unprotected by Section 230.

In Force v. Facebook, Inc., the Court of Appeals for the Second Circuit addressed the question of algorithms as speech in the context of a Facebook algorithm that directed radicalized content, as well as pages openly maintained by and associated with the terrorist organization Hamas, a Palestinian radical Islamist organization, to the personalized newsfeeds of several individuals, who then went on to attack five Americans in Israel between 2014 and 2016.[1]

Though the majority ultimately concluded that the algorithm was protected by Section 230 immunity, Chief Judge Katzmann dissented with a thorough and well-written argument against applying Section 230 immunity to such a case. While I reserve judgment on whether I agree with the dissent in Force v. Facebook, Inc., Katzmann verbalizes the key concern with Section 230 as it applies to social media as a whole, stating:

By surfacing ideas that were previously deemed too radical to take seriously, social media mainstreams them, which studies show makes people “much more open” to those concepts. . . . The sites are not entirely to blame, of course—they would not have such success without humans willing to generate and to view extreme content. Providers are also tweaking the algorithms to reduce their pull toward hate speech and other inflammatory material. . . . Yet the dangers of social media, in its current form, are palpable.[2]

This statement goes to the heart of the controversy surrounding not only algorithms but also exposure to harmful or radicalizing content on the internet generally, a problem exacerbated by the advent and use of social media platforms. The expansive and uninhibited nature of the internet ecosystem, and of social media websites in particular, enables and even facilitates connections between individuals with a proclivity for indoctrination and individuals disseminating radicalized content, free of the traditional restrictions of time, language, or national borders. It is only natural that greater radicalization has resulted. Does this mean that we, as a society, should hinder communication in order to prevent radicalization?

Proponents of dismantling Section 230 and casting the onus on interactive computer service providers to engage in more rigorous substantive moderation efforts would answer that question in the affirmative. However, rather than waging war on the proverbial middleman and laying blame on communication outlets, we should instead concentrate our efforts on the question, acknowledged by Katzmann, of why humans seem more willing to generate and consume extremist content in the modern age. We, as a society, should take responsibility for the increase in radicalized content and vulnerabilities that are resulting in higher individual susceptibility to radicalization, tackling what inspires the speaker as opposed to the tool of speech.

According to findings of the Central Intelligence Agency (“CIA”) and affirmed by the Federal Bureau of Investigation (“FBI”), certain vulnerabilities are almost always present in any violent extremist, regardless of ideology or affiliation; these vulnerabilities include “feeling alone or lacking meaning and purpose in life, being emotionally upset after a stressful event, disagreeing with government policy, not feeling valued or appreciated by society, believing they have limited chances to succeed, [and] feeling hatred toward certain types of people.”[3] As these vulnerabilities are perpetuated by repeated societal and social failures, the number of susceptible individuals will continue to climb.

What’s more, these predispositions are not novel to the age of social media. Undoubtedly, throughout history, we have seen the proliferation of dangerous cults and ideological organizations that radicalize traditional beliefs, targeting the dejected and the isolated in society. For example, political organizations like the National Socialist German Workers’ Party, more infamously known as the Nazi Party; Christianity-based cults and hate organizations like the Peoples Temple, the Children of God, the Branch Davidians, and the Ku Klux Klan; and Buddhist-inspired terrorism groups like Aum Shinrikyo have four things in common: 1) they radicalized impressionable individuals, many of whom experienced some of the vulnerabilities cited above, 2) they brought abuse, harm, or death to members, 3) they facilitated and encouraged abuse, harm, or death to nonmembers, and 4) they reached popularity and obtained their initial members without the help of algorithmic recommendations and social media exposure.

The point is that social media is not to blame for radicalization. Facebook and YouTube’s code-based algorithms that serve to connect individuals with similar interests on social networking sites or organize content based on individualized past video consumption are not to blame for terrorism. We are.

[1] Force v. Facebook, Inc., 934 F. 3d 53 (2d Cir. 2019).

[2] Id.

[3] Cathy Cassata, Why Do People Become Extremists?, Healthline (updated Sept. 18, 2017), https://www.healthline.com/health-news/why-do-people-become-extremists (last visited Feb. 26, 2023).

 

Revenge Porn: The Result of a Lack of Privacy in an Internet-Based Society

Comment

By Shelbie Marie Mora, Class of 2023

I. Introduction

Nonconsensual pornography, also referred to as revenge porn, is “the distribution of sexual or pornographic images of individuals without their consent.”[1] Forty-six U.S. states, the District of Columbia, and the U.S. territory of Puerto Rico have adopted revenge porn laws; however, no federal law prohibits revenge porn. Several countries around the world have adopted revenge porn statutes to protect individuals’ privacy rights and prevent emotional and financial harm. Revenge porn is primarily an issue for women, who are overwhelmingly its targets.[2] Victims whose intimate images are posted online without their consent can suffer major ramifications.

In this paper, I will discuss the rise of revenge porn websites, examine Texas and Vermont’s revenge porn statutes, review case law for each state, and analyze the detriments that the holdings pose to victims of revenge porn. I will next examine Australia, Puerto Rico, and Canada’s revenge porn laws and the penalties imposed for offenders. Lastly, I will assess a failed proposed federal revenge porn law in the United States, discuss where the U.S. falls short on federal legislation, and propose remedies to help protect the privacy of individuals. The United States falls short in revenge porn legislation and must pass a federal law to promote and protect the privacy of Americans and deter this crime.
