Privacy Needs Security, Security Needs Privacy


William O’Reilly

 

I. Introduction

Security Operations Centers (SOCs) for enterprises across the country are in need of professionals. They need professionals to fill the roles that already exist, and they need to add roles to deal with the changing regulatory landscape. For an enterprise, the best practice is an investment in “people, process, and technology.”[1] It is true that people are the most expensive part of an SOC.[2] However, the shortage does not exist because enterprises around the US are skimping on labor; there simply are not enough trained professionals. The training to become a cybersecurity professional is neither easy nor cheap. Enterprises are endangered by this absence of professionals, and it may be worth it for them to shoulder the cost of education and certification in pursuit of their goal of self-preservation. One cost the enterprise will have to face in hiring professionals is the establishment of career potential and pay. There is also an ongoing cost for organizations that need to provide training to level up their employees over time.[4] Training also assists with the retention of personnel, making it a necessary cost to the enterprise.[5] Finally, burgeoning privacy laws create burdens and liabilities that the SOC in its present form is only partially equipped to deal with. Fortunately, over 20 percent of enterprises plan to increase their investment in cybersecurity post-breach.[6] That investment should include privacy professionals.

Potential employees face costs associated with education and skill development. The cost of training, education, and certifications can limit the number of professionals entering the cybersecurity industry. No SOC will have the same composition or volume, but most SOC services demand that certain roles be filled by professionals with specific training. Legislation is also demanding those roles be filled.[7] Each of these professions has specific responsibilities, which require specific skills, and each of those skills can be represented through certifications.[8] Each of these certifications has a cost. Laying out this cost may illustrate one reason for the dearth of skilled professionals and may show an enterprise the value that a professional expects to get out of their investment.


State Data Privacy & Security Law as a Tool for Protecting Legal Adult Use Cannabis Consumers and Industry Employees


By: Nicole Onderdonk

1. Introduction

The legalization of adult use cannabis[1] at the state level, its continued illegality at the federal level, and the patchwork of privacy regulations in the United States have generated interesting academic and practical questions around data privacy and security.[2]  At risk are the consumers and employees participating in the legal recreational cannabis marketplace—particularly, their personal information.[3]  For these individuals, the risks of unwanted disclosure of their personal information and the potential adverse consequences associated with their participation in the industry vary significantly depending on the state in which an individual is located.[4]  Further, while these are distinct risks, the unwanted disclosure of personal information held by cannabis market participants may significantly increase the degree and likelihood of an individual experiencing adverse employment-related consequences due to recreational cannabis use.  Therefore, data privacy and security laws can and should be deployed by states as a tool to protect not only legal adult use cannabis consumers’ and employees’ personal information, but also their interests and rights more broadly related to their participation in the legal cannabis market.

Privacy law and cannabis law are both arenas where states are actively engaged in their roles in the federalist system as “laboratories of democracy.”[5]  The state-by-state approaches to protecting consumer and employee data privacy and to legalizing recreational cannabis have taken various shapes and forms, akin to other areas of the law where there is an absence or silence at the federal level.  This divergence may create problems and concerns,[6] but it also may reveal novel solutions.  Regarding the personal data of recreational cannabis consumers and industry employees, the strongest solution that emerges from an analysis of the current state-by-state legal framework is a hybrid one—taking the most successful aspects from each state’s experimentation and deploying them to protect legal adult use cannabis market participants from collateral adverse consequences.


The Application of Information Privacy Frameworks in Cybersecurity


By Dale Dunn


INTRODUCTION

The frequency of cyberattacks is increasing exponentially, with human-driven ransomware attacks more than doubling in number between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to penetrate legitimate accounts of their target’s employees or the accounts of their target’s third-party service provider’s employees.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combating these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, in combination with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD will not be sufficient on its own to prevent threat actors from succeeding. The quantity and availability of personal information available today enables threat actors to efficiently bypass security measures.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section covers the FIPPs and PBD. Section four provides a case study in which a threat actor uses social engineering to conduct cyberattacks. Finally, section five recommends measures companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper will show that information privacy principles and methodologies should be implemented to reduce the risk of cybersecurity attacks.


Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey



By Benjamin Clark

Introduction and Overview

Artificial intelligence has captivated the current interest of the general public and academics alike, bringing closer attention to previously unexplored aspects of these algorithms, such as how they have been implemented into critical infrastructure, how they can be secured through technical defensive measures, and how they can best be regulated to reduce the risk of harm. This paper will discuss vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them, before weighing the merits of regulatory frameworks proposed by the U.S. and other nations based on how they address the cybersecurity threats of these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers, so that the algorithm learns to associate input features with the labels it should predict as output, which involves some degree of human intervention.[3] This reliance on human-labeled training data is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and algorithms are able to categorize an image under a diagnosis based on the image’s characteristics.[4] Similarly, deep learning is a subset of machine learning characterized by its “neural network” structure, in which input data is transmitted through input, “hidden,” and output layers to identify patterns in the data.[5] Deep learning algorithms differ from machine learning algorithms in that they require no human intervention after being trained; instead, deep learning algorithms process unlabeled data, determining which input features are most important in order to create their own labels.[6]
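The supervised pattern described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not drawn from any clinical system, and far simpler than a real diagnostic model): humans pre-label the training examples, and the algorithm learns to map feature values to those labels. The feature values and labels here are invented for demonstration only.

```python
# Toy "supervised learning" sketch: a nearest-centroid classifier.
# The labels ("benign"/"malignant") are assigned by humans in advance,
# mirroring how physicians pre-label training images for diagnostic models.

def train(labeled_examples):
    """Compute the mean feature vector (centroid) for each human-assigned label."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the input features."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical pre-labeled data: [lesion size, density] -> diagnosis label.
training_data = [
    ([1.0, 0.2], "benign"), ([1.2, 0.3], "benign"),
    ([3.8, 0.9], "malignant"), ([4.1, 0.8], "malignant"),
]
model = train(training_data)
print(predict(model, [4.0, 0.85]))  # prints "malignant"
```

A deep learning system differs in that, once trained, it would process unlabeled inputs through stacked layers and derive its own internal representation of which features matter, rather than relying on the hand-assigned labels above.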


The Hidden Kraken: Submarine Internet Cables and Privacy Protections



By Christopher Guay

  1. Introduction

Beyond the existential dread associated with the greatest depths of the oceans, there rests one of the most important components of our modern civilization. No, it’s not the eldritch horrors of the deep; it’s the backbone of the internet. Submarine cables carry over “95 percent” of international communications traffic.[1] These cables are key to how our modern internet connects the world, allowing communications from one country to reach another. Instead of relying upon satellites or radio technology, physical fiber-optic lines connect the landmasses of the world. That is why someone in the United States can access a British or German website without any major difficulty. At their core, submarine internet cables allow enormous amounts of commerce and communication to occur almost instantaneously.[2] Ultimately, the regulatory structure in the United States offers both significant benefits and significant dangers on the issue of information privacy.

There are two major issues related to submarine internet cables: one concerns government use of data, and the other corporate use of data. On the first issue, the United States has accessed and surveilled these submarine internet cables.[3] On the second issue, there do not appear to be any United States regulations stopping submarine cable operators from monetizing the information that passes through their cables. This results from the lack of a comprehensive set of privacy regulations similar to the European Union’s General Data Protection Regulation (GDPR)[4] or the California Consumer Privacy Act (CCPA/CPRA).[5] The lack of comprehensive privacy regulations allows companies and the government to collect vast amounts of data.[6] Advertising is big business, with a lot of money involved.[7] The global digital advertising industry is estimated to have generated $438 billion in revenue in 2021.[8]


Privacy in Virtual and Augmented Reality


Devin Forbush, Christopher Guay, & Maggie Shields

A. Introduction

            In this paper, we set out the basics of Augmented and Virtual Reality.  First, we discuss how the technology works and how data is collected.  Second, we analyze what privacy issues arise, and specifically comment on the gravity of privacy concerns that are not contemplated by current laws, given the velocity and volume of data that is collected with this technology.  Third, the final section of this paper analyzes how to mitigate these privacy concerns and what regulation of this technology would ideally look like.  Over the past decade, the advent of augmented reality (AR), mixed reality (MR), and virtual reality (VR) has ushered in a new era of human-computer interactivity.  Although the functions of each reality platform vary, the “umbrella term” XR will be used to address concerns covering all areas of these emerging technologies.[1]  The gaming community might have initially popularized XR, but now, broad industries and economic sectors seek to deploy the new technologies in a variety of contexts: education, healthcare, the workplace, and even fitness.[2]

B. Augmented and Virtual Reality Background

Augmented Reality is “an interface that layers digital content on a user’s visual plane.”[3]  It works by overlaying certain images and objects within the user’s current environment.[4]  This digital layering superimposes images and objects onto the user’s real-world surroundings.[5]  Software developers create AR smartphone applications or products to be worn by users, such as headsets or AR glasses.[6]  In contrast, Virtual Reality seeks to immerse users within an “interactive virtual environment.”[7]  VR seeks to transport the user into a completely new digital environment, or reality, where users can interact with, move within, and behave as they would within the real world.[8]  To enter VR, a user wears a head-mounted device (HMD) which displays a “three-dimensional computer-generated environment.”[9]  Within the environment created, the HMD uses a variety of sensors, cameras, and controls to track a user’s input and provide sights, sounds, and haptic responses.[10]  Mixed reality offers a combination of virtual reality and augmented reality.[11]  In function, mixed reality creates virtual objects superimposed on the real world that behave as if they were real objects.[12]


Generative AI Algorithms: The Fine Line Between Speech and Section 230 Immunity


 By Hannah G. Babinski

ABSTRACT

Russian-American writer and philosopher Ayn Rand once observed, “No speech is ever considered, but only the speaker. It’s so much easier to pass judgment on a man than on an idea.”[1] But what if the speaker is not a man, woman, or a human at all? Concepts of speech and identities of speakers have been the focal points of various court cases and debates in recent years. The Supreme Court and various district courts have faced complex and first-of-their-kind questions concerning emerging technologies, namely algorithms and recommendations, and contemplated whether their outputs constitute speech on behalf of an Internet service provider (“Internet platform”) that would not be covered by Section 230 of the Communications Decency Act (“Section 230”).  In this piece, I will examine some of the issues arising from the questions posed by Justice Gorsuch in Gonzalez v. Google, LLC, namely whether generative AI algorithms and their relative outputs constitute speech that is not immunized under Section 230. I will provide an overview of the technology behind generative AI algorithms and then examine the statutory language and interpretation of Section 230, applying that language and interpretive case law to generative AI. Finally, I will provide demonstrative comparisons between generative AI technology and human content creation and foundational Copyright Law concepts to illustrate how generative AI technologies and algorithmic outputs are akin to unique, standalone products that extend beyond the protections of Section 230.

 


Adding Insult to Injury: How Article III Standing Minimizes Privacy Harms to Victims and Undermines Legislative Authority


By Kristin Hebert, Nicole Onderdonk, Mark A. Sayre, and Deirdre Sullivan

ABSTRACT

            Victims of data breaches and other privacy harms have frequently encountered significant challenges when attempting to pursue relief in the federal courts. Under Article III standing doctrine, plaintiffs must be able to show a concrete and imminent risk of injury. This standard has proved especially challenging for victims of privacy harms, for whom the harm may be more difficult to define or may not yet have occurred (for example, in the case of a data breach where the stolen data has not yet been used). The Supreme Court’s recent decision in TransUnion appears on its face to erect an even higher barrier for victims of privacy harms seeking relief. In this article, the authors provide a background on Article III standing doctrine and its applicability to cases involving privacy harms. Next, the recent TransUnion decision is discussed in detail, along with an overview of the evidence that TransUnion has failed to resolve the ongoing circuit splits in this area. Finally, the authors propose a test from the Second Circuit as a standard that may be able to resolve the ongoing split and support increased access to the courts for victims of privacy harms.

 


 

Balanced Scrutiny – The Necessity of Adopting a New Standard to Combat the Rising Harm of Invasive Technology


By Roosevelt S. Bishop, University of Maine School of Law, Class of 2023

ABSTRACT

The current First Amendment jurisprudence of strict scrutiny is wholly insufficient for fostering a healthy legal landscape regarding the freedom of speech in cyberspace. Technology is outpacing legislative action to address the increasing harms that are prevalent in a society that practically lives online. Consequently, if we, as a society, are to effectively begin addressing the growing danger of the practically protected “expression” of Privacy Invaders, we need first to explore the possibility of a new tier of scrutiny; we need balance. This blueprint for balanced scrutiny will begin by highlighting the harms suffered unequally through the invasion of Intimate Privacy, a term originally coined by premier privacy scholar Danielle Keats Citron. It will then touch on the historical standing and flexibility of the First Amendment. After explaining how cyber harassment and the First Amendment intersect, this study will conclude by proposing a new standard of judicial review to be utilized when addressing laws targeting cyber expression.