State Data Privacy & Security Law as a Tool for Protecting Legal Adult Use Cannabis Consumers and Industry Employees

By: Nicole Onderdonk

1. Introduction

The legalization of adult use cannabis[1] at the state level, its continued illegality at the federal level, and the patchwork of privacy regulations in the United States have generated interesting academic and practical questions around data privacy and security.[2]  At risk are the consumers and employees participating in the legal recreational cannabis marketplace—particularly, their personal information.[3]  For these individuals, the risks of unwanted disclosure of their personal information, and the potential adverse consequences associated with their participation in the industry, vary significantly depending on the state in which an individual is located.[4]  Further, while these are distinct risks, the unwanted disclosure of personal information held by cannabis market participants may significantly increase the degree and likelihood of an individual experiencing adverse employment-related consequences due to recreational cannabis use.  Therefore, data privacy and security laws can and should be deployed by states as a tool to protect not only legal adult use cannabis consumers’ and employees’ personal information, but also their interests and rights more broadly related to their participation in the legal cannabis market.

Privacy law and cannabis law are both arenas where states are actively engaged in their roles in the federalist system as “laboratories of democracy.”[5]  The state-by-state approaches to protecting consumer and employee data privacy and to legalizing recreational cannabis have taken many shapes and forms, as in other areas of the law where federal law is absent or silent.  This divergence may create problems and concerns,[6] but it may also reveal novel solutions.  Regarding the personal data of recreational cannabis consumers and industry employees, the strongest solution that emerges from an analysis of the current state-by-state legal framework is a hybrid one—taking the most successful aspects of each state’s experimentation and deploying them to protect legal adult use cannabis market participants from collateral adverse consequences.

Privacy Concerns with Health Care Providers’ Use of Personal Devices for Medical Images

By: Deirdre Sullivan

Last year I had to go to urgent care for a second-degree burn on my chest after spilling boiling hot tea on myself. I was surprised when the provider took a photo of my burn, in a relatively sensitive area, with her own cell phone to upload it to my medical file. Seeing my surprise, she assured me that this was done through a secure application and that the photo of my chest was not actually stored on her phone.

The following week, my primary care provider did the same thing to continue tracking the burn’s progress. I expressed the same concerns, and she went further by showing me that the photo was not stored in her camera roll.

While I trusted these two female providers, I was still skeptical and imagined all the ways that this could go wrong for a patient. The practice of using personal devices for imaging is ripe for abuse, and this blog post will explore potential harms to patients as well as liability for health care providers.

Patients have a reasonable expectation of privacy that their images will not be shared beyond what is necessary to provide care, and it is without dispute that the practice of using personal devices to photograph patients violates this expectation. There is a tension between the privacy interests of the patient being photographed and the business needs of the healthcare entity, which seeks to reduce the cost of keeping dedicated devices on hand while increasing providers’ access to cameras for documenting injuries in the medical file or sharing images with other providers for consultation.

First, there are two different ways an image could be captured and stored on a provider’s cell phone. The provider could directly take the image without using a secure app and store it on their phone for purposes of a consult with another provider, or the provider could deceptively take an image under the guise of using a secure app and then hide it from the patient. This could easily happen if a provider switched between a secure healthcare app and their own camera app to take a photo, then concealed it from the patient by showing them the last photo in an album rather than the last photo in their camera roll. A provider could even take a screenshot of a sensitive photo within the secure app.

In either scenario it would be extremely difficult for the patient to catch the violation of their privacy. Most often these photos are not of faces, making them difficult to identify and track once they leave the provider’s phone, whether through intentional sharing or through the phone being stolen or hacked. Further, patients are at a disadvantage and may not know to worry that improper photos have been taken or that sensitive photos are stored on their provider’s phone and distributed to other persons.

A Balancing Act: The State of Free Speech on Social Media for Public Officials

By: Raaid Bakridi

1. Introduction

Blocking someone on social media often seems inconsequential since it’s a digital medium and people do it every day.[1] However, the U.S. Supreme Court has an alternative view, especially when the person who commits the act is a public official. The Court held that, in some instances, public officials can be liable for First Amendment violations when they block anyone from their social media page.[2] Writing for the majority, Justice Barrett adopted a two-prong test to be used in instances involving public officials and their social media accounts because distinguishing between on- and off-the-job activity is frequently a “difficult [line] to draw”[3] and a “fact-intensive inquiry.”[4] The distinction, according to Justice Barrett, “turns on substance, not labels.”[5] But this isn’t the first time that the Court has been asked to weigh in on social media cases where public officials block their critics, cases which by nature involve possible First Amendment and public forum concerns.

2. Background

Former State Assemblyman Dov Hikind filed a lawsuit against Congresswoman Alexandria Ocasio-Cortez for blocking him on Twitter, now known as X. Hikind claimed that the Congresswoman violated his First Amendment rights by blocking him and other individuals critical of her. This raises concerns about politicians’ and public officials’ use of social media and its implications for free speech. Several lower courts have dealt with similar social media blocking issues, and each applied a different approach, leading to a split in authority among the federal circuit courts. When confronted with the issue of blocking, the Second, Fourth, Fifth, Sixth, Eighth, and Ninth Circuits have all used variations of two tests: a totality-of-the-circumstances approach or an appearance-focused approach.[10]

In 2021, the Supreme Court confronted a similar issue involving the then-sitting President of the United States, Donald Trump. A group of individuals, including the Knight First Amendment Institute, filed a lawsuit against the President,[11] alleging that their First Amendment rights were violated after they were blocked for criticizing his policies. The District Court agreed,[12] and the Second Circuit upheld the decision.[13] President Trump then petitioned the Supreme Court for review.[14] After eleven consecutive conferences on the case, the Court sent it back to the Second Circuit to dismiss as moot.[15]

Although no majority opinion was offered, Justice Thomas wrote a detailed concurrence that essentially “highlights the principal legal difficulty that surrounds digital platforms—namely, that applying old doctrines to new digital platforms is rarely straightforward.”[16] Justice Thomas further noted that the case highlights two important facts: “[t]oday’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors … We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”[17] Justice Thomas then concluded that the Trump case was not the right one to do so,[18] and that the Court will have to address constitutional constraints on privately owned digital platforms sooner or later.

Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had in the recent decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to effectively govern these modern tools. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI law, scholars and professionals are limited to discussing different theories of liability that may be suitable for AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, asserting that Professor Lemley’s research was actually a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would and/or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. However, AI transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative means of accounting for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

The Application of Information Privacy Frameworks in Cybersecurity

By Dale Dunn

INTRODUCTION

The frequency of cyberattacks is increasing exponentially, with human-driven ransomware attacks more than doubling in number between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to penetrate legitimate accounts of their target’s employees or the accounts of their target’s third-party service providers’ employees.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combatting these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, in combination with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD will not be sufficient on its own to prevent threat actors from succeeding. The quantity and availability of personal information available today enables threat actors to efficiently bypass security measures.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section covers the FIPPs and PBD. The fourth section provides a case study in which social engineering is utilized by a threat actor to conduct cyberattacks. Finally, the fifth section recommends measures companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper will show that information privacy principles and methodologies should be implemented to reduce the risk of cybersecurity attacks.

Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey

By Benjamin Clark

Introduction and Overview

Artificial intelligence has captivated the current interest of the general public and academics alike, bringing closer attention to previously unexplored aspects of these algorithms, such as how they have been implemented in critical infrastructure, ways they can be secured through technical defensive measures, and how they can best be regulated to reduce the risk of harm. This paper will discuss vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them, before weighing the merits of current regulatory frameworks proposed by the U.S. and other nations and how they address the cybersecurity threats of these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers to train algorithms to associate input features and best predict the labels for output, which involves some degree of human intervention.[3] The presence of humans in this process is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and algorithms are able to categorize an image under a diagnosis based on the image’s characteristics.[4] Similarly, deep learning is a subset of machine learning characterized by its “neural network” structure, in which input data is transmitted through input, output, and “hidden” layers to identify patterns in the data.[5] Deep learning algorithms differ from machine learning algorithms in that they require no human intervention after being trained; instead, deep learning algorithms process unlabeled data, determining which inputs are most important in order to create their own labels.[6]

Surveilled in Broad Daylight: How Electronic Monitoring is Eroding Privacy Rights for Thousands of People in Criminal and Civil Immigration Proceedings

By Emily Burns   

What is electronic monitoring

Electronic monitoring is a digital surveillance mechanism that tracks a person’s movements and activities[1] using radio transmitters, ankle monitors, or cellphone apps.[2] Governmental surveillance through electronic monitoring, used by every state in the U.S. and the federal government, functions as a nearly omnipresent force for people in two particular settings: criminal proceedings and civil immigration proceedings.[3]

In 2021, approximately 254,700 adults were subject to electronic monitoring in the United States, with 150,700 of them in the criminal system and 103,900 in the civil immigration system.[4] While people outside of these systems hold substantial privacy rights against unreasonable governmental searches and seizures of digital materials through Fourth Amendment jurisprudence, the rise of electronic monitoring forces people to “consent” to electronic monitoring in exchange for the ability to be outside of a jail cell.[5]

Within the criminal context, this means that as a condition of supervision, such as parole or probation, certain defendants must consent to “continuous suspicion-less searches” of their electronics and data, such as e-mail, texts, social media, and virtually any other information on their devices.[6]

In the civil immigration context, immigrants, such as asylum seekers, can face a similar “choice”: remain in detention or be released with electronic monitoring.[7] For immigrants in ICE detention on an immigration bond, this “choice” reads more like a plot device from an episode of Black Mirror than the effect of a chosen DHS policy. While people detained on bond in the criminal system are commonly allowed to be released when they pay at least 10 percent of the bond, ICE requires immigrants to pay the full amount of the bond, which is set by statute at a minimum of $1,500 and has a national average of $9,274.[8] If the bond is not paid, immigrants can spend months or even years in ICE detention.[9] Because many bail bond companies view immigration bonds as carrying a higher risk of non-payment, companies either charge extremely high interest rates on the bond contracts that immigrants pay or, as in the case of the company Libre by Nexus, secure the bond by putting an ankle monitor on the bond seeker.[10] For people who must give up their bodily autonomy in order to be released from physical detention by “allowing” a private company to strap an ankle monitor to their body, paying for this indignity comes at a substantial economic cost that many cannot afford: Libre by Nexus charges $420 per month for the ankle monitor, on top of the actual repayment costs of the bond amount.[11][12]

Protecting the Biometric Data of Minor Students

by Devin Forbush

Introduction

At the beginning of this month, in considering topics to comment on and analyze, a glaring issue so close to home presented itself.  In a letter written on January 24, Jamie Selfridge, Principal of Caribou High School, notified parents and guardians of students of an “exciting new development” to be implemented at the school.[1] What is this exciting new development you may ask? It’s the mass collection of biometric data of their student body.[2] For context, biometric data collection is a process to identify an individual’s biological, physical, or behavioral characteristics.[3] This can include the collection of “fingerprints, facial scans, iris scans, palm prints, and hand geometry.”[4]

Presented to parents as a way to enhance accuracy, streamline processes, improve security, and encourage accountability, the identiMetrics software to be deployed at Caribou High School should not be glossed over lightly.[5] While information about Caribou High School’s plan was limited at the time, aside from the Maine Wire website post and the letter sent out to parents and guardians, a brief scan of the identiMetrics website reveals a cost-effective yet in-depth data collection software that gathers over 2 million data points on students every day, while touting that safety and security measures are implemented throughout.[6] While this brief post will not analyze the identiMetrics software as a whole, it will highlight the legal concerns around biometric data collection and make clear that the software sought to be implemented by Caribou High School takes an opt-out approach to collection and forfeits students’ privacy and sensitive data for the purpose of educational efficiency.

Immediately, I started writing a brief blog post on this topic, recognizing the deep-seated privacy issues for minors. Yet the American Civil Liberties Union of Maine beat me to the punch and, on February 13th, submitted a public records request relating to the biometric data collection to be conducted at Caribou High School, citing its concerns.[7] The next day, Caribou High School signaled its intention to abandon the plan.[8] While I was ecstatic with this news, all the work that had been completed on this blog post appeared moot. Yet not all was lost, as upon further reflection this topic raised important considerations. First, information privacy law and the issues related to it are happening in real time and are changing day to day. Second, this topic presents an opportunity to inform individuals in our small state of the nonexistent protections for the biometric data of minors and adults alike. Third, this reflection can set forth proposals that all academic institutions should embrace before they consider collecting highly sensitive information from minor students.

This brief commentary proposes that: (1) academic institutions should not collect the biometric data of their students, due to the gaps in legal protection within federal and state law; (2) if schools decide to proceed with biometric data collection, they must provide written notice to data subjects, parents, and legal guardians specifying (i) each biometric identifier being collected, (ii) the purpose of collection, (iii) the length of time the data will be used and stored, and (iv) the positive rights that parents, legal guardians, and data subjects maintain (e.g., their rights to deletion, withdrawal of consent, objection to processing, portability, and access); and (3) schools must obtain explicit consent, recorded in written or electronic form, acquired in a free and transparent manner.

The Varying Scope of the Trade Secret Exception

By William J. O’Reilly

Introduction

Each of the three state data privacy acts taking effect in 2023 carves out an exception for data that can be considered a “trade secret.”[1] At first blush any exception raises red flags, but this one may have a big enough impact to justify that trepidation. Many businesses could claim that collecting and making inferences about private data is their “trade,” making them exempt from a citizen seeking to exercise their rights. Further, data brokers—who should be the most limited by these laws—likely fit neatly into this exception. While the exact scope of the trade secret exception varies by state, past statutes and case law indicate that the trade secret exception will fulfill privacy advocates’ fears. However, this can be an opportunity for judiciaries to protect citizens’ rights by interpreting such an exception narrowly, consistent with the respective legislature’s purpose. This narrow interpretation is necessary for the full protection of privacy rights.

The Hidden Kraken: Submarine Internet Cables and Privacy Protections

By Christopher Guay

  1. Introduction

Beyond the existential dread associated with the greatest depths of the oceans, there rests one of the most important components of our modern civilization. No, it’s not the eldritch horrors of the deep; it’s the backbone of the internet. Submarine cables carry over “95 percent” of international communications traffic[1] and are key to how our modern internet connects the world. These cables allow communications from one country to reach another. Instead of relying upon satellites or radio technology, physical fiberoptic lines connect the landmasses of the world. That is why someone in the United States can access a British or German website without any major difficulty. At their core, submarine internet cables allow enormous amounts of commerce and communication to occur almost instantaneously.[2] Ultimately, the regulatory structure in the United States offers both significant benefits and significant dangers on the issue of information privacy.

There are two major issues related to submarine internet cables: one concerns government use of data, the other corporate use of data. On the first issue, the United States has accessed and surveilled these submarine internet cables.[3] On the second issue, in the United States there do not appear to be any regulations stopping submarine cable operators from monetizing the information that passes through their cables. This results from the lack of a comprehensive set of privacy regulations similar to the European Union’s General Data Protection Regulation (GDPR)[4] or California’s Consumer Privacy Act (CCPA/CPRA).[5] The lack of comprehensive privacy regulations allows companies and the government to collect vast amounts of data.[6] Advertising is big business, with a lot of money involved.[7] The global digital advertising industry is estimated to have generated $438 billion in revenue in 2021.[8]
