A Balancing Act: The State of Free Speech on Social Media for Public Officials

By: Raaid Bakridi

1. Introduction

Blocking someone on social media often seems inconsequential since it’s a digital medium and people do it every day.[1] However, the U.S. Supreme Court takes a different view, especially when the person doing the blocking is a public official. The Court held that, in some instances, public officials can be liable for First Amendment violations when they block anyone from their social media page. Writing for the majority, Justice Barrett adopted a two-prong test for cases involving public officials and their social media accounts because distinguishing between on- and off-the-job activity is frequently a “difficult [line] to draw”[3] and a “fact-intensive inquiry.”[4] The distinction, according to Justice Barrett, “turns on substance, not labels.”[5] But this is not the first time the Court has been asked to weigh in on cases where public officials block their critics on social media, cases which by their nature raise First Amendment and public forum concerns.

 

2. Background

Former State Assemblyman Dov Hikind filed a lawsuit against Congresswoman Alexandria Ocasio-Cortez for blocking him on Twitter, now known as X. Hikind claimed that the Congresswoman violated his First Amendment rights by blocking him and other individuals critical of her. This raises concerns about politicians’ and public officials’ use of social media and its implications for free speech. Several lower courts have dealt with similar social media blocking issues, and each applied a different approach, leading to a split in authority among the federal circuit courts. When confronted with the issue of blocking, the Second, Fourth, Fifth, Sixth, Eighth, and Ninth Circuits have all used variations of two tests: a totality-of-the-circumstances approach or an appearance-focused approach.[10]

In 2021, the Supreme Court confronted a similar issue involving the then-sitting President of the United States, Donald Trump. The Knight First Amendment Institute and a group of individuals filed a lawsuit against the President,[11] alleging that their First Amendment rights were violated when they were blocked for criticizing his policies. The District Court agreed,[12] and the Second Circuit upheld the decision.[13] President Trump then petitioned the Supreme Court for review.[14] After eleven consecutive conferences on the case, the Court vacated the judgment and sent the case back to the Second Circuit with instructions to dismiss it as moot.[15]

Although no majority opinion was offered, Justice Thomas wrote a detailed concurrence that essentially “highlights the principal legal difficulty that surrounds digital platforms—namely, that applying old doctrines to new digital platforms is rarely straightforward.”[16] Justice Thomas further noted that the case highlights two important facts: “[t]oday’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors … We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”[17] Justice Thomas concluded that the Trump case was not the right vehicle to do so,[18] but that the Court will have to address the constitutional constraints on privately owned digital platforms sooner or later.

Continue reading

Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who Is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had over the past decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to govern these modern tools effectively. With the implementation and widespread use of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI law, scholars and professionals are limited to discussing the theories of liability that may be suitable for AI, such as strict liability and negligence.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, asserting that Professor Lemley’s research was actually a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. However, AI transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false, or that the publisher exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative way to account for liability. A guide of best practices may help direct AI development. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Continue reading

The Application of Information Privacy Frameworks in Cybersecurity

By Dale Dunn

INTRODUCTION

The frequency of cyberattacks is increasing sharply, with human-driven ransomware attacks more than doubling in number between September 2022 and June 2023 alone.[1] In the vast majority of attacks, threat actors seek to penetrate the legitimate accounts of their target’s employees or the accounts of employees of the target’s third-party service providers.[2] In the remaining instances, threat actors exploit existing vulnerabilities to penetrate their target’s systems.[3] Combating these attacks requires a holistic, whole-of-society approach.

Current technology and security norms leave room for improvement. The Cybersecurity and Infrastructure Security Agency (CISA) describes current technology products as generally being vulnerable by design (“VbD”).[4] To help companies produce secure products instead, CISA, together with its partners, has proposed the Secure by Design (“SBD”) framework.[5] However, SBD alone will not be sufficient to prevent threat actors from succeeding. The quantity and availability of personal information today enable threat actors to bypass security measures efficiently.

The Fair Information Practice Principles (“FIPPs”) and the Privacy by Design (“PBD”) framework should be implemented in addition to SBD to reduce both the likelihood and the potential harm of successful cybersecurity attacks. The FIPPs are procedures for handling data that mitigate the risk of misuse.[6] PBD is a supplementary method of mitigating the potential harm that can result from data in a system or product.[7] While both the FIPPs and PBD were developed for use with personal information, they can and should apply beyond that specific context as a way of thinking about all data used and protected by information systems.

This paper is arranged in five sections. The first section describes the requirement of reasonable security. The second section explains the Secure by Design framework. The third section explains the FIPPs and PBD. The fourth section provides a case study in which a threat actor uses social engineering to conduct cyberattacks. Finally, the fifth section recommends measures that companies and other organizations should take to implement SBD, the FIPPs, and PBD. In sum, this paper identifies information privacy principles and methodologies that should be implemented to reduce the risk of cyberattacks.

Continue reading