House Rules: Addressing Algorithmic Discrimination in Housing through State-Level Rulemaking

William Simpson

 

Introduction

As is the case for many federal agencies,[1] the Department of Housing and Urban Development (HUD) is intent on addressing the risk of algorithmic discrimination within its primary statutory domain—housing. But in the wake of Loper Bright,[2] which overturned Chevron[3] deference, and with it the general acquiescence of federal courts to agency interpretations of relevant statutes, HUD is forced to regulate AI and algorithmic decision-making in the housing context through guidance documents and other soft law mechanisms.[4] Such quasi-regulation impairs the efficacy of civil rights laws like the Fair Housing Act[5] (FHA) and subjects marginalized groups to continued, and perhaps increasingly insidious,[6] discrimination. With HUD hobbled in its ability to effectuate meaningful AI regulation, states like Maine—which remains a Chevron state—must step up within their respective jurisdictions to ensure that algorithmic discrimination is mitigated in the housing sector.

 

A Brief Primer on Chevron and Loper Bright

In 1984, the Supreme Court held that where a “statute is silent or ambiguous with respect to a specific issue . . . a [federal] court may not substitute its own construction of [the statute] for a reasonable interpretation made by the administrator of an agency.”[7] In other words, where an agency interpretation of an ambiguous statute is reasonable, a court must defer to the agency. Proponents of Chevron deference have heralded the opinion for its placement of policy decisions in the hands of expert and politically accountable agencies,[8] whereas detractors deemed it a violation of the separation of powers doctrine.[9] In June 2024, the detractors won out.

“Chevron is overruled,” wrote Chief Justice John Roberts.[10] To wit, “courts need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”[11] Roberts rested his opinion on the separation of powers principle,[12] a textualist construction of § 706 of the Administrative Procedure Act,[13] a historical analysis,[14] the assurance of Skidmore deference,[15] and the fact that Chevron was subject to numerous “refinements” over the years.[16]

It goes without saying that this jurisprudential U-turn has profound implications for HUD and the statutes it implements.[17] As a result of Chevron’s demise, “any rulemaking proposed by HUD . . . may be more vulnerable to lawsuits than in years past.”[18] In particular, HUD relies on the FHA to authorize its policies; the statute “broadly describes . . . prohibited discriminatory conduct,” which HUD interprets “into enforceable directives to serve Congress’ stated goals.”[19] Without Chevron deference, HUD’s interpretations of the FHA are certain to be questioned, and significant barriers will arise for Americans facing housing discrimination.[20]

 

HUD’s Effort to Combat Algorithmic Discrimination in a Post-Chevron Paradigm

In apparent anticipation of such challenges to its interpretations, HUD has resorted to soft law mechanisms like guidance documents to combat algorithmic discrimination. Importantly, these informal mechanisms do not carry the force of law, and are therefore outside the scope of Chevron deference and unaffected by the Loper Bright decision.[21] Such documents include HUD’s “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,”[22] and “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms.”[23] The former pronouncement examines how housing providers and tenant screening services can evaluate rental applicants in a nondiscriminatory way—including by choosing relevant screening criteria, using accurate records, remaining transparent with applicants and allowing them to challenge decisions, and designing screening models for FHA compliance.[24] Of note, the document confirms that the FHA “applies to housing decisions regardless of what technology is used” and that “[b]oth housing providers and tenant screening companies have a responsibility to avoid using these technologies in a discriminatory manner.”[25]

The latter document, in turn, “addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence . . . to facilitate advertisement targeting and delivery” in housing-related transactions.[26] Like tenant screening services, algorithmic targeting and delivery of advertisements “risks violating the [FHA] when used for housing-related ads,” and can implicate both advertisers and ad platforms.[27] For example, liability may arise from using algorithmic tools to “segment and select potential audiences by [protected] category,” “deliver ads only to a specified ‘custom’ audience,” or “decide which ads are actually delivered to which consumers, and at what location, time, and price.”[28] The document recommends that advertisers use ad platforms that proactively mitigate discriminatory practices and that they “monitor outcomes of ad[] campaigns for housing-related ads.”

Indeed, “[w]hile the guidance represents an important step forward in safeguarding housing rights, it isn’t currently more than a suggestion to housing providers.”[29] Hence the dilemma facing regulators in this post-Chevron paradigm: issue a formal rule that will provide the intended protection but is prone to litigation, or deliver informal pronouncements that remain largely immune to challenge but fail to offer enforceable requirements against harmful practices.[30] As this administrative predicament persists, it is state governments, including Maine, that must fill the resulting void.


Privacy Concerns with Health Care Providers’ Use of Personal Devices for Medical Images

By: Deirdre Sullivan

Last year I had to go to urgent care for a second-degree burn on my chest after spilling boiling hot tea on myself. I was surprised when the provider took a photo of my burn, in a relatively sensitive area, with her own cell phone to upload to my medical file. Seeing my surprise, she assured me that the photo was taken through a secure application and was not actually stored on her phone.

 

The following week, my primary care provider did the same thing to continue tracking the burn’s progress. I expressed the same concerns, and she went further, showing me that the photo was not stored in her camera roll.

 

While I trusted these two female providers, I was still skeptical and imagined all the ways that this could go wrong for a patient. The practice of using personal devices for imaging is ripe for abuse, and this blog post will explore potential harms to patients as well as liability for health care providers.

 

Patients have a reasonable expectation that their images will not be shared beyond what is necessary to provide care, and it is without dispute that the practice of using personal devices to photograph patients violates this expectation. There is a tension between the privacy interests of the patient being photographed and the business needs of the healthcare entity, which reduces costs by not keeping dedicated devices on hand for providers while increasing providers’ access to devices for photographing injuries to document in the medical file or to share with other providers for a consult.

 

First, there are two ways an image could be captured and stored on a provider’s cell phone. The provider could take the image directly, without a secure app, and store it on their phone for purposes of a consult with another provider; or the provider could deceptively take an image under the guise of using a secure app and then hide it from the patient. The latter could easily happen if a provider switched between a secure healthcare app and their own camera app to take a photo, then concealed it from the patient by showing them the last photo in an album rather than the last photo in their camera roll. A provider could even take a screenshot of a sensitive photo displayed in the secure app.

 

In either scenario it would be extremely difficult for the patient to catch the violation of their privacy. Most often these photos do not show faces, making them difficult to identify and trace once they leave the provider’s phone, whether through intentional sharing or through the phone being stolen or hacked. Further, patients are at a disadvantage: they may not know to worry about improper photos being taken, or that sensitive photos may be stored on their provider’s phone and distributed to other persons.


A Balancing Act: The State of Free Speech on Social Media for Public Officials

By: Raaid Bakridi

1. Introduction

Blocking someone on social media often seems inconsequential since it’s a digital medium and people do it every day.[1] However, the U.S. Supreme Court has an alternative view, especially when the person who commits the act is a public official. The Court held that, in some instances, public officials can be liable for First Amendment violations when they block anyone from their social media page. Writing for the majority, Justice Barrett adopted a two-prong test for cases involving public officials and their social media accounts, because distinguishing between on- and off-the-job activity is frequently a “difficult [line] to draw”[3] and a “fact-intensive inquiry.”[4] The distinction, according to Justice Barrett, “turns on substance, not labels.”[5] But this isn’t the first time that the Court has been asked to weigh in on social media cases where public officials block their critics, cases which by nature involve possible First Amendment and public forum concerns.

 

2. Background

Former State Assemblyman Dov Hikind filed a lawsuit against Congresswoman Alexandria Ocasio-Cortez for blocking him on Twitter, now known as X. Hikind claimed that the Congresswoman violated his First Amendment rights by blocking him and other individuals critical of her. The suit raises concerns about politicians’ and public officials’ use of social media and its implications for free speech. Several lower courts have dealt with similar social media blocking issues, each applying a different approach and creating a split in authority among the federal circuit courts. When confronted with the issue of blocking, the Second, Fourth, Fifth, Sixth, Eighth, and Ninth Circuits have all used variations of two tests: a totality of the circumstances approach or an appearance-focused approach.[10]

In 2021, the Supreme Court confronted a similar issue involving the then-sitting President of the United States, Donald Trump. A group of individuals, including the Knight First Amendment Institute, filed a lawsuit against the President,[11] alleging that their First Amendment rights were violated after they were blocked for criticizing his policies. The District Court agreed,[12] and the Second Circuit upheld the decision.[13] President Trump then petitioned the Supreme Court for review.[14] After eleven consecutive conferences on the case, the Court vacated the judgment and sent the case back to the Second Circuit with instructions to dismiss it as moot.[15]

Although no majority opinion was offered, Justice Thomas wrote a detailed concurrence that essentially “highlights the principal legal difficulty that surrounds digital platforms—namely, that applying old doctrines to new digital platforms is rarely straightforward.”[16] Justice Thomas further noted that the case highlights two important facts: “[t]oday’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors … We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”[17] Justice Thomas concluded that the Trump case was not the right one to do so,[18] but that the Court will have to address constitutional constraints on privately owned digital platforms sooner or later.


Privacy in Virtual and Augmented Reality

Devin Forbush, Christopher Guay, & Maggie Shields

A. Introduction

In this paper, we set out the basics of Augmented and Virtual Reality.  First, we discuss how the technology works and how data is collected.  Second, we analyze the privacy issues that arise, commenting specifically on the gravity of privacy concerns not contemplated by current laws, given the velocity and volume of data this technology collects.  Third, the final section of this paper analyzes how to mitigate these privacy concerns and what regulation of this technology would ideally look like.  Over the past decade, the advent of augmented reality (AR), mixed reality (MR), and virtual reality (VR) has ushered in a new era of human-computer interactivity.  Although the functions of each reality platform vary, the “umbrella term” XR will be used throughout to address concerns covering all areas of these emerging technologies.[1]  The gaming community might have initially popularized XR, but broad industries and economic sectors now seek to deploy the new technologies in a variety of contexts: education, healthcare, the workplace, and even fitness.[2]

B. Augmented and Virtual Reality Background

Augmented Reality is “an interface that layers digital content on a user’s visual plane.”[3]  It works by overlaying certain images and objects within the user’s current environment.[4]  This digital layering superimposes images and objects onto the user’s real-world environment.[5]  Software developers create AR smartphone applications or products to be worn by users, such as headsets or AR glasses.[6]  In contrast, Virtual Reality seeks to immerse users within an “interactive virtual environment.”[7]  VR transports the user into a completely new digital environment, or reality, where users can interact with, move within, and behave as they would in the real world.[8]  To enter VR, a user wears a head-mounted device (HMD) which displays a “three-dimensional computer-generated environment.”[9]  Within that environment, the HMD uses a variety of sensors, cameras, and controls to track a user’s input and provide corresponding sights, sounds, and haptic responses.[10]  Mixed reality offers a combination of virtual reality and augmented reality.[11]  In function, mixed reality creates virtual objects superimposed on the real world that behave as if they were real objects.[12]


Blackstone’s Acquisition of Ancestry.com

By Zion Mercado

Blackstone is one of the largest investment firms in the world, boasting over $1 trillion in assets under management.[1] In December of 2020, Blackstone acquired Ancestry.com for a total enterprise value of $4.7 billion.[2] Ancestry is a genealogy service that compiles and stores DNA samples from customers and compares them to the DNA samples of individuals whose lineage can be traced back generations to certain parts of the world.[3] Within Ancestry’s privacy statement, Section 7 states that if Ancestry is acquired or transferred, it may share the personal information of its subscribers with the acquiring entity.[4] This provision was brought into controversy in Bridges v. Blackstone by a pair of plaintiffs representing a putative class consisting of anyone who had their DNA and personal information tested and compiled by Ancestry while residing in the State of Illinois.[5] The suit was brought under the Illinois Genetic Information Privacy Act (“GIPA”), which bars a person or company from “disclos[ing] the identity of any person upon whom a genetic test is performed or the results of a genetic test in a manner that permits identification of the subject of the test” without that person’s permission.[6] In addition to barring disclosure, GIPA may also bar compelled third-party disclosure,[7] which would create a cause of action under the act against third parties who compel an entity to disclose genetic information such as the information compiled by Ancestry. In Bridges, it is clear from the opinion that there was virtually no evidence that Blackstone in any way compelled Ancestry to disclose genetic information.[8] However, the language of the statute is unclear as to whether third parties who compel a holder of an individual’s genetic information to disclose it can be held liable under GIPA.
What does seem to be clear from the Seventh Circuit’s reading of the statute is that when an entity acquires another entity that holds sensitive personal information or genetic data, the mere acquisition itself is not proof of compelling disclosure within the meaning of the act.[9]

The exact language of GIPA that pertains to potential third-party liability states that “[n]o person may disclose or be compelled to disclose [genetic information].”[10] In Bridges, Blackstone contended that the recipient of protected information could not be held liable under GIPA even if it compelled disclosure.[11] The plaintiffs, in their complaint, could not cite any conduct on the part of Blackstone that would satisfy federal pleading standards for stating a claim that Blackstone compelled Ancestry to disclose information covered under GIPA.[12] This led the judge to set aside the broader issue raised by Blackstone’s argument that an entity who receives genetic information cannot be held liable even if it compels disclosure of such information.[13] This issue is, in essence, one of statutory interpretation. Blackstone would have courts interpret the language reading “no person may . . . be compelled to disclose” as granting a cause of action only against a defendant who discloses genetic information “because they were ‘compelled’ to do so.”[14] However, such an instance is already covered by the first part of the phrase: “no person may disclose.”[15] Notably, the Bridges court did not address Blackstone’s interpretation of the statute since the claim failed on the merits; the judge writing the opinion did, however, cite a lack of precedent on the matter.[16] I believe that the Illinois legislature did not intend to write a redundancy into the statute, and a more protective reading would extend liability to a third party who compels disclosure of genetic information.
The very meaning of the word “compel” is “to drive or urge forcefully or irresistibly” or “to cause to do or occur by overwhelming pressure.”[17] This is an act that we as people (and hopefully state legislators as well) would presumably want to limit, especially when what is being compelled is the disclosure of sensitive information, such as the results of a genetic test and the personal information that necessarily accompanies the test. Again, the plaintiffs’ complaint proffered no evidence indicating that Blackstone in any way compelled disclosure of genetic information from Ancestry.[18] However, if a case were to arise in which such an occurrence did happen, we should hope that courts do not side with Blackstone’s interpretation. I agree that merely acquiring an entity that holds genetic or other sensitive information should not give rise to liability, and that a mere recipient of such information should not be held liable when it does not compel the holder’s disclosure. But an entity, especially an acquiring entity, should not be shielded from liability when it seeks to pressure another entity into disclosing the personal information of individuals who have not consented to such disclosure.

[1] Blackstone’s Second Quarter 2023 Supplemental Financial Data, Blackstone (Jul. 20, 2023), at 16, https://s23.q4cdn.com/714267708/files/doc_financials/2023/q2/Blackstone2Q23 SupplementalFinancialData.pdf.

[2] Blackstone Completes Acquisition of Ancestry, Leading Online Family History Business, for $4.7 Billion, Blackstone (Dec. 4, 2020), https://www.blackstone.com/news/press/blackstone-completes-acquisition-of-ancestry-leading-online-family-history-business-for-4-7-billion/.

[3] Frequently Asked Questions, Ancestry.com, https://www.ancestry.com/c/dna/ancestry-dna-ethnicity-estimate-update?o_iid=110004&o_lid=110004&o_sch=Web+Property&_gl=1*ot1obs*_up*MQ..&gclid=5aadd61f 926315a4ec29b2e4c0d617e8&gclsrc=3p.ds#accordion-ev4Faq (last visited Sep. 8, 2023).

[4] Privacy Statement, Ancestry.com (Jan. 26, 2023), https://www.ancestry.com/c/legal/privacystatement.

[5] Amended Class Action Complaint at 8, Bridges v. Blackstone, No. 21-cv-1091-DWD (S.D. Ill. July 8, 2022), 2022 WL 2643968.

[6] Ill. Comp. Stat. Ann. 410/30 (LexisNexis 2022).

[7] Id.

[8] See Bridges, 66 F.4th at 689-90.

[9] Id. (“we cannot plausibly infer that a run-of-the-mill corporate acquisition, without more alleged about that transaction, results in a compulsory disclosure”).

[10] 410/30 (LexisNexis 2022).

[11] Bridges, 66 F.4th at 689.

[12] Id. at 690.

[13] Id. at 689.

[14] Brief of the Defendant-Appellee at 41, Bridges v. Blackstone, 66 F.4th 687 (7th Cir. 2023) (No. 22-2486).

[15] 410/30 (LexisNexis 2022).

[16] Bridges, 66 F.4th at 689 (Scudder, C.J.) (explaining that “[t]he dearth of Illinois precedent examining GIPA makes this inquiry all the more challenging”).

[17] Compel, Merriam-Webster.com, https://www.merriam-webster.com/dictionary/compel (last visited Sep. 9, 2023).

[18] See supra note 11, at 690.

Adding Insult to Injury: How Article III Standing Minimizes Privacy Harms to Victims and Undermines Legislative Authority

By Kristin Hebert, Nicole Onderdonk, Mark A. Sayre, and Deirdre Sullivan

ABSTRACT

Victims of data breaches and other privacy harms have frequently encountered significant challenges when attempting to pursue relief in the federal courts. Under Article III standing doctrine, plaintiffs must be able to show a concrete and imminent risk of injury. This standard has proved especially challenging for victims of privacy harms, for whom the harm may be difficult to define or may not yet have occurred (for example, in the case of a data breach where the stolen data has not yet been used). The Supreme Court’s recent decision in TransUnion appears on its face to erect an even higher barrier for victims of privacy harms seeking relief. In this article, the authors provide a background on Article III standing doctrine and its applicability to cases involving privacy harms. Next, the recent TransUnion decision is discussed in detail, along with an overview of the evidence that TransUnion has failed to resolve the ongoing circuit splits in this area. Finally, the authors propose a test from the Second Circuit as a standard that may resolve the ongoing split and support increased access to the courts for victims of privacy harms.

 


 

Implications of New School Surveillance Methods on Student Data Privacy, National Security, Electronic Surveillance, and the Fourth Amendment

By Amanda Peskin, University of Maryland, Francis King Carey School of Law, Class of 2024

Abstract

Since the Covid-19 pandemic, schools have escalated their use of educational technology to improve students’ in-school and at-home learning. Although educational technology has many educational benefits for students, it has serious implications for students’ data privacy rights. Not only does using technology for educational practices allow schools to surveil their students, but it also exposes students to data collection by the educational technology companies. This paper discusses the legal background of surveilling and monitoring student activity, examines the implications surveillance has for technology, equity, and self-expression, and offers several policy-based improvements to better protect students’ data privacy.


“You Have the Right to Remain Silent(?)”: An Analysis of Courts’ Inconsistent Treatment of the Various Means to Unlock Phones in Relation to the Right Against Self-Incrimination

By Thomas E. DeMarco, University of Maryland Francis King Carey School of Law, Class of 2023[*]

Riley and Carpenter are the most recent examples of the Supreme Court confronting the new challenges technology presents to its existing doctrines surrounding privacy issues. But while the majority of decisions focus on Fourth Amendment concerns regarding unreasonable searches, far less attention has been given to Fifth Amendment concerns. Specifically, how do the Fifth Amendment’s protections against self-incrimination translate to a suspect’s right to refuse to unlock their device for law enforcement to search and collect evidence from? Additionally, how do courts distinguish between the various means of unlocking devices, from passcodes to facial scans?


Digitizing the Fourth Amendment: Privacy in the Age of Big Data Policing

Written by Charles E. Volkwein

ABSTRACT

Today’s availability of massive data sets, inexpensive data storage, and sophisticated analytical software has transformed the capabilities of law enforcement and created new forms of “Big Data Policing.” While Big Data Policing may improve the administration of public safety, these methods endanger constitutional protections against warrantless searches and seizures. This Article explores the Fourth Amendment consequences of Big Data Policing in three parts. First, it provides an overview of Fourth Amendment jurisprudence and its evolution in light of new policing technologies. Next, the Article reviews the concept of “Big Data” and examines three forms of Big Data Policing: Predictive Policing Technology (PPT); data collected by third-parties and purchased by law enforcement; and geofence warrants. Finally, the Article concludes with proposed solutions to rebalance the protections afforded by the Fourth Amendment against these new forms of policing.


Say “Bonjour” to New Blanket Privacy Regulations?

The FTC Considers Tightening the Leash on the Commercial Data Free-for-All and Loose Data Security Practices in an Effort to Advance Toward a Framework More Akin to the GDPR

By Hannah Grace Babinski, class of 2024

On August 11, 2022, the Federal Trade Commission (FTC) issued an Advance Notice of Proposed Rulemaking (ANPR) concerning possible rulemaking surrounding “commercial surveillance” and “lax data security practices”[1] and established a public forum date of September 8, 2022.[2] The FTC’s specific objective for issuing this ANPR is to obtain public input concerning “whether [the FTC] should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.”[3]
