House Rules: Addressing Algorithmic Discrimination in Housing through State-Level Rulemaking
William Simpson
Introduction
As is the case for many federal agencies,[1] the Department of Housing and Urban Development (HUD) is intent on addressing the risk of algorithmic discrimination within its primary statutory domain—housing. But in the wake of Loper Bright,[2] which overturned Chevron[3] deference, and with it the general acquiescence of federal courts to agency interpretations of relevant statutes, HUD is forced to regulate AI and algorithmic decision-making in the housing context through guidance documents and other soft law mechanisms.[4] Such quasi-regulation impairs the efficacy of civil rights laws like the Fair Housing Act[5] (FHA) and subjects marginalized groups to continued, and perhaps increasingly insidious,[6] discrimination. With HUD thus hamstrung in effectuating meaningful AI regulation, states like Maine—which remains a Chevron state—must step up within their respective jurisdictions to ensure that algorithmic discrimination is mitigated in the housing sector.
A Brief Primer on Chevron and Loper Bright
In 1984, the Supreme Court held that where a “statute is silent or ambiguous with respect to a specific issue . . . a [federal] court may not substitute its own construction of [the statute] for a reasonable interpretation made by the administrator of an agency.”[7] In other words, where an agency interpretation of an ambiguous statute is reasonable, a court must defer to the agency. Proponents of Chevron deference heralded the opinion for placing policy decisions in the hands of expert and politically accountable agencies,[8] whereas detractors deemed it a violation of the separation of powers doctrine.[9] In June 2024, the detractors won out.
“Chevron is overruled,” wrote Chief Justice John Roberts.[10] To wit, “courts need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”[11] Roberts rested his opinion on the separation of powers principle,[12] a textualist construction of § 706 of the Administrative Procedure Act,[13] a historical analysis,[14] the continued availability of Skidmore deference,[15] and the fact that Chevron was subject to numerous “refinements” over the years.[16]
This jurisprudential U-turn has profound implications for HUD and the statutes it implements.[17] As a result of Chevron’s demise, “any rulemaking proposed by HUD . . . may be more vulnerable to lawsuits than in years past.”[18] Specifically, HUD relies on the FHA to authorize its policies, which “broadly describes . . . prohibited discriminatory conduct,” and which HUD interprets “into enforceable directives to serve Congress’ stated goals.”[19] Without Chevron deference, HUD’s interpretations of the FHA are certain to be questioned, and significant barriers for Americans facing housing discrimination will arise.[20]
HUD’s Effort to Combat Algorithmic Discrimination in a Post-Chevron Paradigm
In apparent anticipation of such challenges to its interpretations, HUD has resorted to soft law mechanisms like guidance documents to combat algorithmic discrimination. Importantly, these informal mechanisms do not carry the force of law, and are therefore outside the scope of Chevron deference and unaffected by the Loper Bright decision.[21] Such documents include HUD’s “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,”[22] and “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms.”[23] The former pronouncement examines how housing providers and tenant screening services can evaluate rental applicants in a nondiscriminatory way—including by choosing relevant screening criteria, using accurate records, remaining transparent with applicants and allowing them to challenge decisions, and designing screening models for FHA compliance.[24] Of note, the document confirms that the FHA “applies to housing decisions regardless of what technology is used” and that “[b]oth housing providers and tenant screening companies have a responsibility to avoid using these technologies in a discriminatory manner.”[25]
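To make the screening guidance concrete, consider a minimal illustration, in Python and with entirely hypothetical data, of how a tenant screening service might audit its own approval outcomes for disparate impact. The four-fifths (0.8) benchmark used below is borrowed from EEOC employment-discrimination practice as a rough heuristic only; HUD’s guidance does not prescribe this or any particular statistical test, and the function names and figures are the author’s assumptions.

```python
# Illustrative sketch only: a disparate-impact spot check a tenant
# screening service might run on its own approval outcomes. The
# four-fifths (0.8) benchmark is borrowed from EEOC employment
# guidance as a heuristic; HUD's guidance does not prescribe this
# test, and all data below are hypothetical.

from collections import Counter

def approval_rates(outcomes):
    """outcomes: iterable of (group, approved) pairs -> {group: rate}."""
    totals, approvals = Counter(), Counter()
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's approval rate to the highest-rate group."""
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (demographic group, approved?)
    outcomes = ([("A", True)] * 80 + [("A", False)] * 20
                + [("B", True)] * 55 + [("B", False)] * 45)
    rates = approval_rates(outcomes)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} ({flag})")
```

A check of this kind would flag group B (approval ratio 0.69 against the reference group) for further review—precisely the sort of outcome monitoring and model-design diligence the guidance encourages, though the legal analysis under the FHA would of course not end there.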
The latter document, in turn, “addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence . . . to facilitate advertisement targeting and delivery” vis-à-vis housing-related transactions.[26] Like algorithmic tenant screening, the algorithmic targeting and delivery of advertisements “risks violating the [FHA] when used for housing-related ads,” and can implicate both advertisers and ad platforms.[27] For example, liability may arise by using algorithmic tools to “segment and select potential audiences by [protected] category,” “deliver ads only to a specified ‘custom’ audience,” or “decide which ads are actually delivered to which consumers, and at what location, time, and price.”[28] The document recommends that advertisers use ad platforms that proactively mitigate discriminatory practices and that they “monitor outcomes of ad[] campaigns for housing-related ads.”
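As a parallel illustration of that monitoring recommendation, an advertiser might compare who actually received a housing ad against the demographic makeup of the eligible audience. The sketch below is purely hypothetical: the 20% tolerance, group labels, and shares are the author’s assumptions, not standards drawn from HUD’s guidance.

```python
# Illustrative sketch only: how an advertiser might monitor whether a
# housing ad's delivered impressions skew away from the eligible
# audience's demographic makeup. The 20% tolerance and all figures
# are hypothetical assumptions, not drawn from HUD's guidance.

def delivery_skew(audience_share, impression_share):
    """Ratio of each group's impression share to its audience share.
    A ratio well below 1.0 suggests the delivery algorithm is
    under-serving that group and the campaign warrants review."""
    return {g: impression_share[g] / audience_share[g] for g in audience_share}

if __name__ == "__main__":
    # Hypothetical: share of the eligible audience vs. share of
    # impressions actually delivered, by demographic group.
    audience_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
    impression_share = {"group_a": 0.62, "group_b": 0.28, "group_c": 0.10}
    for group, ratio in delivery_skew(audience_share, impression_share).items():
        status = "review" if abs(1 - ratio) > 0.20 else "ok"
        print(f"{group}: delivery/audience ratio {ratio:.2f} ({status})")
```

Here, group_c receives only half its expected share of impressions, the kind of delivery skew that, under the guidance, should prompt an advertiser to reassess the platform or campaign settings.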
Indeed, “[w]hile the guidance represents an important step forward in safeguarding housing rights, it isn’t currently more than a suggestion to housing providers.”[29] Hence the dilemma facing regulators in this post-Chevron paradigm: issue a formal rule that will provide the intended protection but is prone to litigation, or deliver informal pronouncements that remain largely immune to challenge but fail to offer enforceable requirements against harmful practices.[30] As this administrative predicament persists, it is the states, Maine among them, that must fill the resulting void.