House Rules: Addressing Algorithmic Discrimination in Housing through State-Level Rulemaking

William Simpson

 

Introduction

As is the case for many federal agencies,[1] the Department of Housing and Urban Development (HUD) is intent on addressing the risk of algorithmic discrimination within its primary statutory domain—housing. But in the wake of Loper Bright,[2] which overturned Chevron[3] deference and, with it, the general acquiescence of federal courts to agency interpretations of relevant statutes, HUD is forced to regulate AI and algorithmic decision-making in the housing context through guidance documents and other soft law mechanisms.[4] Such quasi-regulation impairs the efficacy of civil rights laws like the Fair Housing Act[5] (FHA) and subjects marginalized groups to continued, and perhaps increasingly insidious,[6] discrimination. With HUD hampered in its ability to effectuate meaningful AI regulation, states like Maine—which remains a Chevron state—must step up within their respective jurisdictions to ensure that algorithmic discrimination is mitigated in the housing sector.

 

A Brief Primer on Chevron and Loper Bright

In 1984, the Supreme Court held that where a “statute is silent or ambiguous with respect to a specific issue . . . a [federal] court may not substitute its own construction of [the statute] for a reasonable interpretation made by the administrator of an agency.”[7] In other words, where an agency interpretation of an ambiguous statute is reasonable, a court must defer to the agency. Proponents of Chevron deference have heralded the opinion for its placement of policy decisions in the hands of expert and politically accountable agencies,[8] whereas detractors deemed it a violation of the separation of powers doctrine.[9] In June 2024, the detractors won out.

“Chevron is overruled,” wrote Chief Justice John Roberts.[10] To wit, “courts need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”[11] Roberts rested his opinion on the separation of powers principle,[12] a textualist construction of § 706 of the Administrative Procedure Act,[13] a historical analysis,[14] the continued availability of Skidmore deference,[15] and the fact that Chevron had been subject to numerous “refinements” over the years.[16]

It goes without saying that this jurisprudential U-turn has profound implications for HUD and the statutes it implements.[17] As a result of Chevron’s demise, “any rulemaking proposed by HUD . . . may be more vulnerable to lawsuits than in years past.”[18] In particular, HUD relies on the FHA to authorize its policies, which “broadly describes . . . prohibited discriminatory conduct,” and which HUD interprets “into enforceable directives to serve Congress’ stated goals.”[19] Without Chevron deference, HUD’s interpretations of the FHA are certain to be questioned, and significant barriers for Americans facing housing discrimination will arise.[20]

 

HUD’s Effort to Combat Algorithmic Discrimination in a Post-Chevron Paradigm

In apparent anticipation of such challenges to its interpretations, HUD has resorted to soft law mechanisms like guidance documents to combat algorithmic discrimination. Importantly, these informal mechanisms do not carry the force of law, and are therefore outside the scope of Chevron deference and unaffected by the Loper Bright decision.[21] Such documents include HUD’s “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,”[22] and “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms.”[23] The former pronouncement examines how housing providers and tenant screening services can evaluate rental applicants in a nondiscriminatory way—including by choosing relevant screening criteria, using accurate records, remaining transparent with applicants and allowing them to challenge decisions, and designing screening models for FHA compliance.[24] Of note, the document confirms that the FHA “applies to housing decisions regardless of what technology is used” and that “[b]oth housing providers and tenant screening companies have a responsibility to avoid using these technologies in a discriminatory manner.”[25]

The latter document, in turn, “addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence . . . to facilitate advertisement targeting and delivery” vis-à-vis housing-related transactions.[26] Like tenant screening services, algorithmic targeting and delivery of advertisements “risks violating the [FHA] when used for housing-related ads,” and can implicate both advertisers and ad platforms.[27] For example, liability may arise from using algorithmic tools to “segment and select potential audiences by [protected] category,” “deliver ads only to a specified ‘custom’ audience,” or “decide which ads are actually delivered to which consumers, and at what location, time, and price.”[28] The document recommends that advertisers use ad platforms that proactively mitigate discriminatory practices and that they “monitor outcomes of ad[] campaigns for housing-related ads.”
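To make the recommendation to “monitor outcomes” more concrete, the sketch below shows one way an advertiser or platform might audit ad-delivery data for demographic skew. It is only an illustration under assumed inputs, not HUD’s methodology: the data format, the group labels, and the 0.8 disparity threshold (borrowed from the familiar four-fifths rule of thumb in disparate-impact analysis) are assumptions made for the example.

```python
# Illustrative only: a minimal skew check on hypothetical ad-delivery audit data.
# Group labels, data format, and the 0.8 threshold are assumptions, not HUD requirements.
from collections import defaultdict

def delivery_rates(impressions):
    """Share of each group's eligible audience actually shown the ad.

    impressions: iterable of (group, was_shown) pairs, one per eligible user.
    """
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_shown in impressions:
        total[group] += 1
        shown[group] += int(was_shown)
    return {g: shown[g] / total[g] for g in total}

def adverse_impact_ratios(rates):
    """Compare each group's delivery rate to the highest-rate group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical audit data: (demographic group, ad delivered?)
    data = ([("group_a", True)] * 80 + [("group_a", False)] * 20
            + [("group_b", True)] * 55 + [("group_b", False)] * 45)
    rates = delivery_rates(data)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"  # 0.8: four-fifths rule of thumb
        print(f"{group}: delivery rate {rates[group]:.0%}, ratio {ratio:.2f} -> {flag}")
```

In this hypothetical run, group_b receives the ad 55% of the time versus 80% for group_a, a ratio of roughly 0.69; a result like that would flag the campaign for further review, not establish a violation.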

Indeed, “[w]hile the guidance represents an important step forward in safeguarding housing rights, it isn’t currently more than a suggestion to housing providers.”[29] Hence the dilemma facing regulators in this post-Chevron paradigm: issue a formal rule that will provide the intended protection but is prone to litigation, or deliver informal pronouncements that remain largely immune to challenge but fail to offer enforceable requirements against harmful practices.[30] As this administrative predicament persists, it is state governments, including Maine, that must fill the resulting void.

Section 230 and Radicalization Scapegoating

By Hannah G. Babinski, Class of 2024

Standing as one of the few provisions of the Communications Decency Act of 1996 yet to be invalidated by the Court as unconstitutional, 47 U.S.C. § 230 (“Section 230”) has repeatedly been at the center of controversy since its enactment. As the modern world continues to become further dependent on online, electronic communication, such controversy is likely to only grow. Section 230 insulates interactive computer services—think social media websites, chat boards, and any other website that enables a third-party user to upload a post, text, video, or other medium of expression—from liability stemming from content uploaded to the website by third-party users, even where interactive computer services engage in good-faith content moderation. In this regard, the provision effectively serves to classify the third parties, and not the host website, as the speakers or publishers of content.

Though Section 230 has been instrumental in the development of the internet at large, preventing needless and substantial litigation and establishing a sense of accountability for individual users in tort generally, its limited language has produced several interpretive questions concerning the line between which actions, specifically content moderation, constitute speech on behalf of the interactive computer service provider and which do not. Over the course of the last five years, courts have examined in particular whether algorithms created by and incorporated into host websites are speech and, thus, unprotected by Section 230.

In Force v. Facebook, Inc., the Court of Appeals for the Second Circuit addressed the question of algorithms as speech in the context of a Facebook algorithm that directed radicalized content and other pages openly maintained by and associated with Hamas, a Palestinian radical Islamist terrorist organization, to the personalized newsfeeds of several individuals, who then went on to attack five Americans in Israel between 2014 and 2016.[1]

Though the majority opinion ultimately concluded that the algorithm was protected by Section 230 immunity, Chief Judge Katzmann dissented with a well-written and thorough argument against applying Section 230 immunity to such a case. While I reserve judgment on whether I agree or disagree with the dissent in Force v. Facebook, Inc., Katzmann articulates the key concern with Section 230 as it applies to social media as a whole, stating:

By surfacing ideas that were previously deemed too radical to take seriously, social media mainstreams them, which studies show makes people “much more open” to those concepts. . . . The sites are not entirely to blame, of course—they would not have such success without humans willing to generate and to view extreme content. Providers are also tweaking the algorithms to reduce their pull toward hate speech and other inflammatory material. . . . Yet the dangers of social media, in its current form, are palpable.[2]

This statement goes to the heart of the controversy surrounding not only algorithms but also exposure to harmful or radicalizing content on the internet generally, a problem exacerbated by the advent and use of social media platforms. The expansive and uninhibited nature of the internet ecosystem, and of social media websites in particular, enables and even facilitates connections between individuals with a proclivity for indoctrination and those disseminating radicalized content, absent the traditional restrictions of time, language, or national borders; it is only natural that greater radicalization has resulted. Does this mean that we, as a society, should hinder communication in order to prevent radicalization?

Proponents of dismantling Section 230 and casting the onus on interactive computer service providers to engage in more rigorous substantive moderation efforts would answer that question in the affirmative. However, rather than waging war on the proverbial middleman and laying blame on communication outlets, we should concentrate our efforts on the question, acknowledged by Katzmann, of why humans seem more willing to generate and consume extremist content in the modern age. We, as a society, should take responsibility for the increase in radicalized content and for the vulnerabilities that leave individuals more susceptible to radicalization, tackling what inspires the speaker rather than the tool of speech.

According to findings of the Central Intelligence Agency (“CIA”), affirmed by the Federal Bureau of Investigation (“FBI”), certain vulnerabilities are almost always present in any violent extremist, regardless of ideology or affiliation; these vulnerabilities include “feeling alone or lacking meaning and purpose in life, being emotionally upset after a stressful event, disagreeing with government policy, not feeling valued or appreciated by society, believing they have limited chances to succeed, [and] feeling hatred toward certain types of people.”[3] As these vulnerabilities are perpetuated by repeated societal failures, the number of susceptible individuals will continue to climb.

What’s more, these predispositions are not novel to the age of social media. Undoubtedly, throughout history, we have seen the proliferation of dangerous cults and ideological organizations that radicalize traditional beliefs, targeting the dejected and the isolated in society. For example, political organizations like the National Socialist German Workers’ Party, more infamously known as the Nazi Party; Christianity-based cults and hate organizations like the Peoples Temple, Children of God, Branch Davidians, and the Ku Klux Klan; and Buddhist-inspired terrorism groups like Aum Shinrikyo have four things in common: 1) they radicalized impressionable individuals, many of whom experienced some of the vulnerabilities cited above, 2) they brought abuse/harm/death to members, 3) they facilitated and encouraged abuse/harm/death to nonmembers, and 4) they reached popularity and obtained their initial members without the help of algorithmic recommendations and social media exposure.

The point is that social media is not to blame for radicalization. Facebook and YouTube’s code-based algorithms that serve to connect individuals with similar interests on social networking sites or organize content based on individualized past video consumption are not to blame for terrorism. We are.

[1] Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

[2] Id.

[3] Cathy Cassasta, Why Do People Become Extremists?, Healthline (updated Sept. 18, 2017), https://www.healthline.com/health-news/why-do-people-become-extremists (last visited Feb. 26, 2023).

 

Life’s Not Fair. Is Life Insurance?

The rapid adoption of artificial intelligence techniques by life insurers poses increased risks of discrimination, and yet, regulators are responding with a potentially unworkable state-by-state patchwork of regulations. Could professional standards provide a faster mechanism for a nationally uniform solution?

By Mark A. Sayre, Class of 2024

Introduction

Among the broad categories of insurance offered in the United States, individual life insurance is unique in a few key respects that make it an attractive candidate for the adoption of artificial intelligence (AI).[1] First, individual life insurance is a voluntary product, meaning that individuals are not required by law to purchase it in any scenario.[2] As a result, to attract policyholders, life insurers must convince customers not only to choose their company over other companies but also to choose their product over other products that might compete for a share of discretionary income (such as the newest gadget or a family vacation). Life insurers can, and do, argue that these competitive pressures provide natural constraints on the industry’s use of practices that the public might view as burdensome, unfair, or unethical, and that such constraints reduce the need for heavy-handed regulation.[3]
