A.I., Facial Recognition, and the New Frontier of Housing Inequality

By: Caroline Aiello

 

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident who has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy by forcibly installing advanced surveillance systems in her neighborhood, denying her access to information about the systems, and jailing her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To meet their security obligations at these housing complexes, resource-strapped landlords are adopting “landlord tech.”[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and internationally evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

 

Public Housing Residents and Their Reasonable Expectation of Privacy

In 1937, President Franklin D. Roosevelt signed the U.S. Housing Act into law as part of the New Deal.[7] The Act responded to the dire housing conditions and economic insecurity that plagued Americans during the Great Depression.[8] It created public housing agencies (PHAs) responsible for creating and managing local public housing complexes.[9] Currently, around 1.5 million people live in public housing.[10] Although public housing was originally occupied primarily by white, median-income households, most units are now occupied by low-income households and people of color.[11]

Security is a compelling reason to monitor public housing. PHAs must maintain safety and crime prevention plans for inspection by the Department of Housing and Urban Development,[12] and those plans must be carried out in collaboration with local police departments.[13] Yet despite facing disproportionate levels of surveillance, public housing residents are not afforded the same privacy rights as private residents.

Where privacy expectations and technology intersect, the Supreme Court has limited the legality of long-term, highly detailed monitoring.[14] The Court found that the public does not expect police to “secretly monitor and catalogue every single movement of an individual[],” making such a practice violative of an individual’s Fourth Amendment rights.[15] In another case, the Fifth Circuit held that continuous, targeted video surveillance of the area immediately surrounding a person’s home implicates a reasonable expectation of privacy in a way that isolated observations do not.[16]

The first kind of widely accessible surveillance tech consisted mostly of CCTV cameras.[17] These cameras were expensive, conspicuous, and limited in their capacity to record and store video.[18] Cameras now are capable of far more. They identify and label individuals,[19] decide whether someone’s behavior is “normal,”[20] and alert landlords and security personnel immediately when they detect something suspicious.[21] In private housing, constitutional protections place guardrails around surveillance, but in public housing, residents’ everyday behavior is scrutinized by programs and algorithms that capture and report what is going on around them.

 

How Over-Surveilling Disproportionately Harms Subsets of the Public Housing Population

In her 2023 book, Your Face Belongs to Us, Kashmir Hill popularized a dystopian account of Clearview AI, a facial recognition tool used by law enforcement agencies and private companies to discover everything about someone from just a snapshot of their face.[23] The analysis relies on a database of billions of images and “faceprints” collected from the internet without the subjects’ knowledge or consent.[24] European data protection authorities fined Clearview AI tens of millions of euros for its nonconsensual collection and abuse of European citizens’ data.[25] The ACLU sued the company on the same grounds.[26] The Circuit Court of Cook County subsequently banned Clearview’s practice of “covert” and “surreptitious” monitoring of Illinois residents for five years.[27] Additionally, no law enforcement agency in Illinois is allowed to use Clearview because of the harms the system created.[28]

From 2012 to 2020, Rite Aid deployed facial recognition technology (FRT) in many of its locations to identify potential shoplifters.[29] In 2023, the Federal Trade Commission concluded an enforcement action against the pharmacy for misuse of the technology. Now Rite Aid, too, is banned from using its FRT systems for five years.[30] The FTC cited Rite Aid’s “reckless use of facial surveillance systems” that caused “humiliation . . . other harms, and . . . put consumers’ sensitive information at risk.”[31]

Technologies like Clearview AI and Rite Aid’s surveillance systems are cheaper and more accessible than ever before and have made their way into the landlord technology industry. “Landlord tech” refers to “technical products and platforms that have facilitated the merging of technology and real estate industries in novel ways.”[32] These technologies, like facial recognition and behavioral tracking, exist to monitor residents and to track and categorize their everyday behavior.

Artificial intelligence systems rely on enormous sets of data to learn how to make decisions. When incomplete, incorrect, or misrepresentative data is used to train a decision maker, it learns to make bad decisions. In the Rite Aid case, for example, the company’s systems were trained on low-quality, biased data, causing thousands of false positives.[33] Most of the time, the false positives were generated in predominantly Black and Asian communities, disproportionately harming those communities.[34]
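As a rough, hypothetical illustration of this dynamic, the Python sketch below (not modeled on Rite Aid’s or any vendor’s actual system) shows how a match threshold calibrated on clean, well-represented training data can generate far more false positives for a group whose images are noisier or underrepresented. The scores, noise levels, and threshold are invented solely for illustration.

```python
# Hypothetical sketch: a face-match threshold tuned on one group's data
# over-flags a group whose images are noisier / underrepresented in training.
import random

random.seed(0)

def match_score(is_same_person: bool, noisy_images: bool) -> float:
    """Simulated similarity score from a face-matching model."""
    base = 0.8 if is_same_person else 0.3
    noise = random.gauss(0, 0.25 if noisy_images else 0.10)
    return base + noise

def false_positive_rate(noisy_images: bool, threshold: float, trials: int = 10_000) -> float:
    """Fraction of non-matching faces wrongly flagged as matches."""
    hits = sum(
        match_score(is_same_person=False, noisy_images=noisy_images) > threshold
        for _ in range(trials)
    )
    return hits / trials

# Threshold calibrated only on the well-represented group (clean images).
threshold = 0.55

print("False-positive rate, well-represented group:",
      false_positive_rate(noisy_images=False, threshold=threshold))
print("False-positive rate, under-represented group:",
      false_positive_rate(noisy_images=True, threshold=threshold))
```

Running the sketch shows the under-represented group flagged at many times the rate of the well-represented group, even though neither group actually contains more “matches.”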

The FTC is not the only entity that recognizes the risk created by these systems. The Colorado AI Act created a task force to investigate and understand the dangers of FRT and AI.[35] San Francisco banned the use of FRT, with several other cities following suit.[36] In the European Union, FRT is considered “high-risk,” requiring operators to test rigorously for bias and accuracy.[37]

In many more cities, however, public housing residents are left unprotected from constant, biased, and automated surveillance. The Washington, D.C. housing complex at the center of Ms. Pondexter-Moore’s suit operates eighty surveillance cameras.[38] In New York, the state with the highest number of public housing residents, there is approximately one camera installed for every nineteen residents, a higher camera-to-person ratio than the Louvre, Wrigley Field, or Los Angeles Airport.[39] In a particularly concerning case, a housing complex in Rolette, North Dakota has one camera installed for every resident, just shy of the number of cameras per capita used at Rikers Island.[40]

The proliferation of surveillance technology in public housing represents a concerning shift in how we monitor and control vulnerable populations. While security concerns in public housing are legitimate, the current implementation of “landlord tech” creates a two-tiered system of privacy rights that disproportionately affects low-income residents and people of color. The cases of Clearview AI and Rite Aid demonstrate how facial recognition and A.I. systems, when improperly deployed, can create more harm than protection. These harms are amplified in public housing contexts, where residents have limited ability to opt out of or challenge such systems. We must critically examine whether these tools truly serve their stated security purposes or merely extend systems of control over already marginalized communities. The experiences of residents like Ms. Pondexter-Moore highlight the urgent need for balanced approaches that respect both security needs and fundamental privacy rights.

 

References

[1] Complaint at 1, Pondexter-Moore v. D.C. Hous. Auth., No. 1:22-cv-03706 (D.D.C. Dec. 12, 2022).

[2] Id. at 2.

[3] Id. at 1.

[4] U.S. Dep’t of Hous. & Urb. Dev., Public Housing (PH) Data Dashboard, https://www.hud.gov/program_offices/public_indian_housing/programs/ph/PH_Dashboard (last visited Feb. 14, 2025).

[5] Alicia Frazier, 20 Stats Why Tenant Experience Tech is Top Priority for CRI Now, BuildingEngines (July 7, 2022), https://www.buildingengines.com/blog/commercial-real-estate-tenant-experience-stats/.

[6] Sarah Miller, Reconceptualizing Public Housing: Not as a Policed Site of Control, but as a System of Support, 28 Geo. J. on Poverty L. & Policy 95, 110 (2020).

[7] FDR Library, FDR & Housing Legislation, Nat’l Archives, https://www.fdrlibrary.org/housing (last visited Feb. 12, 2025).

[8] Id.

[9] Nat’l Low Income Hous. Coal., A Brief Historical Overview of Affordable Rental Housing 1 (2015), https://nlihc.org/sites/default/files/Sec1.03_Historical-Overview_2015.pdf.

[10] HUD, supra note 4.

[11] Jennifer Schwartz, HUD Publishes Data on 2021 Housing Credit Tenant Characteristics, NCSHA (Aug. 10, 2023), https://www.ncsha.org/blog/hud-publishes-data-on-2021-housing-credit-tenant-characteristics/; Terry Gross, A ‘Forgotten History’ of How the U.S. Government Segregated America, NPR (May 3, 2017), https://www.npr.org/2017/05/03/526655831/a-forgotten-history-of-how-the-u-s-government-segregated-america.

[12] 24 C.F.R. § 903.7(m).

[13] Id.

[14] See United States v. Jones, 565 U.S. 400, 404–405 (2012).

[15] Id. at 430.

[16] United States v. Cuevas-Sanchez, 821 F.2d 248, 251 (5th Cir. 1987).

[17] Bryan Johnston, A Brief History of Surveillance Cameras, Deep Sentinel (July 18, 2022), https://www.deepsentinel.com/blogs/home-security/history-of-surveillance-cameras/?srsltid=AfmBOoqBpSXdgV3DJky52z_nliObx6OuS02BlOvJT2gNSbuv-BfMu8E2.

[18] Id.

[19] AWS, What is Facial Recognition?, Amazon, https://aws.amazon.com/what-is/facial-recognition/ (last visited Feb. 14, 2025).

[20] How Viisights Detects Suspicious Activity, Viisights, https://www.viisights.com/products/wise/suspicious-activity/ (last visited Feb. 15, 2025).

[21] AI Analytics, Rhombus, www.rhombus.com/ai-analytics/ (last visited Feb. 14, 2025).

[22] Complaint at 3, Pondexter-Moore v. D.C. Hous. Auth., No. 1:22-cv-03706 (D.D.C. Dec. 12, 2022).

[23] Your Face Belongs to Us, Kirkus Reviews, https://www.kirkusreviews.com/book-reviews/kashmir-hill/your-face-belongs-to-us/ (last visited Feb. 15, 2025).

[24] Terence Liu, How We Store and Search 30 Billion Faces, Clearview AI (Apr. 18, 2023),  https://www.clearview.ai/post/how-we-store-and-search-30-billion-faces.

[25] The French SA Fines Clearview AI EUR 20 Million, Eur. Data Prot. Bd. (Sept. 3, 2024), https://www.edpb.europa.eu/news/national-news/2022/french-sa-fines-clearview-ai-eur-20-million_en; Dutch Supervisory Authority Imposes a Fine on Clearview Because of Illegal Data Collection for Facial Recognition, Eur. Data Prot. Bd. (Sept. 3, 2024), https://www.edpb.europa.eu/news/national-news/2024/dutch-supervisory-authority-imposes-fine-clearview-because-illegal-data_en.

[26] See generally Complaint, ACLU, et al. v. Clearview A.I., No. 2020CH04353 (Ill. Cir. Ct. 2020).

[27] Consent Order, ACLU, et al. v. Clearview A.I., No. 2020CH04353 (Ill. Cir. Ct. 2020).

[28] Id. at 3.

[29] Fed. Trade Comm’n, Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards (Dec. 19, 2023), https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without.

[30] Id.

[31] Id.

[32] Erin McElroy, et al., COVID-19 Crisis Capitalism Comes to Real Estate, Boston Rev. (May 7, 2020), https://www.bostonreview.net/articles/erin-mcelroy-meredith-whittaker-genevieve-fried-covid-19-and-tech/.

[33] Fed. Trade Comm’n, supra note 29.

[34] Id.

[35] Colo. Rev. Stat. § 2-3-1707(3)(j).

[36] Shannon Flynn, 13 Cities Where Police are Banned from Using Facial Recognition Tech, Innovation & Tech Today, https://innotechtoday.com/13-cities-where-police-are-banned-from-using-facial-recognition-tech/ (last visited Feb. 15, 2025).

[37] Eur. Parliament, Regulating facial recognition in the EU 25 (Sept. 2021), https://www.europarl.europa.eu/RegData/etudes/IDAN/2021/698021/EPRS_IDA(2021)698021_EN.pdf.

[38] Pondexter-Moore, Compl. at 2.

[39] Douglas MacMillan, Eyes on the poor: Cameras, facial recognition, watch over public housing, Wash. Post (May 16, 2023), https://www.washingtonpost.com/business/2023/05/16/surveillance-cameras-public-housing/.

[40] Id.

It Will Take a Village to Ensure an Authentic Future for Generation Beta

By: Susan-Caitlyn Seavey

 

Introduction

One of the many glaring issues that future generations will face is the decline in frequency of in-person human interaction. Today’s technology, especially artificial intelligence (AI), offers unparalleled tools that can be used for the betterment and progression of humanity. For example, new customer service bots, called “conversational agents,” are responding to customer inquiries with efficient, personalized, and human-like responses, “reshaping how we engage with [ ] companies, [and] creating a world where efficiency meets empathy–or at least an impressively convincing facsimile of it.”[1] AI software is also providing efficiency for individuals through multitasking functions, auto-generated answers to questions, and draft responses to texts and emails, saving the user valuable time. However, this technology can also create unrealistic standards and attractive environments that isolate individuals from their reality. Around the globe, AI technology is becoming more normalized and ubiquitous, with software like co-pilot in the workplace and AI robots as companions, friends, and romantic partners at home. This rapid development is “particularly concerning given its novelness, the speed and autonomy at which the technology can operate, and the frequent opacity even to developers of AI systems about how inputs and outputs may be used or exposed.”[2] We face the challenge of balancing the benefits of efficiency and progress that this technology offers against the risk of being fully consumed by it, at the cost of the youngest members of society.

This powerful technology should be used to embrace reality and continue striving for a better world; one that actually exists off of a screen. Jennifer Marsnik summarized this challenge well by contemplating how society can “maintain authenticity, human intelligence and personal connection in a landscape increasingly dominated by algorithms, data and automation.”[3] Young minds are the most susceptible to the unrealistic standards and depictions AI can create. Considering the difficulty even adults can sometimes have when determining whether a visual is real or generated by AI, the young generations, with their still-developing minds, will grow up in a landscape where they do not always know what is authentic and what is not. If society fails to provide safeguards and implement protections around children and their use of our ever-progressing technology, future generations could end up stuck in a perpetual cycle of unrealistic expectations and disappointment in the real world, prompting more isolation and leading to the degradation of communities. Preserving authentic relationships and interactions with the real world will require a village: Congress must support new and developing legislation for online safety for children, companies should adopt management frameworks and clearinghouse functions to ensure transparency and accountability in consumer engagement, and parents, teachers, and community leaders must work together to encourage social-emotional learning and in-person interactions in children and teens at home, at school, and in their communities.

Continue reading

Google’s New AI-Powered Customer Service Tools Spark Back-to-Back Class Action Lawsuits

Zion Mercado 

 

Google recently began rolling out “human-like generative AI powered” customer service tools to help companies enhance their customer service experience.[1] This new service is known as the “Cloud Contact Center AI,” and touts a full package of customer service-based features to help streamline customer service capabilities.[2] Companies that utilize the new service can create virtual customer service agents, access AI-generated insights providing feedback on customer service interactions, store and manage data on a specialized “Contact Center AI Platform,” and consult with Google’s team of experts on how to improve the AI-integrated systems.[3] However, one key feature that has recently come into controversy is the ability for companies to use real-time, AI-generated responses to customer inquiries, which can then be relayed back to the customer by a live agent.[4] This is known as the “Agent Assist” feature.

Agent Assist operates by “us[ing] machine learning technology to provide suggestions to . . . human agents when they are in a conversation with a customer.”[5] These suggestions are based on the company’s own data and conversations.[6] Functionally, when Agent Assist is in use, there are two parties to the conversation: the live customer service agent and the customer. The AI program listens in and generates responses in real time for the live customer service agent. Some have argued that this practice violates California’s wiretapping statute, alleging that the actions of Google’s AI program, which is nothing more than a complex computer program, are attributable to Google itself.[7] They allege that Google, through its AI-integrated services, has been listening in on people’s conversations without their consent or knowledge.[8]
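The schematic Python sketch below illustrates the data flow described above: the customer and the live agent converse, while a third component receives the running transcript and drafts suggested replies in real time. The function names and logic are invented for illustration only and are not Google’s actual Agent Assist API.

```python
# Hypothetical sketch of the Agent Assist-style flow described above.
def suggest_reply(conversation: list[str]) -> str:
    """Stand-in for an AI service that drafts a reply from the full transcript."""
    last_message = conversation[-1]
    return f"Suggested reply to: '{last_message}'"

conversation: list[str] = []

def customer_says(message: str) -> None:
    conversation.append(f"Customer: {message}")
    # The AI component "listens in" on the transcript and drafts a response...
    suggestion = suggest_reply(conversation)
    # ...which the live agent reviews and may relay back to the customer.
    agent_says(suggestion)

def agent_says(message: str) -> None:
    conversation.append(f"Agent: {message}")

customer_says("I was double-billed last month.")
print("\n".join(conversation))
```

The point of contention in the litigation is the middle step: the suggestion engine only works because it receives the entire customer conversation as it happens.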

The wiretapping statute in question is a part of the California Invasion of Privacy Act (“CIPA”), and prohibits the intentional tapping, reading, or any other unauthorized connection, whether physically or otherwise, with any communication being transmitted via line, wire, cable, or instrument without the consent of all parties to the communication.[9] It is also unlawful under the statute to communicate any information so obtained or to aid another in obtaining information via prohibited means.[10]

In 2023, a class action lawsuit was filed against Google on behalf of Verizon customers who alleged that Google “used its Cloud Contact Center AI software as a service to wiretap, eavesdrop on, and record” calls made to Verizon’s customer service center.[11] In the case, District Court Judge Rita F. Lin granted Google’s motion to dismiss on grounds that the relationship between Google and Verizon and the utilization of the Cloud Contact Center AI service fell squarely within the statutory exception to the wiretapping statute.[12] Now, the wiretapping statute does contain an explicit exception for telephone companies and their agents, which is the exception upon which Judge Lin relied; however, that exception is narrowed to such acts that “are for the purpose of construction, maintenance, conduct or operation of the services and facilities of the public utility or telephone company.”[13]

Continue reading

Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm flowing from AI?   

Most people can easily recognize the immense impact technological developments have had in the recent decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to effectively govern these modern tools. With the implementation and widespread usage of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI laws, scholars and professionals are limited to discussing different theories of liability that may be suitable for AI, such as strict liability and negligence law.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked the chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, falsely describing Professor Lemley’s research as a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would and/or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. However, AI transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative option to account for liability. A guide of best practices may be helpful to direct AI. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Continue reading

Addressing the Vectors for Attack on Artificial Intelligence Systems Used in Clinical Healthcare through a Robust Regulatory Framework: A Survey

By Benjamin Clark

Introduction and Overview

Artificial intelligence has captivated the current interest of the general public and academics alike, bringing closer attention to previously unexplored aspects of these algorithms, such as how they have been implemented into critical infrastructure, ways they can be secured through technical defensive measures, and how they can best be regulated to reduce risk of harm. This paper will discuss vulnerabilities common to artificial intelligence systems used in clinical healthcare and how bad actors exploit them before weighing the merits of current regulatory frameworks proposed by the U.S. and other nations for how they address the cybersecurity threats of these systems.

Primarily, artificial intelligence systems used in clinical research and healthcare settings involve either machine learning or deep learning algorithms.[1] Machine learning algorithms automatically learn and improve themselves without needing to be specifically programmed for each intended function.[2] However, these algorithms require that input data be pre-labeled by programmers to train algorithms to associate input features and best predict the labels for output, which involves some degree of human intervention.[3] The presence of humans in this process is referred to as “supervised machine learning” and is most often observed in systems used for diagnostics and medical imaging, in which physicians set markers for specific diagnoses as the labels and algorithms are able to categorize an image as a diagnosis based on the image’s characteristics.[4] Similarly, deep learning is a subset of machine learning characterized by its “neural network” structure, in which input data is passed through input, “hidden,” and output layers to identify patterns in the data.[5] Deep learning algorithms differ from other machine learning algorithms in that they require no human intervention after being trained; instead, deep learning algorithms process unlabeled data by determining what input is most important to create their own labels.[6]
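To make the supervised-learning description above concrete, the minimal Python sketch below uses nothing more than a one-nearest-neighbor rule: labeled examples are supplied up front, and the algorithm predicts a label for each new input by finding the most similar labeled example. The feature values and labels are invented for illustration and are not drawn from any clinical system.

```python
# Minimal sketch of supervised learning: labeled examples in, predicted labels out.
from math import dist

# Labeled training data: (feature vector, physician-assigned label).
# The two features stand in for image-derived measurements (values are invented).
training_data = [
    ((0.90, 0.80), "abnormal"),
    ((0.85, 0.90), "abnormal"),
    ((0.20, 0.30), "normal"),
    ((0.10, 0.25), "normal"),
]

def predict(features: tuple[float, float]) -> str:
    """1-nearest-neighbor classifier: copy the label of the closest labeled example."""
    closest = min(training_data, key=lambda example: dist(example[0], features))
    return closest[1]

print(predict((0.80, 0.75)))  # -> "abnormal"
print(predict((0.15, 0.20)))  # -> "normal"
```

Deep learning systems replace this hand-built rule with layered networks that learn their own internal representations, but the supervised setup, labeled inputs paired with expected outputs, is the same starting point described in the paragraph above.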

Continue reading

U.S. v. Google LLC: An overview of the landmark antitrust case and its impact on consumer privacy, A.I., and the future of the internet.

By William Simpson

 

I. Intro

The ongoing antitrust case against Google alleging anticompetitive conduct relating to the company’s search engine could, in the near term, result in a breakup of the company or, alternatively, indicate that existing antitrust law is ill-suited to address outsize market shares in the digital economy.[1] On a broader scale, this case could have major effects on consumer privacy, A.I., and the character of the internet going forward. The consequences could be, in a word, enormous.

 

II. Background

 

In October 2020, the Department of Justice (DOJ) filed a complaint against Google, alleging that Google violated the Sherman Antitrust Act[2] when it:

  • Entered into exclusivity agreements that forbid preinstallation of any competing search service;
  • Entered into tying arrangements that force preinstallation of its search applications in prime locations on mobile devices and make them undeletable;
  • Entered into long-term agreements with Apple that require Google to be the default general search engine on Apple’s popular Safari browser and other Apple search tools; and
  • Generally used monopoly profits to buy preferential treatment for its search engine on devices, web browsers, and other search access points, creating a continuous and self-reinforcing cycle of monopolization.[3]

The DOJ’s complaint concludes that such practices harm competition and consumers, inhibiting innovation where new companies cannot “develop, compete, and discipline Google’s behavior.”[4] In particular, the DOJ argues that Google’s conduct injures American consumers who are subject to Google’s “often-controversial privacy practices.”[5]

In response, Google rejects the DOJ’s argument, deeming the lawsuit “deeply flawed.”[6] “People use Google because they choose to,” says a Google spokesperson, “not because they’re forced to or because they can’t find alternatives.”[7] Challenging the DOJ’s claims, Google asserts that any deals that it entered into are analogous to those a popular cereal brand would enter into for preferential aisle placement.[8]

Continue reading

Generative AI Algorithms: The Fine Line Between Speech and Section 230 Immunity

 By Hannah G. Babinski

ABSTRACT

Russian-American writer and philosopher Ayn Rand once observed, “No speech is ever considered, but only the speaker. It’s so much easier to pass judgment on a man than on an idea.”[1] But what if the speaker is not a man, woman, or a human at all? Concepts of speech and identities of speakers have been the focal points of various court cases and debates in recent years. The Supreme Court and various district courts have faced complex and first-of-their-kind questions concerning emerging technologies, namely algorithms and recommendations, and contemplated whether their outputs constitute speech on behalf of an Internet service provider (“Internet platform”) that would not be covered by Section 230 of the Communications Decency Act (“Section 230”).  In this piece, I will examine some of the issues arising from the questions posed by Justice Gorsuch in Gonzalez v. Google, LLC, namely whether generative AI algorithms and their relative outputs constitute speech that is not immunized under Section 230. I will provide an overview of the technology behind generative AI algorithms and then examine the statutory language and interpretation of Section 230, applying that language and interpretive case law to generative AI. Finally, I will provide demonstrative comparisons between generative AI technology and human content creation and foundational Copyright Law concepts to illustrate how generative AI technologies and algorithmic outputs are akin to unique, standalone products that extend beyond the protections of Section 230.

 

Continue Reading