A.I., Facial Recognition, and the New Frontier of Housing Inequality


By: Caroline Aiello

 

Introduction

“As soon as Ms. Pondexter-Moore steps outside her home, she knows she is being watched.”[1] Schyla Pondexter-Moore is a D.C. resident who has been living in public housing for over a decade.[2] In 2022, she sued the D.C. Housing Authority for violating her right to privacy when it forcibly installed advanced surveillance systems in her neighborhood, denied her access to information about the systems, and jailed her overnight while cameras capable of peering into her living room and bedroom were mounted.[3] Schyla is one of over a million public housing residents in the United States.[4] To meet their security obligations at these housing complexes, resource-strapped landlords are adopting “landlord tech.”[5] Concern for the safety of public housing residents is legitimate and pressing. However, advanced surveillance systems using new features like artificial intelligence are over-surveilling and under-protecting the people they monitor.[6] As jurisdictions in the U.S. and abroad evaluate these systems, key questions emerge about how to balance technological innovation with fundamental principles of respect, dignity, and equity in housing access.

Continue reading


It Will Take a Village to Ensure an Authentic Future for Generation Beta

By: Susan-Caitlyn Seavey

 

Introduction

One of the many glaring issues that future generations will face is the declining frequency of in-person human interaction. Today’s technology, especially artificial intelligence (AI), offers unparalleled tools that can be used for the betterment and progression of humanity. For example, new customer service bots, called “conversational agents,” are responding to customer inquiries with efficient, personalized, and human-like responses, “reshaping how we engage with [ ] companies, [and] creating a world where efficiency meets empathy–or at least an impressively convincing facsimile of it.”[1] AI software also provides efficiency for individuals through multitasking functions, auto-generated answers to questions, and draft responses to texts and emails, saving the user valuable time. However, this technology can also create unrealistic standards and attractive environments that isolate individuals from their reality. Around the globe, AI technology is becoming more normalized and ubiquitous, with software like Copilot in the workplace and AI robots as companions, friends, and romantic partners at home. The rapid development is “particularly concerning given its novelness, the speed and autonomy at which the technology can operate, and the frequent opacity even to developers of AI systems about how inputs and outputs may be used or exposed.”[2] We face the challenge of balancing the efficiency and progress this technology offers against the risk of being fully consumed by it, at the cost of society’s youngest members.

This powerful technology should be used to embrace reality and continue striving for a better world, one that actually exists off a screen. Jennifer Marsnik summarized this challenge well by contemplating how society can “maintain authenticity, human intelligence and personal connection in a landscape increasingly dominated by algorithms, data and automation.”[3] Young minds are the most susceptible to the unrealistic standards and depictions AI can create. Considering the difficulty even adults can have in determining whether a visual is real or generated by AI, younger generations, with their still-developing minds, will grow up in a landscape where they cannot always know what is authentic and what is not. If society fails to provide safeguards and implement protections around children and their use of our ever-progressing technology, we could end up with future generations stuck in a perpetual cycle of unrealistic expectations and disappointment in the real world, prompting more isolation and leading to the degradation of communities. Preserving authentic relationships and interactions with the real world will require a village: Congress must support new and developing legislation for online safety for children, companies should adopt management frameworks and clearinghouse functions to ensure transparency and accountability in consumer engagement, and parents, teachers, and community leaders must work together to encourage social-emotional learning and in-person interactions in children and teens at home, at school, and in their communities.

Continue reading

Privacy in Death: Conserving your Power in Legacy


Gabriel Siwady-Kattan

 

Introduction

Throughout our lives, we store everything online. This means that not only can a person keep physical assets in a bank, but they can also hold digital assets online for access and distribution. Who should be able to access those assets when we die? The IRS defines a digital asset as “a digital representation of value recorded on a cryptographically secure distributed ledger or similar technology” and names as examples convertible virtual currency and cryptocurrency, stablecoins, and Non-Fungible Tokens (NFTs).[1] The IRS further elaborates that “[i]f a particular asset has characteristics of a digital asset, [then] it’s treated as one for federal income tax purposes.”[2] Beyond digital assets with a financial component, however, there are also images, videos, digital documents, and electronically stored music. These could be held by any person, and in our modern age, most people have an account where their digital information is stored, whether with Apple, Google, Facebook, or Instagram. The existence of digital assets has raised many issues, including how to deal with their distribution at the time of death.

To deal with this issue, the Uniform Law Commission (ULC) drafted the Uniform Fiduciary Access to Digital Assets Act (hereinafter referred to as the Digital Assets Act).[3] This Act essentially treated digital assets as it would any other kind of traditional property a person held at the time of their death.[4] This meant that an executor had near-unsupervised power to access, manage, and distribute a decedent’s digital assets.[5] Under the Digital Assets Act, an executor had the same access to digital assets as the owner had at the time of their death.[6]

Naturally, this “open-access approach” could raise personal privacy concerns. What if, in the process of getting a decedent’s affairs in order, an executor came across communications with a third party? What if that communication shed light on an unknown aspect of the deceased’s life? What if that communication was meant to remain confidential? And what about that third party’s identity?

On top of these personal privacy concerns, the Digital Assets Act’s provisions ran contrary to some tech companies’ terms of use agreements. For example, tech companies have their own ways of managing the content on their platforms, and often control or limit the agency a user or consumer has over their own communications. To this end, tech companies almost always require users to agree to a terms of use agreement, which typically includes provisions on how and with whom data may be shared.

Continue reading

Rooting Around in the Dark: Agencies Refusing to Comply with Dent Motions


Emily Burns

 

Introduction 

The Freedom of Information Act (“FOIA”) is the principal mechanism that allows people to request records held by agencies within the federal government.[1] In the immigration context, a very common type of FOIA request is for an A-File, a record of every interaction between a non-citizen and an immigration-related federal agency.[2]

For people in immigration proceedings, obtaining an A-File allows non-citizens and their lawyers to access information crucial to defending against deportation or gaining immigration benefits, such as entry and exit dates from the United States, copies of past applications submitted to federal agencies, or statements made to U.S. officials.[3] To obtain an A-File, non-citizens must affirmatively request the file through FOIA from an agency such as United States Citizenship and Immigration Services (USCIS) or Immigration and Customs Enforcement (ICE).[4] However, one carve-out to this process exists, available only in the Ninth Circuit: Dent motions.[5] Dent motions arose from Dent v. Holder, in which the Ninth Circuit recognized that the government violated Sazar Dent’s right to due process when it required Mr. Dent to request his A-File through FOIA rather than simply handing the file over to him when he requested it in a prior court proceeding.[6]

Continue reading

Honesty is the Best (Privacy) Policy: The Importance of Transparency in Disclosing Data Collection for AI Training


Alexandra Logan

 

Introduction

This past July, the Federal Trade Commission (“FTC”), the Department of Justice, and a number of international antitrust enforcers issued a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The Joint Statement explains that “[f]irms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy . . . it is important that consumers are informed . . . about when and how an AI application is employed in the products and services they purchase or use.” The FTC can investigate alleged unfair and deceptive acts or practices (“UDAP”) under Section 5 of the FTC Act.[2] Consumers are looking for more ways to limit companies’ ability to collect and use their data for AI training purposes,[3] and companies should be vigilant in keeping their privacy policies up to date and thorough. Doing so can help them avoid making deceptive or misleading claims about the data they collect and what they do with it. Recently, X and LinkedIn have come under fire from consumers over their data collection practices and their ambiguous representations and omissions about how they use consumer data.

Continue reading

Profits Over Privacy: A Confirmation of Tech Giants’ Mass Surveillance and a Call for Social Media Accountability


Aysha Vear

 

In an effort to better understand the data collection and use practices of major social media and video streaming services (SMVSSs), the Federal Trade Commission issued orders to file Special Reports under Section 6(b) of the FTC Act[1] to nine companies in 2020.[2] The orders sought to understand how the companies collect, track, and use their consumers’ personal and demographic information; how they handle advertising and targeted advertising; whether they apply algorithms, data analytics, and artificial intelligence (AI) to consumer information; and how their practices impact children and teens.[3] Titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services,” the 2024 report was four years in the making. A key but unsurprising finding was that the targeted advertising business model was the catalyst for extensive data gathering and harmful behaviors, and that companies failed to protect users, particularly children and teens.[4]

Data Practices and User Rights
The companies involved in the FTC report collected a large amount of data about consumers’ activity on their platforms and also gleaned information about consumers’ activity off the platforms, in ways that exceeded user expectations.[5] The Commission found that a massive amount of data was collected or inferred about users, including demographic information, user metrics, and data about their interactions with the network.[6] With respect to privacy settings specifically, many companies did not collect any information at all about user changes or updates to their privacy settings on the SMVSSs.[7]

The information also came from many sources. Some user information was input directly by the SMVSS user when creating a profile; other information was passively gathered on or through engagement with the SMVSS, culled from other services provided by company affiliates or other platforms, inferred through algorithms, data analytics, and AI, or obtained from advertising trackers, advertisers, and data brokers. The data collected was used for many purposes, including targeted advertising, AI, business purposes like optimization and research and development, enhancing and analyzing user engagement, and inferring or deducing other information about the user.[8] In addition, most companies deliberately tracked consumer shopping behaviors and interests.[9] Little transparency, if any, was provided about the targeting, optimization, and analysis of user data.

Continue reading

Anderson v. TikTok: a New Challenge to § 230 Immunity


John Blegen

 

In August 2024, the Third Circuit overturned a Pennsylvania district court’s decision dismissing a suit brought by Tawainna Anderson against TikTok.[1] Anderson sued on behalf of her deceased daughter Nylah, alleging products liability, negligence, and wrongful death claims; the ten-year-old died of self-asphyxiation after watching numerous videos TikTok routed to her “For You” page.[2] The videos, created by third parties and then uploaded to TikTok, encouraged users to choke themselves with “belts, purse strings, or anything similar,” as part of a viral “blackout challenge.”[3] Nylah’s mother found her daughter asphyxiated in the back of a closet after the ten-year-old had tried to recreate one such video.[4]

The District Court for the Eastern District of Pennsylvania originally dismissed Anderson’s complaint on the grounds that TikTok was shielded from liability for content created by third parties under § 230 of the Communications Decency Act.[5] But on appeal, the Third Circuit rejected this argument, holding that while § 230 may protect social media platforms such as TikTok from suit over content provided by third-party users, in this case it was TikTok’s own algorithm that was the subject of the lawsuit.[6] This follows a recent Supreme Court decision, Moody v. NetChoice, which held that the algorithms of social media platforms may themselves be “expressive product” protected under the First Amendment, and therefore subject to greater legal scrutiny.[7] In the court’s words: “Because the information that forms the basis of Anderson’s lawsuit – TikTok’s recommendations via its FYP algorithm – is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.”[8]

Since the August ruling, commentators have noted how impactful this case could be for internet content regulation and the social media industry at large.[9] David French, a legal commentator and New York Times columnist, wrote, “Nylah’s case could turn out to be one of the most significant in the history of the internet.”[10] Leah Plunket, another legal scholar, speaking specifically on the ruling’s impact on companies’ legal counsel, said: “My best guess is that every platform that uses a recommendation algorithm that could plausibly count as expressive activity . . . woke up in their general counsel’s office and said, ‘Holy Moly.’”[11]

Continue reading

Privacy Needs Security, Security Needs Privacy


William O’Reilly

 

I. Introduction

Security Operations Centers (SOCs) for enterprises across the country are in need of professionals. They need professionals to fill the roles that already exist, and they need to add roles to deal with the changing regulatory landscape. For an enterprise, the best practice is an investment in “people, process, and technology.”[1] It is true that people are the most expensive part of an SOC.[2] However, the reason there is a shortage is not that enterprises around the U.S. are skimping on labor. There simply are not enough trained professionals. The training to become a cybersecurity professional is neither easy nor cheap. Enterprises are in danger from this absence of professionals, and it may be worth it for them to shoulder the cost of education and certification in pursuit of their goal of self-preservation. One cost the enterprise will have to face in hiring professionals is the establishment of career potential and pay. There is also an ongoing cost for organizations that need to provide recurring training to level up their employees over time.[4] Training also assists with retention of personnel, making it a necessary cost to the enterprise.[5] Finally, burgeoning privacy laws create burdens and liabilities that the SOC in its present form is only partially equipped to deal with. Fortunately, over 20 percent of enterprises plan to increase their investment in cybersecurity post-breach.[6] That investment should include privacy professionals.

Potential employees face costs associated with education and skill development. The cost of training, education, and certifications can be a barrier to professionals entering the cybersecurity industry. No SOC will have the same composition or volume, but most SOC services demand that certain roles be filled by professionals with specific training. Legislation is also demanding that those roles be filled.[7] Each of these professions has specific responsibilities, which require specific skills, and each of those skills can be represented through certifications.[8] Each of these certifications has a cost. Laying out this cost may illustrate one reason for the dearth of skilled professionals and may show an enterprise the value that a professional expects to get out of their investment.

Continue reading

Google’s New AI-Powered Customer Service Tools Spark Back-to-Back Class Action Lawsuits


Zion Mercado 

 

Google recently began rolling out “human-like generative AI powered” customer service tools to help companies enhance their customer service experience.[1] This new service, known as “Cloud Contact Center AI,” touts a full package of customer service features designed to streamline customer service capabilities.[2] Companies that utilize the new service can create virtual customer service agents, access AI-generated insights providing feedback on customer service interactions, store and manage data on a specialized “Contact Center AI Platform,” and consult with Google’s team of experts on how to improve the AI-integrated systems.[3] However, one key feature that has recently come under scrutiny is the ability to generate real-time AI responses to customer inquiries, which a live agent can then relay back to the customer.[4] This is known as the “Agent Assist” feature.

Agent Assist operates by “us[ing] machine learning technology to provide suggestions to . . . human agents when they are in a conversation with a customer.”[5] These suggestions are based on the company’s own data and conversations.[6] Functionally, when Agent Assist is in use, there are two parties to the conversation: the live customer service agent, and the customer. The AI program listens in and generates responses in real time for the live customer service agent. Some have argued that this violates California’s wiretapping statute by alleging that the actions of Google’s AI program, which is nothing more than a complex computer program, are attributable to Google itself.[7] Those who have done so have alleged that Google, through its AI-integrated services, has been listening in on people’s conversations without their consent or knowledge.[8]

The wiretapping statute in question is a part of the California Invasion of Privacy Act (“CIPA”), and prohibits the intentional tapping, reading, or any other unauthorized connection, whether physically or otherwise, with any communication being transmitted via line, wire, cable, or instrument without the consent of all parties to the communication.[9] It is also unlawful under the statute to communicate any information so obtained or to aid another in obtaining information via prohibited means.[10]

In 2023, a class action lawsuit was filed against Google on behalf of Verizon customers who alleged that Google “used its Cloud Contact Center AI software as a service to wiretap, eavesdrop on, and record” calls made to Verizon’s customer service center.[11] In that case, District Court Judge Rita F. Lin granted Google’s motion to dismiss on the grounds that the relationship between Google and Verizon, and the utilization of the Cloud Contact Center AI service, fell squarely within the statutory exception to the wiretapping statute.[12] The wiretapping statute does contain an explicit exception for telephone companies and their agents, which is the exception upon which Judge Lin relied; however, that exception is limited to acts that “are for the purpose of construction, maintenance, conduct or operation of the services and facilities of the public utility or telephone company.”[13]

Continue reading

House Rules: Addressing Algorithmic Discrimination in Housing through State-Level Rulemaking


William Simpson

 

Introduction

As is the case for many federal agencies,[1] the Department of Housing and Urban Development (HUD) is intent on addressing the risk of algorithmic discrimination within its primary statutory domain—housing. But in the wake of Loper Bright,[2] which overturned Chevron[3] deference, and with it the general acquiescence of federal courts to agency interpretations of relevant statutes, HUD is forced to regulate AI and algorithmic decision-making in the housing context through guidance documents and other soft law mechanisms.[4] Such quasi-regulation impairs the efficacy of civil rights laws like the Fair Housing Act[5] (FHA) and subjects marginalized groups to continued, and perhaps increasingly insidious,[6] discrimination. With HUD hamstrung in its ability to effectuate meaningful AI regulation, states like Maine—which remains a Chevron state—must step up within their respective jurisdictions to ensure that algorithmic discrimination is mitigated in the housing sector.

 

A Brief Primer on Chevron and Loper Bright

In 1984, the Supreme Court held that where a “statute is silent or ambiguous with respect to a specific issue . . . a [federal] court may not substitute its own construction of [the statute] for a reasonable interpretation made by the administrator of an agency.”[7] In other words, where an agency interpretation of an ambiguous statute is reasonable, a court must defer to the agency. Proponents of Chevron deference have heralded the opinion for its placement of policy decisions in the hands of expert and politically accountable agencies,[8] whereas detractors deemed it a violation of the separation of powers doctrine.[9] In June 2024, the detractors won out.

“Chevron is overruled,” wrote Chief Justice John Roberts.[10] To wit, “courts need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”[11] Roberts rested his opinion on the separation of powers principle,[12] a textualist construction of § 706 of the Administrative Procedure Act,[13] a historical analysis,[14] the continued availability of Skidmore deference,[15] and the fact that Chevron was subject to numerous “refinements” over the years.[16]

It goes without saying that this jurisprudential U-turn has profound implications for HUD and the statutes it implements.[17] As a result of Chevron’s demise, “any rulemaking proposed by HUD . . . may be more vulnerable to lawsuits than in years past.”[18] Namely, HUD relies on the FHA to authorize its policies, which “broadly describes . . . prohibited discriminatory conduct,” and which HUD interprets “into enforceable directives to serve Congress’ stated goals.”[19] Without Chevron deference, HUD’s interpretations of the FHA are certain to be questioned, and significant barriers for Americans facing housing discrimination will arise.[20]

 

HUD’s Effort to Combat Algorithmic Discrimination in a Post-Chevron Paradigm

In apparent anticipation of such challenges to its interpretations, HUD has resorted to soft law mechanisms like guidance documents to combat algorithmic discrimination. Importantly, these informal mechanisms do not carry the force of law, and are therefore outside the scope of Chevron deference and unaffected by the Loper Bright decision.[21] Such documents include HUD’s “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,”[22] and “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms.”[23] The former pronouncement examines how housing providers and tenant screening services can evaluate rental applicants in a nondiscriminatory way—including by choosing relevant screening criteria, using accurate records, remaining transparent with applicants and allowing them to challenge decisions, and designing screening models for FHA compliance.[24] Of note, the document confirms that the FHA “applies to housing decisions regardless of what technology is used” and that “[b]oth housing providers and tenant screening companies have a responsibility to avoid using these technologies in a discriminatory manner.”[25]

The latter document, in turn, “addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence . . . to facilitate advertisement targeting and delivery” vis-à-vis housing-related transactions.[26] Like tenant screening services, algorithmic targeting and delivery of advertisements “risks violating the [FHA] when used for housing-related ads,” and can implicate both advertisers and ad platforms.[27] For example, liability may arise from using algorithmic tools to “segment and select potential audiences by [protected] category,” “deliver ads only to a specified ‘custom’ audience,” or “decide which ads are actually delivered to which consumers, and at what location, time, and price.”[28] The document recommends that advertisers use ad platforms that proactively mitigate discriminatory practices and that they “monitor outcomes of ad[] campaigns for housing-related ads.”

Indeed, “[w]hile the guidance represents an important step forward in safeguarding housing rights, it isn’t currently more than a suggestion to housing providers.”[29] Hence the dilemma facing regulators in this post-Chevron paradigm: issue a formal rule that will provide the intended protection but is prone to litigation, or deliver informal pronouncements that remain largely immune to challenge but fail to offer enforceable requirements against harmful practices.[30] As this administrative predicament persists, it is state governments, including Maine, that must fill the resulting void.

Continue reading