Google’s New AI-Powered Customer Service Tools Spark Back-to-Back Class Action Lawsuits

Zion Mercado 

 

Google recently began rolling out “human-like generative AI powered” customer service tools to help companies enhance their customer service experience.[1] The new service, known as “Cloud Contact Center AI,” touts a full package of features designed to streamline customer service operations.[2] Companies that utilize the service can create virtual customer service agents, access AI-generated insights providing feedback on customer service interactions, store and manage data on a specialized “Contact Center AI Platform,” and consult with Google’s team of experts on how to improve their AI-integrated systems.[3] However, one key feature has recently come into controversy: real-time AI-generated responses to customer inquiries, which a live agent can then relay back to the customer.[4] This is known as the “Agent Assist” feature.

Agent Assist operates by “us[ing] machine learning technology to provide suggestions to . . . human agents when they are in a conversation with a customer.”[5] These suggestions are based on the company’s own data and conversations.[6] Functionally, when Agent Assist is in use, there are two parties to the conversation: the live customer service agent and the customer. The AI program listens in and generates responses in real time for the live agent. Some have argued that this arrangement violates California’s wiretapping statute, contending that the actions of Google’s AI program, which is nothing more than a complex computer program, are attributable to Google itself.[7] On that theory, Google, through its AI-integrated services, has been listening in on people’s conversations without their consent or knowledge.[8]

The wiretapping statute in question is a part of the California Invasion of Privacy Act (“CIPA”), and prohibits the intentional tapping, reading, or any other unauthorized connection, whether physically or otherwise, with any communication being transmitted via line, wire, cable, or instrument without the consent of all parties to the communication.[9] It is also unlawful under the statute to communicate any information so obtained or to aid another in obtaining information via prohibited means.[10]

In 2023, a class action lawsuit was filed against Google on behalf of Verizon customers who alleged that Google “used its Cloud Contact Center AI software as a service to wiretap, eavesdrop on, and record” calls made to Verizon’s customer service center.[11] District Court Judge Rita F. Lin granted Google’s motion to dismiss on the grounds that the relationship between Google and Verizon, and the utilization of the Cloud Contact Center AI service, fell squarely within a statutory exception to the wiretapping statute.[12] The statute does contain an explicit exception for telephone companies and their agents, the exception upon which Judge Lin relied; however, that exception is limited to acts that “are for the purpose of construction, maintenance, conduct or operation of the services and facilities of the public utility or telephone company.”[13]

House Rules: Addressing Algorithmic Discrimination in Housing through State-Level Rulemaking

William Simpson

 

Introduction

As is the case for many federal agencies,[1] the Department of Housing and Urban Development (HUD) is intent on addressing the risk of algorithmic discrimination within its primary statutory domain—housing. But in the wake of Loper Bright,[2] which overturned Chevron[3] deference, and with it the general acquiescence of federal courts to agency interpretations of relevant statutes, HUD is forced to regulate AI and algorithmic decision-making in the housing context through guidance documents and other soft law mechanisms.[4] Such quasi-regulation impairs the efficacy of civil rights laws like the Fair Housing Act[5] (FHA) and subjects marginalized groups to continued, and perhaps increasingly insidious,[6] discrimination. With HUD crippled in terms of effectuating meaningful AI regulation, states like Maine—which remains a Chevron state—must step up within their respective jurisdictions to ensure that algorithmic discrimination is mitigated in the housing sector.

 

A Brief Primer on Chevron and Loper Bright

In 1984, the Supreme Court held that where a “statute is silent or ambiguous with respect to a specific issue . . . a [federal] court may not substitute its own construction of [the statute] for a reasonable interpretation made by the administrator of an agency.”[7] In other words, where an agency interpretation of an ambiguous statute is reasonable, a court must defer to the agency. Proponents of Chevron deference have heralded the opinion for its placement of policy decisions in the hands of expert and politically accountable agencies,[8] whereas detractors deemed it a violation of the separation of powers doctrine.[9] In June 2024, the detractors won out.

“Chevron is overruled,” wrote Chief Justice John Roberts.[10] To wit, “courts need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”[11] Roberts rested his opinion on the separation of powers principle,[12] a textualist construction of § 706 of the Administrative Procedure Act,[13] a historical analysis,[14] the assurance of Skidmore deference,[15] and the fact that Chevron had been subject to numerous “refinements” over the years.[16]

It goes without saying that this jurisprudential U-turn has profound implications for HUD and the statutes it implements.[17] As a result of Chevron’s demise, “any rulemaking proposed by HUD . . . may be more vulnerable to lawsuits than in years past.”[18] Namely, HUD relies on the FHA to authorize its policies, which “broadly describes . . . prohibited discriminatory conduct,” and which HUD interprets “into enforceable directives to serve Congress’ stated goals.”[19] Without Chevron deference, HUD’s interpretations of the FHA are certain to be questioned, and significant barriers for Americans facing housing discrimination will arise.[20]

 

HUD’s Effort to Combat Algorithmic Discrimination in a Post-Chevron Paradigm

In apparent anticipation of such challenges to its interpretations, HUD has resorted to soft law mechanisms like guidance documents to combat algorithmic discrimination. Importantly, these informal mechanisms do not carry the force of law, and are therefore outside the scope of Chevron deference and unaffected by the Loper Bright decision.[21] Such documents include HUD’s “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,”[22] and “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms.”[23] The former pronouncement examines how housing providers and tenant screening services can evaluate rental applicants in a nondiscriminatory way—including by choosing relevant screening criteria, using accurate records, remaining transparent with applicants and allowing them to challenge decisions, and designing screening models for FHA compliance.[24] Of note, the document confirms that the FHA “applies to housing decisions regardless of what technology is used” and that “[b]oth housing providers and tenant screening companies have a responsibility to avoid using these technologies in a discriminatory manner.”[25]

The latter document, in turn, “addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence . . . to facilitate advertisement targeting and delivery” in housing-related transactions.[26] Like tenant screening services, algorithmic targeting and delivery of advertisements “risks violating the [FHA] when used for housing-related ads,” and can implicate both advertisers and ad platforms.[27] For example, liability may arise from using algorithmic tools to “segment and select potential audiences by [protected] category,” “deliver ads only to a specified ‘custom’ audience,” or “decide which ads are actually delivered to which consumers, and at what location, time, and price.”[28] The document recommends that advertisers use ad platforms that proactively mitigate discriminatory practices and that they “monitor outcomes of ad[] campaigns for housing-related ads.”

Indeed, “[w]hile the guidance represents an important step forward in safeguarding housing rights, it isn’t currently more than a suggestion to housing providers.”[29] Hence the dilemma facing regulators in this post-Chevron paradigm: issue a formal rule that will provide the intended protection but is prone to litigation, or deliver informal pronouncements that remain largely immune to challenge but fail to offer enforceable requirements against harmful practices.[30] As this administrative predicament persists, it is state governments, including Maine, that must fill the resulting void.

Privacy Concerns with Health Care Providers’ Use of Personal Devices for Medical Images

By: Deirdre Sullivan

Last year I had to go to urgent care for a second degree burn on my chest after spilling boiling hot tea on myself. I was surprised when the provider took a photo of my burn, in a relatively sensitive area, with her own cell phone to upload to the medical file. Seeing my surprise, she assured me that this was through a secure application and the photo of my chest was not actually stored on her phone.

 

The following week, my primary care provider did the same thing to continue tracking the burn’s progress. I also expressed the same concerns, and she went further by showing me that the photo was not stored on her camera roll.

 

While I trusted these two female providers, I was still skeptical and imagined all the ways that this could go wrong for a patient. The practice of using personal devices for imaging is ripe for abuse, and this blog post will explore potential harms to patients as well as liability for health care providers.

 

Patients have a reasonable expectation of privacy in their images not being shared past what is necessary to provide care, and it is without dispute that the practice of using personal devices to photograph patients violates this. There is a tension here between what is best for the privacy interests of the patient being photographed, and the business needs of the healthcare entity in reducing the cost of having devices on hand for providers while also increasing access to devices for taking pictures to document injuries in the medical file or for sharing with other providers for consult.

 

First, there are two different possibilities for how an image could be captured and stored on a provider’s cell phone. The provider could directly take the image without the use of a secure app, storing it on their phone for purposes of a consult with another provider, or the provider could deceptively take an image under the guise of using a secure app and then hide it from the patient. This could easily happen by a provider switching between a secure healthcare app and their own camera app to take a photo, then concealing it from the patient by showing them the last photo from an album rather than the last photo of their camera roll. A provider could even take a screenshot of a sensitive photo within the secure app.

 

In either scenario it would be extremely difficult for the patient to catch the violation of their privacy. Most often these photos are not of faces, making it difficult to identify and track once the photo makes it off the provider’s phone either by intentional sharing, or the phone being stolen or hacked. Further, patients are at a disadvantage and may not know to worry about improper photos being taken or that sensitive photos are stored on their provider’s phone and distributed to other persons.

A Balancing Act: The State of Free Speech on Social Media for Public Officials

By: Raaid Bakridi

1. Introduction

Blocking someone on social media often seems inconsequential; it is a digital medium, and people do it every day.[1] However, the U.S. Supreme Court takes a different view, especially when the person doing the blocking is a public official. The Court held that, in some instances, public officials can be liable for First Amendment violations when they block anyone from their social media page. Writing for the majority, Justice Barrett adopted a two-prong test for cases involving public officials and their social media accounts, because distinguishing between on- and off-the-job activity is frequently a “difficult [line] to draw”[3] and a “fact-intensive inquiry.”[4] The distinction, according to Justice Barrett, “turns on substance, not labels.”[5] But this is not the first time the Court has been asked to weigh in on cases where public officials block their critics on social media, cases which by nature involve possible First Amendment and public forum concerns.

 

2. Background

Former State Assemblyman Dov Hikind filed a lawsuit against Congresswoman Alexandria Ocasio-Cortez for blocking him on Twitter, now known as X. Hikind claimed that the Congresswoman violated his First Amendment rights by blocking him and other individuals critical of her. This raises concerns about politicians’ and public officials’ use of social media and its implications for free speech. Several lower courts have dealt with similar social media blocking issues, each applying a different approach and leading to a split in authority among the Federal Circuit Courts. When confronted with the issue of blocking, the Second, Fourth, Fifth, Sixth, Eighth, and Ninth Circuits have all used variations of two tests: a totality-of-the-circumstances approach or an appearance-focused approach.[10]

In 2021, the Supreme Court had to deal with a similar issue involving the then-sitting President of the United States, Donald Trump. A group of individuals, including the Knight First Amendment Institute, filed a lawsuit against the President,[11] alleging that their First Amendment rights were violated after they were blocked for criticizing his policies. The District Court agreed,[12] and the Second Circuit upheld the decision.[13] President Trump then petitioned the Supreme Court for review, which was denied.[14] After eleven consecutive conferences on the case, the Court sent it back to the Second Circuit to dismiss as moot.[15]

Although no majority opinion was offered, Justice Thomas wrote a detailed concurrence that essentially “highlights the principal legal difficulty that surrounds digital platforms—namely, that applying old doctrines to new digital platforms is rarely straightforward.”[16] Justice Thomas further noted that the case highlights two important facts: “[t]oday’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors … We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”[17] Justice Thomas then concluded that the Trump case was not the right one to do so[18] and that the Court will have to address constitutional constraints on privately owned digital mediums sooner or later.

Artificial Intelligence Liability

By Susan-Caitlyn Seavey

1. Who is Responsible for Harm Flowing from AI?

Most people can easily recognize the immense impact technological developments have had in the recent decade, affecting practically every sector. While the laws and regulations governing our society have somewhat lagged behind these technological advances, we have still managed to create a framework that seems to effectively govern these modern tools. With the implementation and widespread usage of AI, however, our current legal and regulatory parameters no longer fit neatly. We are left with questions about who is ultimately responsible for harms that stem from AI. The issue of liability likely does not have a one-size-fits-all solution, and our government and courts are working to understand and produce the new standards and guidelines AI requires. Stanford Law Fellow Thomas Weber says it well: “Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.”[1] Until there is substantial court precedent and more promulgated AI laws, scholars and professionals are limited to discussing different theories of liability that may be suitable for AI, such as strict liability and negligence law.

In 2023, a man in Belgium ended his life after apparently becoming emotionally dependent on an AI-powered chatbot, leaving behind his wife and two children.[2] Also in 2023, Stanford’s Director of Law, Science and Technology, Professor Lemley, asked chatbot GPT-4 to provide information about himself.[3] The algorithm offered defamatory information, asserting that Professor Lemley’s research was actually a misappropriation of trade secrets.[4] In both of these cases, it is unclear who would and/or could be held liable for the death of the father and for the defamatory information. Traditional liability is long established, with laws and regulations in place and ample case law to support the structure we have created for it. However, AI transcends many of the boxes we have fit other technology into, including the liability framework.

For Professor Lemley to establish the requisite elements of a defamation claim, he would have to prove the bad actor’s intent to defame; the standard requires that a reasonable person should have known that the information was false or exhibited a reckless disregard as to the truth or falsity of the published statement.[5] But how does one show that a robot possesses such requisite intent? It would follow that liability may fall to the developers if intent cannot be apportioned to the AI technology at issue. The apparent irrelevance of intent with AI requires an alternative means of accounting for liability. A guide of best practices may be a helpful start. “Professor Lemley suggests [that by] implementing best practices, companies and developers could shoulder less liability for harms their programs may cause.”[6] While not specifically broken down, this concept is supported by the Cybersecurity and Infrastructure Security Agency’s (CISA) work to develop “best practices and guidance for secure and resilient AI software development and implementation.”[7]

Surveilled in Broad Daylight: How Electronic Monitoring is Eroding Privacy Rights for Thousands of People in Criminal and Civil Immigration Proceedings

By Emily Burns   

What is electronic monitoring?

Electronic monitoring is a digital surveillance mechanism that tracks a person’s movements and activities[1] using radio transmitters, ankle monitors, or cellphone apps.[2] Governmental surveillance through electronic monitoring, used by every state in the U.S. and the federal government, functions as a nearly omnipresent force for people in two particular settings: criminal proceedings and civil immigration proceedings.[3]

In 2021, approximately 254,700 adults were subject to electronic monitoring in the United States, with 150,700 of them in the criminal system and 103,900 in the civil immigration system.[4] While people outside of these systems hold substantial privacy rights against unreasonable governmental searches and seizures of digital materials through Fourth Amendment jurisprudence, the rise of electronic monitoring forces people to “consent” to electronic monitoring in exchange for the ability to be outside of a jail cell.[5]

Within the criminal context, this means that as a condition of supervision, such as parole or probation, certain defendants must consent to “continuous suspicion-less searches” of their electronics and data such as e-mail, texts, social media, and literally any other information on their devices.[6]

In the civil immigration context, immigrants, such as asylum seekers, can face a similar “choice”: remain in detention or be released with electronic monitoring.[7] For immigrants in ICE detention on an immigration bond, this “choice” reads more like a plot device from an episode of Black Mirror than the effect of a chosen DHS policy. While people detained on bond in the criminal system are commonly released when they pay at least 10 percent of the bond, ICE requires immigrants to pay the full amount of the bond, which is mandated by statute at a minimum of $1,500 and averages $9,274 nationally.[8] If the bond is not paid, immigrants can spend months or even years in ICE detention.[9] Because many bail bond companies view immigration bonds as carrying a higher risk of non-payment, companies either charge extremely high interest rates on the bond contracts that immigrants pay or, as in the case of the company Libre by Nexus, secure the bond by putting an ankle monitor on the bond seeker.[10] For people who must give up their bodily autonomy in order to be released from physical detention by “allowing” a private company to strap an ankle monitor to their body, paying for this indignity comes at a substantial economic cost that many cannot afford: Libre by Nexus charges $420 per month for using the ankle monitor, which is in addition to the actual repayment costs of the bond amount.[11][12]

Protecting the Biometric Data of Minor Students

by Devin Forbush

 

Introduction

At the beginning of this month, as I was considering topics to comment on and analyze, a glaring issue close to home presented itself. In a letter written on January 24, Jamie Selfridge, Principal of Caribou High School, notified parents and guardians of students of an “exciting new development” to be implemented at the school.[1] What is this exciting new development, you may ask? It is the mass collection of biometric data from the student body.[2] For context, biometric data collection is a process of identifying an individual’s biological, physical, or behavioral characteristics.[3] This can include the collection of “fingerprints, facial scans, iris scans, palm prints, and hand geometry.”[4]

Presented to parents as a way to enhance accuracy, streamline processes, improve security, and encourage accountability, the identiMetrics software to be deployed at Caribou High School should not be glanced over lightly.[5] While information about Caribou High School’s plan was limited at the time, aside from the Maine Wire website post and the letter sent to parents and guardians, a brief scan of the identiMetrics website reveals a cost-effective yet in-depth data collection software that gathers over 2 million data points on students every day, while touting that safety and security measures are implemented throughout.[6] Rather than analyze the identiMetrics software as a whole, this brief post will highlight the legal concerns around biometric data collection and make clear that the software Caribou High School sought to implement takes an opt-out approach to collection and forfeits students’ privacy and sensitive data for the sake of educational efficiency.

Immediately, I started writing a brief blog post on this topic, recognizing the deep-seated privacy issues for minors. Yet the American Civil Liberties Union of Maine beat me to the punch: on February 13th, concerned about the planned collection, it set forth a public records request relating to the biometric data collection to be conducted at Caribou High School.[7] The next day, Caribou High School signaled its intention to abandon the plan.[8] While I was ecstatic at this news, all the work completed on this blog post appeared moot. Yet not all was lost, as upon further reflection, this topic surfaced important considerations. First, information privacy law and the issues related to it are happening in real time and changing day to day. Second, this topic presents an opportunity to inform individuals in our small state of the nonexistent protections for the biometric data of minors and adults alike. Third, this reflection can set forth proposals that all academic institutions should embrace before they consider collecting highly sensitive information from minor students.

This brief commentary proposes that (1) Academic institutions should not collect the biometric data of their students due to the gaps in legal protection within Federal and State Law; (2) If schools decide to proceed with biometric data collection, they must provide written notice to data subjects, parents, and legal guardians specifying (i) each biometric identifier being collected, (ii) the purpose of collection, (iii) the length of time that data will be used and stored, and (iv) the positive rights that parents, legal guardians, and data subjects maintain (e.g., their right to deletion, withdraw consent, object to processing, portability and access, etc.); and (3) Obtain explicit consent, recorded in written or electronic form, acquired in a free and transparent manner.

Blackstone’s Acquisition of Ancestry.com

By Zion Mercado

Blackstone is one of the largest investment firms in the world, boasting over $1 trillion in assets under management.[1] In December of 2020, Blackstone acquired Ancestry.com for a total enterprise value of $4.7 billion.[2] Ancestry is a genealogy service that compiles and stores DNA samples from customers and compares them to the DNA samples of individuals whose lineage can be traced back generations to certain parts of the world.[3] Within Ancestry’s privacy statement, Section 7 states that if Ancestry is acquired or transferred, it may share the personal information of its subscribers with the acquiring entity.[4] This provision was brought into controversy in Bridges v. Blackstone by a pair of plaintiffs representing a putative class consisting of anyone who had their DNA and personal information tested and compiled by Ancestry while residing in the State of Illinois.[5] The suit was brought under the Illinois Genetic Information Privacy Act (“GIPA”), which bars a person or company from “disclos[ing] the identity of any person upon whom a genetic test is performed or the results of a genetic test in a manner that permits identification of the subject of the test” without that person’s permission.[6] In addition to barring disclosure, GIPA may also bar compelled disclosure,[7] which would then create a cause of action under the act against third parties who compel an entity to disclose genetic information such as the information compiled by Ancestry. In Bridges, it is clear from the opinion that there was virtually no evidence that Blackstone in any way compelled Ancestry to disclose genetic information.[8] However, the language of the statute is unclear as to whether third parties who compel a holder of an individual’s genetic information can be held liable under GIPA. What does seem clear from the Seventh Circuit’s reading of the statute is that when an entity acquires another entity that holds sensitive personal information or genetic data, the acquisition itself is not proof of compelled disclosure within the meaning of the act.[9]

The exact language of GIPA that pertains to potential third-party liability states that “[n]o person may disclose or be compelled to disclose [genetic information].”[10] In Bridges, Blackstone contended that the recipient of protected information could not be held liable under GIPA even if it compelled disclosure.[11] The plaintiffs, in their complaint, could not cite any conduct on Blackstone’s behalf that would satisfy federal pleading standards for stating a claim that Blackstone compelled Ancestry to disclose information covered under GIPA.[12] This led the judge to set aside the broader issue surrounding GIPA’s language raised by Blackstone’s argument that an entity who receives genetic information cannot be held liable even if it compels disclosure of such information.[13] This issue is, in essence, one of statutory interpretation. Blackstone would have courts interpret the language reading “no person may . . . be compelled to disclose” as granting a cause of action only against a defendant who discloses genetic information, and only “because they were ‘compelled’ to do so.”[14] However, such an instance is already covered by the first part of the phrase, “no person may disclose.”[15] Notably, the Bridges court did not address Blackstone’s interpretation of the statute since the claim failed on the merits; the judge writing the opinion did, however, cite a lack of precedent on the matter.[16] I believe that the Illinois legislature did not intend to write a redundancy into the statute, and a more protective reading would extend liability to a third party who compels disclosure of genetic information.
The very meaning of the word “compel” is “to drive or urge forcefully or irresistibly” or “to cause to do or occur by overwhelming pressure.”[17] This is an act that we as people (and, hopefully, state legislators as well) would presumably want to limit, especially when what is being compelled is the disclosure of sensitive information, such as the results of a genetic test and the personal information that necessarily accompanies it. Again, the plaintiffs’ complaint proffered no evidence indicating that Blackstone in any way compelled disclosure of genetic information from Ancestry.[18] However, if a case were to arise in which such an occurrence did happen, we should hope that courts do not side with Blackstone’s interpretation. Although I agree that merely acquiring an entity that holds genetic or other sensitive information should not give rise to liability, and that a mere recipient of such information should not be held liable when it does not compel the holder’s disclosure, an entity, especially an acquiring entity, should not be shielded from liability when it seeks to pressure another entity into disclosing the personal information of individuals who have not consented to such disclosure.

[1] Blackstone’s Second Quarter 2023 Supplemental Financial Data, Blackstone (Jul. 20, 2023), at 16, https://s23.q4cdn.com/714267708/files/doc_financials/2023/q2/Blackstone2Q23 SupplementalFinancialData.pdf.

[2] Blackstone Completes Acquisition of Ancestry, Leading Online Family History Business, for $4.7 Billion, Blackstone (Dec. 4, 2020), https://www.blackstone.com/news/press/blackstone-completes-acquisition-of-ancestry-leading-online-family-history-business-for-4-7-billion/.

[3] Frequently Asked Questions, Ancestry.com, https://www.ancestry.com/c/dna/ancestry-dna-ethnicity-estimate-update?o_iid=110004&o_lid=110004&o_sch=Web+Property&_gl=1*ot1obs*_up*MQ..&gclid=5aadd61f 926315a4ec29b2e4c0d617e8&gclsrc=3p.ds#accordion-ev4Faq (last visited Sep. 8, 2023).

[4] Privacy Statement, Ancestry.com (Jan. 26, 2023), https://www.ancestry.com/c/legal/privacystatement.

[5] Amended Class Action Complaint at 8, Bridges v. Blackstone, No. 21-cv-1091-DWD, 2022 WL 2643968 (S.D. Ill. July 8, 2022).

[6] Ill. Comp. Stat. Ann. 410/30 (LexisNexis 2022).

[7] Id.

[8] See Bridges, 66 F.4th at 689-90.

[9] Id. (“we cannot plausibly infer that a run-of-the-mill corporate acquisition, without more alleged about that transaction, results in a compulsory disclosure”).

[10] 410/30 (LexisNexis 2022).

[11] Bridges, 66 F.4th at 689.

[12] Id. at 690.

[13] Id. at 689.

[14] Brief of the Defendant-Appellee at 41, Bridges v. Blackstone, 66 F.4th 687 (7th Cir. 2023) (No. 22-2486).

[15] 410/30 (LexisNexis 2022).

[16] Bridges, 66 F.4th at 689 (Scudder, C.J.) (explaining that “[t]he dearth of Illinois precedent examining GIPA makes this inquiry all the more challenging”).

[17] Compel, Merriam-Webster.com, https://www.merriam-webster.com/dictionary/compel (last visited Sep. 9, 2023).

[18] See supra note 11, at 690.

Disclosure of Teen’s Facebook Messages Should be a Red Flag for Us All

By Will Simpson, Class of 2025

Amidst the fallout of the Supreme Court’s decision on June 24, 2022, to overturn Roe v. Wade, the cornerstone abortion case of 1973, a privacy issue has surfaced: the extent to which digital data can be used against us to prosecute novel forms of criminalized behavior. To make matters worse, tech giants such as Facebook and Google—who collect and largely control this data—are legally obligated to assist governments with this invasive practice.

Why should we care? While the Fourth Amendment helps protect Americans against unreasonable searches and seizures by the government, private companies are not restricted from archiving our digital data. As a result, the details of our online lives are preserved for potential access by government warrants.

The Legal Footholds of Three States and the District of Columbia Against a Technological Goliath

Written by Hannah G. Babinski, Class of 2024 

I. Introduction

To no one’s surprise, Big Tech is in trouble yet again for attempting to overstep the boundaries of consumer privacy. From the notorious Facebook controversy involving Cambridge Analytica in 2018 to the most recent ballad of chronic misinformation stemming from Spotify’s perpetuation of Joe Rogan’s podcast, it seems that Big Tech’s complacency toward, or even complicity in, problematic practices connected to its online presence consistently leaves many Americans scratching their heads. Google is the latest tech conglomerate to stumble in the public arena.

This is not a novel position for the California-based tech giant, whose business model is heavily dependent on its prolific digital advertising, collection, surveillance, and auction of user data, including location tracking, which alone earned the company an estimated $150 billion in 2020.[1] In October 2020, the U.S. Justice Department and eleven states sued Google in federal court, alleging that Google abused its dominance over the search engine market—comprising 90% of web searches globally—and online advertising.[2] Then, in December of 2020, ten states separately sued Google in federal court on grounds of alleged anti-competitive conduct.[3] Undoubtedly, Google’s utter electronic control over the online market is equally as impressive as it is troubling—a sentiment echoed by the bombardment of state-instigated suits—but it pales in comparison to the basis of the most recent lawsuit.
