Spoiled for Choice: AI Regulation Possibilities

William O’Reilly

 

I. Introduction

Americans want innovation, and they believe advancing AI benefits everyone.[1] One way to encourage this is to roll back regulation.[2] Unfortunately, those innovations come hand in hand with harms likely to result from the inappropriate use of personal and proprietary data and from AI decision-making.[3] One option is to ignore this potential harm and halt regulation, allowing personal information to spread freely.[4] That option is not in the country’s best interest, because the U.S. is already losing the innovation race in some respects, and innovation can still occur under heavy regulation. Virginia is the latest state to pursue the “no regulation” strategy, and it offers a useful microcosm of the challenges and advantages of that approach.[5] Virginia’s absence of regulation sits on a spectrum of legislation that demonstrates the options states have to protect both rights and innovation. As this article discusses, curbing AI regulation of companies will not advance innovation enough to justify the civil rights violations perpetuated by current AI use.

II. The AI Policy Approach Has Changed

AI policy took the spotlight this past year because of a few different events. The Governor of Virginia vetoed a bill that would have regulated AI within the state because it would “establish a burdensome artificial intelligence regulatory framework.”[6] He believed that Virginia Executive Order 30 (2024) was sufficient to safeguard against AI harms.[7] His goal with the veto was to attract the interest of various companies, “including AI innovators . . . .”[8] The Governor would prefer to rely on existing consumer protections to protect consumer privacy and other rights.[9] This sentiment is in line with the federal administration’s goals as well.[10]

The veto came after President Trump’s announcement, on the second day of his presidency, that AI would be a top priority for the United States.[11] Between the announcement and the veto came an important event: China’s DeepSeek large language model (LLM), a powerful AI that takes natural language instructions and generates natural language responses, disrupted U.S. stock markets because the model performs roughly on par with premier U.S. models yet was much less expensive to create.[12] DeepSeek built a competitive model for about 5% of the cost.[13] The episode sheds light on the capabilities of certain types of government and their ability to produce technological innovation.

There is a political science theory that authoritarian regimes have an advantage in the technology race for several reasons.[14] Democracy encourages innovation, but authoritarian regimes can reallocate resources more quickly and are not constrained by ethical and monetary costs.[15] The theory points out that authoritarian countries, lacking protections against government use of personal data, can exploit that data to cement control and develop tools that perpetuate the cycle more efficiently.[16] Simply put, no matter how deregulated the United States becomes, it will never match the resource mobilization and data exploitation that authoritarian countries achieve. Thus, the marginal innovation gained from not regulating existing technologies is small compared to the potential harm to consumers from civil rights violations.

Importantly, DeepSeek did not make a new kind of AI; it simply made the model more efficient.[17] Nor did DeepSeek create a new technology like quantum computing. Microsoft did, in the U.S.[18] NVIDIA, likewise, has chosen to move more of its production to the United States.[19] New technologies and techniques can still be discovered under U.S. regulation, and that is the kind of innovation the U.S. should try to preserve, rather than ethically dubious efficiency improvements. What that looks like will depend on the state, and there are already useful examples at different levels of regulation.[20]

III. Details: What Regulation Looks Like

Regulation can look different depending on the goals and needs of the state or country. Virginia regulates only the government’s use of AI and leaves the private sector unregulated.[21] Utah has some consumer protections but generally limits its regulation of the private sector.[22] If Virginia’s bill had passed, many AI start-ups would have felt its private-sector regulations.[23] Finally, the EU demonstrates an approach that puts rights ahead of AI development and deployment.[24] By comparing these approaches, legislators can find a model that reflects their constituencies’ preferred balance between innovation and regulation.

Virginia’s AI bill did not become law because the Governor believed it stifled innovation.[25] As a result, only the state government is limited in its use of AI.[26] The executive order the Governor relies on creates standards that the state government must follow in its use of AI.[27] The order acknowledges the role and harm of AI use in government agencies, educational institutions, and law enforcement.[28] It also establishes a task force to review government and academic institutions to ensure they are using AI ethically.[29] States may want to follow this model when they are concerned about the danger that AI-driven government decisions pose to rights, are not enticed by the marginal cost savings of greater government efficiency, and are likewise unconcerned with corporate harms. A regulated government alongside an unregulated private sector leaves an open space for AI developers and deployers to create and sell products. The next level of regulation dips its toe into regulating the private sector.

Utah is a business-focused state.[30] Its privacy law sought to be business-friendly by setting a narrow applicability threshold and excluding certain personal data.[31] Utah has also passed an AI policy act that does more than govern the actions of government entities.[32] The act amends consumer protection law to impose liability on an entity that hides that a decision or communication was made by AI.[33] It also creates an Artificial Intelligence Office whose mandate is to consult with business stakeholders and establish an innovation-centered program to research AI and its risks.[34] The law provides a single rule: tell people when you are using AI.[35] The remainder of the text is a promise to consult businesses before taking any further steps.[36] This is a first step toward consumer protection, establishing some baseline safeguards. The proposed, and now failed, Virginia bill added more regulation with fewer concessions to business.

Virginia’s proposed bill was one step closer to the strict EU framework, focusing on industry regulation rather than consumer protection. The proposed AI bill limited its scope to AI on the market.[37] The bill designates as high-risk any AI that is “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”[38] It then immediately lists broad exceptions to this categorization.[39] These exceptions matter because the bulk of the bill’s regulations govern the actions of developers and deployers of high-risk AI.[40] The bill divides enterprises and individuals into deployers and developers and applies slightly different standards to each group.[41] These standards impose a more burdensome and more direct duty than the notice requirement in the Utah law, but they do not require constant reporting or impose limitations on all AI systems or all parties interacting with AI.[42] They are still less stringent than the EU AI Act, on which the bill is modeled.

The EU AI Act does not regulate specific technology; instead it takes a risk-based approach, which makes it more sweeping because it applies to any entity that could create a risk, not just developers and deployers.[43] The law applies to providers, deployers, importers, and distributors of AI.[44] The Act regulates every function of AI, not just high-risk uses as the proposed Virginia law did.[45] Further, these regulations apply to each stage of the supply chain.[46] Such heavy regulation has not completely stunted innovation, but it does create costs. There are AI-based start-ups in Europe.[47] They came into existence or found success both before and after the EU AI Act was passed.[48] In addition to the AI Act, these start-ups also sprouted within the confines of the GDPR, which has been in effect since 2018.[49] The EU AI Act governs these enterprises and provides regulations for deployers, developers, and other groups.[50] Heavy regulation does not completely stunt innovation and technology, but the EU’s regulations are much stronger than Virginia’s or Utah’s and may not match the preferences of U.S. citizens.[51]

IV. Conclusion

There are several AI bills at various stages of the legislative process in the United States.[52] California, Utah, and Colorado are forerunners as the only states with signed bills.[53] These states were also forerunners in privacy legislation.[54] The remaining states have options. Without taking an approach, the country will linger in uncertainty, so states should take up the banner of one of these versions of regulation. As the U.S. struggles to keep pace in international markets, it should not forget the rights of its citizens. The variety of approaches gives every state an option that protects both rights and innovation. Because both interests can be accommodated, each state should adopt its preferred form.

 

References

[1] See, e.g., Letter from the Governor of Virginia, vetoing H.B. 2094, 2025 Gen. Assemb., Reg. Sess. (Va. 2025) (Mar. 24, 2025) (recorded in the 2024 Legis. Bill Hist.); Rob Garver, Trump Signals Aggressive Stance as US Races China in AI Development, Voice of America (Jan. 22, 2025, 7:11 PM), https://www.voanews.com/a/trump-signals-aggressive-stance-as-us-races-china-in-ai-development/7947068.html.

[2] Letter from the Governor of Virginia, vetoing H.B. 2094, 2025 Gen. Assemb., Reg. Sess. (Va. 2025) (Mar. 24, 2025) (recorded in the 2024 Legis. Bill Hist.).

[3] Federal Trade Commission, Combatting Online Harms Through Innovation, Report to Congress, 2 (June 16, 2022) (expounding on AI’s role in perpetuating or exacerbating harms that exist in everyday life).

[4] Letter from the Governor of Virginia, vetoing H.B. 2094, 2025 Gen. Assemb., Reg. Sess. (Va. 2025) (Mar. 24, 2025) (recorded in the 2024 Legis. Bill Hist.).

[5] Id.

[6] Id.

[7] Id.; see also Va. Exec. Order No. 30 (Jan. 18, 2024).

[8] Letter from the Governor of Virginia, vetoing H.B. 2094, 2025 Gen. Assemb., Reg. Sess. § 59.1-607 (Va. 2025) (Mar. 24, 2025) (recorded in the 2024 Legis. Bill Hist.).

[9] Id.

[10] Garver, supra note 1.

[11] Id.

[12] Simon Thorne, Putting DeepSeek to the Test: How its Performance Compares Against Other AI Tools, The Conversation (Feb. 4, 2025, 1:33 PM), https://theconversation.com/putting-deepseek-to-the-test-how-its-performance-compares-against-other-ai-tools-248368.

[13] Garver, supra note 1.

[14] See Samuele Domini, Will Authoritarian Regimes Lead in the Technological Race, Scientific American, 162, 161-66 (2017).

[15] Id.

[16] Id. at 163.

[17] Thorne, supra note 12.

[18] Chetan Nayak, Microsoft Unveils Majorana 1, the World’s First Quantum Processor Powered by Topological Qubits, Microsoft (Feb. 19, 2025), https://azure.microsoft.com/en-us/blog/quantum/2025/02/19/microsoft-unveils-majorana-1-the-worlds-first-quantum-processor-powered-by-topological-qubits/.

[19] NVIDIA Newsroom, NVIDIA to Manufacture American-Made AI Supercomputers in US for First Time, NVIDIA (Apr. 14, 2025), https://blogs.nvidia.com/blog/nvidia-manufacture-american-made-ai-supercomputers-us/.

[20] See 2024 Legis. Bill Hist. VA H.B. 2094; 2025 Ut. H.B. 452 (LexisNexis 2024); see also 2024 O.J. (L 1689).

[21] Letter from the Governor of Virginia, vetoing H.B. 2094, 2025 Gen. Assemb., Reg. Sess. (Va. 2025) (Mar. 24, 2025) (recorded in the 2024 Legis. Bill Hist.).

[22] S.B. 149 Gen. Sess. (Ut. 2024).

[23] See H.B. 2094, 2025 Gen. Assemb., Reg. Sess. § 59.1-607 (Va. 2025).

[24] 2024 O.J. (L 1689).

[25] Letter from the Governor of Virginia, vetoing H.B. 2094, 2025 Gen. Assemb., Reg. Sess. (Va. 2025) (Mar. 24, 2025) (recorded in the 2024 Legis. Bill Hist.).

[26] Va. Exec. Order No. 30 (Jan. 18, 2024).

[27] Id. at 1.

[28] Id. at 2-3.

[29] Id.

[30] Sonia Coverig, Governor Cox Unveils “Built Here” Agenda at Silicon Slopes Summit 2025, TechBuzz News (Jan. 20, 2025), https://www.techbuzznews.com/governor-cox-unveils-built-here-agenda-at-silicon-slopes-summit-2025/.

[31] UCPA: The Most Business-Friendly Privacy Law, Planet9, https://planet9security.com/ucpa-the-most-business-friendly-privacy-law/ (last visited Apr. 15, 2025). See generally Utah Code Ann. § 13-61-304(5) (LexisNexis 2023).

[32] S.B. 149 Gen. Sess. (Ut. 2024).

[33] Utah Code Ann. § 13-2-12(3) (LexisNexis 2024).

[34] Utah Code Ann. §§ 13-70-201, -202 (LexisNexis 2024), amended by S.B. 149, 2024 Leg., Gen. Sess. (Ut. 2024).

[35] Utah Code Ann. § 13-2-12(3) (LexisNexis 2024).

[36] Utah Code Ann. §§ 13-70-201, -202 (LexisNexis 2024), amended by S.B. 149, 2024 Leg., Gen. Sess. (Ut. 2024).

[37] H.B. 2094, 2025 Gen. Assemb., Reg. Sess. § 59.1-607 (Va. 2025).

[38] Id.

[39] Id.

[40] H.B. 2094, 2025 Gen. Assemb., Reg. Sess. §§ 59.1-608 and 59.1-609 (Va. 2025).

[41] Id.

[42] Id.

[43] See 2024 O.J. (L 1689) Art. 2(2).

[44] Id. at Art. 2(1).

[45] Compare 2024 O.J. (L 1689) Art. 5(1), with H.B. 2094, 2025 Gen. Assemb., Reg. Sess. (Va. 2025).

[46] 2024 O.J. (L 1689) Art. 2(3).

[47] See Tamara Djurickovic, The Top European AI Companies of This Year, Tech.eu (Dec. 27, 2023), https://tech.eu/2023/12/27/the-top-10-european-ai-companies-of-2023/; see also Kitty Wheeler, Why the EU’s AI Investment Plan is a Global Game-Changer, Technology Magazine (Feb. 19, 2025), https://technologymagazine.com/articles/european-companies-launch-major-ai-initiative.

[48] See Djurickovic, supra note 47; see also Wheeler, supra note 47; 2024 O.J. (L 1689) preamble.

[49] 2016 O.J. (L 119) Art. 99(2) [hereinafter GDPR].

[50] Compare 2024 O.J. (L 1689) Art. 2(3), (4), (7), Art. 6, with H.B. 2094, 2025 Gen. Assemb., Reg. Sess. §§ 59.1-607, -608, -609 (Va. 2025).

[51] See Garver, supra note 1.

[52] US State AI Governance Legislation Tracker 2025, IAPP, https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker/ (last visited Apr. 13, 2025).

[53] Id.

[54] Id.