LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy
Steve Hammerton
I. Introduction
There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted one.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapon systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human-in-the-loop. A more exacting analysis of the language, however, reveals that it requires only “human judgment over the use of force,” which appears to cover broad parameters of lethality, such as when and where force will be deployed, but not against whom. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement of what “appropriate levels of human judgment” entails, targeting left entirely to the machine risks violating the two core jus in bello principles, distinction and proportionality.[6]
At the same time, LAWS may offer a comparative advantage over human trigger pullers. The Canadian think tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce unintended errors and deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by an increased use of autonomous and AI-assisted weaponry, though it is too early to say whether these weapons have identifiably reduced unintended civilian casualties.[9] Given the accelerating shift to LAWS and other AI-assisted weapons, an outright ban seems unrealistic. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and rigorous review, such as a risk-benefit analysis, that recognizes the inherent dangers of using LAWS while appreciating their potential for harm reduction.
II. Lethality with Distinction
A risk-benefit analysis for the development and use of LAWS will undoubtedly need to weigh jus in bello principles, beginning with distinction. That principle “obliges parties to a conflict to distinguish principally between the armed forces and the civilian population.”[10] In practice, this creates two obligations: “(1) [to] discriminat[e] in conducting attacks against the enemy; and (2) [to] distinguish[] a party’s own persons and objects.”[11] Revisiting DODD 3000.09, the directive gives some consideration to distinction, though it is limited to implementing safety features and conducting testing to safeguard against “the potential consequences of an unintended engagement.”[12] With some exemptions, DODD 3000.09 pursues this goal by subjecting autonomous weapon systems to a technical review prior to further development or implementation.[13] The described process does implicate distinction by directing the legal component of that review to the law of war program.[14] However, a more explicit requirement to weigh jus in bello principles may better identify the shortcomings and benefits of a LAWS with respect to distinction. A clear risk-benefit analysis weighing a LAWS’s ability to distinguish lawful targets may also elucidate the level of human judgment, if any, needed in real-world environments.
That being said, assessing the risks and benefits of a LAWS should necessarily contemplate whether a human-in-the-loop actually mitigates any risk to distinction. At times, it does not. Take the shootdown of an Iranian passenger jet by the USS Vincennes, for example. In that incident, the Vincennes, a United States Navy missile cruiser operating in the Arabian Gulf, mistook a passenger jet for an Iranian warplane on a course to strike the ship.[15] The mistake largely hinged on the crew’s failure to properly use the information provided by the ship’s advanced computer-assisted targeting system.[16] First, the targeting system showed that the aircraft was continuously “ascending while being tracked, but the crew informed the captain that the unidentified aircraft was descending into attack position.”[17] Second, the crew misidentified the aircraft’s transponder as one associated with military aircraft, even though the targeting system accurately identified the transmitted signal as one associated with civilian aircraft.[18] The combination of these human errors and the chaos of a combat situation ultimately led the ship’s captain to make the tragic, but avoidable, decision to engage a passenger jet full of innocent civilians. In the Vincennes incident, distinction was adequately upheld by a computer but ultimately sidestepped by the humans in the loop.
The Vincennes example is not offered to advocate removing humans from the loop. Instead, the benefits of integrating humans into the loop should be weighed against the risks. In its current form, DODD 3000.09 does not absolutely require a human operator in all contexts of LAWS development or use, and future iterations should not either. While there is an interest in preserving the perceived dignity or morality of keeping the lethal process of warfare entirely under human control,[19] that interest should not come at the cost of distinction, safety, or risk mitigation. A risk-benefit analysis for LAWS could help decision-makers work through whether a fully autonomous system is safer than one that requires human judgment. Ultimately, the relationship between LAWS and humans should be reciprocal: humans should seek to improve the safety and efficacy of LAWS, and LAWS should reduce human bias and improve distinction.
III. Algorithmic Proportionality
An analysis of LAWS would also be incomplete without a review of proportionality. Somewhat nebulously defined, proportionality is described as a “principle that even where one is justified in acting, one must not act in a way that is unreasonable or excessive.”[20] This raises the question: will a fully autonomous system be able to determine whether its own actions are both justified and not unreasonable or excessive? The immediate answer is no. However, some scholars have suggested using the Collateral Damage Estimation Methodology (“CDEM”) to help LAWS algorithmically predict the potential for collateral damage.[21] CDEM is a five-stage analytical framework “employed to assess collateral damage based on considerations such as the area of effect of different weapon types, the demographics in the intended attack area and the impact of timing on the likely level of civilian casualties.”[22] Performed prior to an engagement, “CDEM provides a numeric estimate of the number of civilians who may be injured or killed if the attack goes forward.”[23] While AI-assisted systems may not be capable of reason, they can make predictions and identify patterns with a high degree of accuracy when provided with quality data.[24] Absent human reasoning, incorporating predictive frameworks like CDEM into the design and implementation process could mitigate risks and improve the compliance of LAWS with ethical and legal standards.
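To make the idea concrete, the following sketch shows how a CDEM-style numeric estimate could serve as an algorithmic gate on an engagement decision. It is a minimal, hypothetical illustration: the effect radius, population density, timing factor, and casualty threshold are assumed inputs, not values drawn from the actual methodology or from DODD 3000.09.

```python
import math
from dataclasses import dataclass

@dataclass
class EngagementContext:
    weapon_effect_radius_m: float      # illustrative area-of-effect input
    civilian_density_per_km2: float    # illustrative demographic input
    time_of_day_factor: float          # illustrative timing modifier (e.g., 0.3 at night)

def estimate_civilian_casualties(ctx: EngagementContext) -> float:
    """Rough CDEM-style numeric estimate: expected civilians within the
    weapon's area of effect, scaled by a timing factor. Hypothetical model."""
    area_km2 = math.pi * (ctx.weapon_effect_radius_m / 1000.0) ** 2
    return area_km2 * ctx.civilian_density_per_km2 * ctx.time_of_day_factor

def engagement_gate(ctx: EngagementContext, threshold: float) -> str:
    """Permit engagement only when the pre-engagement estimate falls below a
    commander-set threshold; otherwise defer to human judgment."""
    estimate = estimate_civilian_casualties(ctx)
    return "engage" if estimate < threshold else "defer to human review"

# Example: a nighttime strike in a sparsely populated area.
ctx = EngagementContext(weapon_effect_radius_m=50,
                        civilian_density_per_km2=200,
                        time_of_day_factor=0.3)
print(engagement_gate(ctx, threshold=1.0))
```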
Rather than remaining an amorphous principle, proportionality could become a key metric in the regulation of LAWS. In fact, the concept of proportionality is perhaps the strongest argument for a risk-benefit analysis, especially in connection with the implementation of LAWS in a combat setting.[25] A review of a LAWS’s ability to maintain proportionality may resemble a meta-analysis, defined as “a quantitative, formal, epidemiological study design used to systematically assess previous research studies to derive conclusions about that body of research.”[26] For instance, a senior reviewer could draw on historic CDEM data, collateral damage data from past armed conflicts, and current testing data from the LAWS under review. By treating proportionality as a measurable standard rather than a vague principle, regulators and reviewers can systematically assess the risks and benefits of LAWS, ensuring that their development and deployment align with legal norms in combat settings.
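A minimal sketch of what treating proportionality as a measurable standard might look like follows: it aggregates hypothetical engagement records pairing pre-engagement CDEM-style estimates with observed collateral damage and reports simple summary metrics a reviewer could weigh. The record fields, tolerance value, and metrics are illustrative assumptions, not a methodology prescribed by DODD 3000.09 or CDEM.

```python
from dataclasses import dataclass

@dataclass
class EngagementRecord:
    predicted_casualties: float   # pre-engagement CDEM-style estimate
    observed_casualties: float    # collateral damage recorded after the fact

def review_proportionality(records: list[EngagementRecord], tolerance: float = 1.0) -> dict:
    """Aggregate past engagements into simple review metrics: mean prediction
    error and the share of engagements whose outcomes stayed within a set
    tolerance of the estimate. Illustrative only."""
    errors = [r.observed_casualties - r.predicted_casualties for r in records]
    within = sum(1 for e in errors if e <= tolerance)
    return {
        "engagements_reviewed": len(records),
        "mean_error": sum(errors) / len(records),
        "share_within_tolerance": within / len(records),
    }

# Example: three hypothetical engagements drawn from testing and past conflicts.
history = [
    EngagementRecord(predicted_casualties=0.5, observed_casualties=0.0),
    EngagementRecord(predicted_casualties=1.0, observed_casualties=2.5),
    EngagementRecord(predicted_casualties=0.2, observed_casualties=0.0),
]
print(review_proportionality(history))
```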
IV. A Just Risk?
The pervading concern surrounding risk-benefit analyses is whether and how the value of a human life can be quantified and adequately weighed.[27] This question is especially relevant to the pursuit and regulation of LAWS. Drawing from the influential Declaration of Helsinki, which has guided many regulations in the domain of biomedicine, regulators of LAWS “should carefully consider how the benefits, risks, and burdens are distributed”[28] amongst LAWS deployers, enemy combatants, and innocent civilians. The use of LAWS is a high-stakes gambit that stands to preserve life but also to take it. With that in mind, the evaluation of LAWS should be as robust and explicit as the analyses applied in other endeavors that risk human suffering, like biomedical research. To approve human subject research, risks to research subjects must be “reasonable in relation to anticipated benefits”[29] and must be “minimized . . . [b]y using procedures which are consistent with sound research design, and which do not unnecessarily expose subjects to risk.”[30] Similar requirements can be applied to LAWS simply by replacing the word “subject” with “civilian” or “non-combatant.” Doing so may help human reviewers better understand the dangers posed and benefits offered by LAWS to parties not engaged in combat. In tandem with adequate testing, it should also inform the level of human judgment necessitated by individual systems.
Achieving the policy set forth by DODD 3000.09 requires a nuanced approach that ensures unjustifiable risks are minimized while potential benefits are maximized. For LAWS, weighing the benefits, risks, and burdens of each party to a potential engagement requires careful consideration of distinction and proportionality. Further, the application of a risk-benefit analysis to autonomous weapons will likely require carefully crafted categories of risk. There are situations in which autonomous weapons are unjustifiably risky and others in which they are universally beneficial. For example, LAWS that do not outperform their human counterparts in distinguishing combatants from non-combatants carry an unjustifiable risk. In contrast, the benefits of LAWS that significantly reduce the likelihood of excessive destruction are likely to outweigh their associated risks.
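One illustrative way a reviewer might operationalize these categories is to compare a system’s tested misidentification rate against a human baseline and against its expected reduction in excessive destruction. The figures, labels, and thresholds in the sketch below are hypothetical assumptions offered only to show the shape of such a comparison, not criteria found in DODD 3000.09.

```python
def categorize_risk(laws_misid_rate: float,
                    human_misid_rate: float,
                    expected_damage_reduction: float) -> str:
    """Assign an illustrative risk category: a system that cannot beat its
    human counterpart at distinguishing combatants from non-combatants is
    treated as unjustifiably risky, while one that also meaningfully reduces
    excessive destruction is treated as presumptively beneficial."""
    if laws_misid_rate >= human_misid_rate:
        return "unjustifiable risk"
    if expected_damage_reduction >= 0.5:   # hypothetical 50% reduction threshold
        return "benefit likely outweighs risk"
    return "requires further review and human judgment"

# Example: a system that misidentifies targets half as often as human operators
# and is expected to cut excessive destruction by 60% in testing.
print(categorize_risk(laws_misid_rate=0.02, human_misid_rate=0.04,
                      expected_damage_reduction=0.6))
```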
V. Conclusion
This discussion is not an endorsement of LAWS or other autonomous weapons. Instead, it is a recognition that AI will play an increasing role in the future of warfare. As it stands, global superpowers such as China, Russia, and the United States oppose bans on LAWS and are actively pursuing technological advancements to support their autonomous capabilities.[31] Autonomy is inevitable. DODD 3000.09 has laid the foundation for the regulation of LAWS, though it is incomplete. Augmenting DODD 3000.09 with a clear and robust risk-benefit analysis informed by jus in bello principles may help ensure that LAWS are developed and used in ways that preserve humanity in its gravest moments.
References
[1] Dep’t of Def., Dir. 3000.09, Autonomy in Weapon Systems (Jan. 25, 2023) [hereinafter DODD 3000.09].
[2] Id. at 21.
[3] Kelley M. Sayler, International Discussions Concerning Lethal Autonomous Weapon Systems, CRS (Feb. 25, 2025), https://crsreports.congress.gov/product/pdf/IF/IF11294.
[4] DODD 3000.09, supra note 1, at 3.
[5] See id. at 15-18.
[6] Alexander Moseley, Just War Theory, Internet Encyclopedia of Philosophy, https://iep.utm.edu/justwar/ (last visited Mar. 14, 2025) (“The rules of just conduct within war fall under the two broad principles of discrimination and proportionality.”). It is worth briefly noting that in the context of distinction, to discriminate means to distinguish between combatants and non-combatants.
[7] Kyle Hiebert, Are Lethal Autonomous Weapons Inevitable? It Appears So, Ctr. for Int’l Governance Innovation (Jan. 27, 2022), https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-it-appears-so.
[8] See John A. Tirpak, The State of Precision Engagement, Air Force Mag. (Mar. 1, 2000), https://www.airandspaceforces.com/article/0300precision/ (“[O]nly 20 of the approximately 23,000 [precision-guided] munitions expended by NATO in the 1999 Balkan air operation caused collateral damage or civilian casualties. Some others were deliberately steered off course to avoid harming civilians who had not been seen in the target area until the last moment.”).
[9] See, e.g., Samuel Bendett & David Kirichenko, Battlefield Drones and The Accelerating Autonomous Arms Race in Ukraine, Mod. War Inst. (Jan. 10, 2025), https://mwi.westpoint.edu/battlefield-drones-and-the-accelerating-autonomous-arms-race-in-ukraine/; Noah Sylvia, The Israel Defense Forces’ Use of AI in Gaza: A Case of Misplaced Purpose, Royal United Servs. Inst. (July 4, 2024), https://www.rusi.org/explore-our-research/publications/commentary/israel-defense-forces-use-ai-gaza-case-misplaced-purpose.
[10] Off. of Gen. Couns., U.S. Dep’t of Def., Department of Defense Law of War Manual § 2.5 (July 2023) [hereinafter Law of War Manual].
[11] Id.
[12] DODD 3000.09, supra note 1, at 4; see also id. at 23 (defining an “unintended engagement” as “[t]he use of force against persons or objects that commanders or operators did not intend to be the targets of U.S. military operations, including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE, and commander’s intent”).
[13] Id. at 15-16 (“If the weapon system in question is to be developed and then fielded by DoD, it will need to undergo both reviews and receive approvals. A review is not needed if the weapon system is covered by a previous approval for formal development or fielding.”).
[14] Id. at 17.
[15] Colum Lynch, Anatomy of an Accidental Shootdown, Foreign Pol’y (Jan. 17, 2020), https://foreignpolicy.com/2020/01/17/accidental-shootdown-iran-united-states-ukraine/.
[16] Anthony Tingle, The Human-Machine Team Failed Vincennes, U.S. Naval Inst. (July 2018), https://www.usni.org/magazines/proceedings/2018/july/human-machine-team-failed-vincennes.
[17] Id.
[18] Id.
[19] See Christof Heyns, Rep. Special Rapporteur on extrajudicial, summary or arbitrary executions, U.N. Doc. A/69/265 (describing the use of unmanned systems in lethal contexts and law enforcement contexts as “an affront to human dignity”).
[20] Law of War Manual, supra note 10, at § 2.4.
[21] See Elliot Winter, The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law, J. Conflict & Sec. L. 1, 11 (Jan. 21, 2022).
[22] Id. at 16.
[23] Gennaro Balzano, Understanding Collateral Damage in Everyday Life from Military Operations, NATO (Sept. 27, 2024), https://www.nrdc-ita.nato.int/newsroom/insights/understanding-collateral-damage-in-everyday-life-from-military-operations.
[24] See Ajay Agrawal et al., Generative AI Is Still Just a Prediction Machine, Harv. Bus. Rev. (Nov. 18, 2024), https://hbr.org/2024/11/generative-ai-is-still-just-a-prediction-machine (“The efficacy of predictions is contingent on the underlying data. The quality and quantity of data significantly impact the accuracy of AI predictions.”).
[25] See Off. of the Chief of Naval Operations, U.S. Dep’t of the Navy, Naval Warfare Pub. No. 1-14M, The Commander’s Handbook on the Law of Naval Operations 5.3.3 (“The principle of proportionality requires a commander to evaluate whether the expected injury to civilians and damage to civilian objects resulting from an attack would be excessive in relation to the concrete and direct military advantage anticipated from the attack.”).
[26] A.B. Haidich, Meta-analysis in Medical Research, Hippokratia (Dec. 14, 2010), https://pmc.ncbi.nlm.nih.gov/articles/PMC3049418/.
[27] See W. Kip Viscusi, The Value of Life in Legal Contexts: Survey and Critique, 2 Am. L. and Econs. Rev. 195, 196-201 (describing approaches to assigning numerical value to human life).
[28] WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Participants, World Med. Ass’n (Oct. 2024), https://www.wma.net/policies-post/wma-declaration-of-helsinki/.
[29] 21 C.F.R. § 56.111(a)(2) (2024).
[30] Id. § 56.111(a)(1).
[31] Elliot Winter, supra note 21, at 5-6.