LAWS Need Laws: Distinction and Proportionality in the Age of Autonomy
Steve Hammerton
I. Introduction
There’s a lethal autonomous elephant in the room, and it’s only minimally regulated by DoD Directive 3000.09 (“DODD 3000.09”).[1] Under that directive, lethal autonomous weapon systems (LAWS) are defined as “weapon system[s] that, once activated, can select and engage targets without further intervention by an operator.”[2] In contrast to other nations that have called for an outright ban on such systems, the United States has resisted.[3] Instead, the Department of Defense (“DoD”) has required that LAWS, like all other weapon systems, “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”[4] A quick read of this policy would suggest that it requires a human in the loop. A more exacting analysis of the language, however, reveals that it requires only “human judgment over the use of force,” which appears to refer to broad parameters of lethality, such as when and where a system will be deployed, but not against whom. The directive also refers to an inchoate review process that does not spell out a clear framework for assessing the efficacy and safety of LAWS.[5] Without a clearer statement of what “appropriate levels of human judgment” entails, undiscriminating targeting by LAWS conflicts with the two core jus in bello principles: distinction and proportionality.[6]
At the same time, LAWS may offer a comparative advantage over human trigger pullers. The Canadian think tank Centre for International Governance Innovation suggests that LAWS “may be able to assess a target’s legitimacy and make decisions faster and with more accuracy and objectivity than fallible human actors could.”[7] Simply put, LAWS could reduce both unintended errors and deliberate unlawful killings. Indeed, technology-assisted precision weapons have already reduced collateral damage in armed conflicts.[8] Recent conflicts have been marked by increased use of autonomous and AI-assisted weaponry, though it is too early to say whether these weapons have measurably reduced unintended civilian casualties.[9] Given the accelerating shift toward LAWS and other AI-assisted weapons, an outright ban seems unrealistic. Consequently, the United States and its international partners should seek to preserve distinction and proportionality through a meaningful and rigorous review, such as a risk-benefit analysis, that recognizes the inherent dangers of LAWS while appreciating their potential for harm reduction.