CCW Report, Vol. 3, No. 4

Laws for LAWS


Ray Acheson
14 April 2016 

Wednesday’s discussions focused on potential issues surrounding the compliance of lethal autonomous weapon systems (LAWS) with international law and accountability for violations of that law. To this end, participants considered weapon review processes and accountability frameworks. The legal challenges posed by LAWS, however, are only one part of the problem. The moral and ethical challenges posed by delegating decisions over life and death to a machine, by taking human beings out of the loop on the selection of targets and individual attacks, are equally important. All of these challenges point to the need for meaningful human control and a prohibition on LAWS.

Law

As Switzerland’s paper notes, IHL and human rights law are relevant for governing the development and use of LAWS, while international criminal law governs individual criminal responsibility for violations of these laws. A number of rules under each are relevant for LAWS; during Wednesday’s discussions, states and civil society seemed most concerned with LAWS’ (in)ability to comply with the IHL principles of precaution, distinction, and proportionality. Questions arise about the ability to programme a machine to respect international law, and about the ability of such a programme to adapt to complex, changing battlefield environments, which require ongoing tactical decisions throughout the course of operations.

Accountability

Panellists, states, and civil society also raised concerns about accountability. A fundamental principle of international law is that an individual should be held responsible for actions that violate the law. LAWS, Bonnie Docherty of Human Rights Watch argued, threaten to undermine that fundamental norm. “They would have the potential to create an accountability gap because under most circumstances a human could not be found legally responsible for unlawful harm caused by the weapons. Operators, programmers, and manufacturers would all escape liability.”

Panellist Neha Jain of the University of Minnesota examined the application of criminal law to LAWS. Control in criminal law is the ability to execute or obstruct the commission of an offence according to one’s will, and it must exist at the time of the act. LAWS raise significant questions about how anyone could exercise such control over an autonomous weapon operating without meaningful human control over targeting and attacks.

The United Kingdom argued that accountability would be no different for LAWS than for any other weapon system, suggesting that the person who deploys the weapon—the commander—would be responsible for its actions. Panellist Roberta Arnold argued that a human “in the loop,” such as a programmer, deployer, or operator, can be held accountable for the misuse of LAWS. She noted that under criminal law, commanders and other superiors are criminally responsible for war crimes committed by their subordinates. But as Dr. Docherty pointed out,

Command responsibility holds commanding officers indirectly responsible for subordinates’ actions if they knew or should have known their subordinates committed or were going to commit a crime and they failed to prevent the crime or punish the subordinates. If a robot acted in an unforeseeable way after being deployed, a commander could not have known in advance what it would do. It would be unfair and legally challenging to hold a commander responsible in such a situation. At the same time, a commander could not be held liable for failing to punish an unpunishable machine.

Robin Geiß of the University of Glasgow argued that even if individual accountability cannot necessarily be attributed, the state deploying the autonomous weapon would be responsible for its unlawful use if it were used by a state actor. WILPF would argue that the state and the individuals involved in the development and deployment of LAWS have a responsibility to act with due diligence to prevent harm. Arguably the most effective due diligence is preventive, such as a prohibition on weapon systems operating without meaningful human control.

Human control 

Panellist Kimberley Trapp of University College London noted that human control over weapons is clearly required given all the complexity of a battlefield. Switzerland’s paper seems to agree, arguing that “given the current state of robotics and artificial intelligence, it is difficult today to conceive of an AWS that would be capable of reliably operating in full compliance with all the obligations arising from existing IHL without any human control in the use of force, notably in the targeting cycle.”

This position is consistent with that of many other delegations speaking at the CCW, as well as many civil society organisations, which have emphasised the need for meaningful human control over the critical functions of weapon systems. For the Campaign to Stop Killer Robots and some states, such critical functions include the selection of targets and each individual attack.

As the Campaign laid out at a side event on Wednesday, the requirement of meaningful human control over critical functions provides the basis for prohibiting weapon systems that do not have a human being involved in target selection and engagement. The requirement could also serve as a test for article 36 national-level weapon reviews of LAWS, as suggested by the Netherlands.

Weapon reviews

Such reviews, while welcome in general, are not sufficient on their own to deal with LAWS. It was encouraging to see many presentations from delegations on their weapon review practices and to hear that others are currently assessing theirs. However, as speakers at the side event noted, this approach is not enough to address the challenges of LAWS: the majority of countries do not undertake weapon reviews; such reviews are not consistent in approach or content among states, nor do they have any international oversight; there is no common definition of weapons, means, and methods of warfare; such reviews would miss ethical considerations, which are imperative when it comes to LAWS; they do not appear to take into account how LAWS would affect resort to the use of force; and they cannot guarantee that a system will operate the same way in all circumstances, which might undermine the predictability requirement.

Peter Asaro of the International Committee for Robot Arms Control (ICRAC) noted that the law assumes a human agent will take responsibility for decision-making during unpredictable situations on the battlefield. He questioned how such decisions could be built into the machine in advance by a programmer. There is a disconnect between what takes place during a legal weapon review and what might happen on the battlefield from a technical standpoint. 

Prohibiting LAWS

Once again, requiring meaningful human control over the critical functions of a weapon system appears to be the most straightforward approach. It is the only way to ensure that a human being is making the decisions about target selection and each individual attack. It is the only way to meet the legal imperative to hold an individual responsible for unlawful acts. It is the only way to uphold ethics and morality by preventing fully mechanised violence in which machines take human life without any human intervention. Prohibiting weapon systems that would operate without meaningful human control over target selection and attacks is imperative for legal and moral reasons; this is becoming ever clearer as discussions continue at the CCW.
