
16 May 2014, Vol. 1, No. 4

Editorial: Meaningful deliberation
Ray Acheson | Reaching Critical Will of WILPF



As the legal discussion continued on Thursday, a number of delegations and civil society groups contested the relationship between autonomous weapons and the law as set out by the panelists on Wednesday. All three legal scholars had argued that existing international law is sufficient to regulate the use of autonomous weapons and that compliance with the law would be better with such weapons than with existing weapons operated by human soldiers. Yet as several other experts have since pointed out, neither of these assertions is certain; both rely largely on assumptions about possible future technological developments. There is a growing recognition that meaningful human control must be retained over any decision to undertake an attack.

Experts not represented on the legal panel strongly disputed the idea that the use of an autonomous weapon is less likely to result in the commission of war crimes such as rape. This justification for autonomous weapons, argued Heather Roff of the International Committee for Robot Arms Control (ICRAC), “is politically myopic and insensitive. We should remember that rape in war is not only the act of individuals,” she pointed out, “but has often been an instrument of state policy.” This point echoes a recent article by Charli Carpenter of the University of Massachusetts Amherst, published by Article 36. Carpenter argues that behind the assertion that robots will not rape lies an assumption that rape in war is a crime committed opportunistically by rogue soldiers or militia, rather than on an order from the state.

Carpenter notes that those who perpetuate the argument that the use of autonomous weapons would result in the commission of fewer war crimes “seem to mean fewer war crimes than the average human soldier whose state wants her not to commit war crimes.” She argues that these individuals overlook the number of war crimes that occur because soldiers obey unlawful orders from the state. Human Rights Watch has in fact suggested that an autonomous weapon would be more likely to carry out unlawful orders if programmed to do so, owing to its lack of emotion and morality. Human rights scholar Dara Kay Cohen similarly notes that because robots lack compassion, they “might have the ability to perform horrible tortures that most humans would find repugnant or unbearable,” including rape.

Christof Heyns, UN Special Rapporteur on extrajudicial, summary or arbitrary executions, emphasized the potential of autonomous weapons to violate the right to human dignity. He and Professor Peter Asaro of ICRAC highlighted the inherent inhumanity of an algorithm determining whether a human being lives or dies. The human rights to life, dignity, and due process imply a duty not to delegate the use of lethal force to an autonomous machine, argued Professor Asaro. Beyond this inherent inhumanity, he noted, such delegation risks reducing accountability and responsibility for crimes.

Questions about accountability and responsibility cut across technical, moral, and legal considerations. The US delegation and Professor Waxman argued that autonomous weapons would remain subject to human involvement because commanders would judge the operational context and make the decision to deploy the weapons. But as David Akerson of ICRAC pointed out, with an autonomous weapon the targeting and killing process would be undertaken by the machine after it is deployed; at the time of the attack, there would be no human in the loop. Akerson also noted that in the future autonomous weapons might not even be programmed by human beings, as coding itself is becoming increasingly automated.

The key conclusion to be drawn from these concerns is the clear need for meaningful human control over every individual attack.

Some delegations appear to suggest that there is a lack of agreement on how best to proceed. However, as Heyns and others have noted over the past few days, consensus seems to be emerging on the requirement for meaningful human control over the decision to undertake an attack. Most interventions have highlighted this factor as a key condition for the operation of weapon systems, though further discussion is needed on how such control should be understood and measured.

Civil society has already begun to interrogate the concept of meaningful human control. Article 36 argues in a briefing paper that a key factor “is the information available to those responsible for weapon use, about the target, the target context and the physical effects the weapons will cause.” Returning to the moral foundations of the issue, however, Article 36 and Professor Asaro have also argued that meaningful human control requires the creation of sufficient space for moral reasoning and deliberation to take place.

“One way to consider the link between legality of autonomous weapons and meaningful human control,” suggested Thomas Nash of Article 36, “is to consider that the principles of humanity—on which existing international humanitarian law and international human rights law are based—can be seen to require deliberative moral reasoning, by human beings, over individual attacks.” This position means that weapons that do not allow for such control must be prohibited. As both Article 36 and Human Rights Watch argued, the most effective way to make this explicit is through the creation of new international law.

It is imperative that any outcome from this expert meeting help set the stage for future work on this issue within the CCW. Continued discussions under an expanded mandate would provide an effective opportunity for states to continue exploring concepts such as meaningful human control and to begin work on developing an international prohibition of weapon systems operating outside of such control.
