
16 April 2015, Vol. 2, No. 4

Editorial: The unbearable meaninglessness of autonomous violence
Ray Acheson | Reaching Critical Will of WILPF and Richard Moyes | Article 36



“If we can no longer make sense of violence it is no longer meaningful,” said Maya Brehm of the Geneva Academy of International Humanitarian Law and Human Rights during a panel on the characteristics of autonomous weapon systems.

What gives violence meaning? Anthropologists have described violence as a form of communication. Its meaning, then, is developed through the social practices that generate all meaning: through the processes of human interaction by which we form moral, cognitive, and conceptual understandings of our world.

What meaning could be derived from violence administered entirely by machine? Without meaningful human control, such machines would not be enacting a human will towards a specific act of violence. Rather, they would represent a social acceptance that human beings can be processed or put in harm's way simply as objects, subjected to an abstract calculus. Allowing weapons that identify, select, and apply force to targets without human supervision or intervention means relinquishing or dissipating human responsibility. Simultaneously, it means dehumanising those we expose to harm. It means an erosion of humanity.

How would our legal, moral, and ethical frameworks be applied to machine-based violence? These frameworks are uniquely human, developed collectively through our social practices and interactions. Accepting that humans can be taken out of the process of choosing which people may be harmed, killed, or put at risk, and how, would likely suggest that socially we had found a new capacity to separate ourselves from people elsewhere: that we are prepared to contort legal, moral, and ethical reasoning in order to allow to be done to others something we would never accept being done to us.

The interplay of these considerations with the discussion of technical issues further underscores the meaninglessness of the violence of autonomous weapons. Panelists and delegations overwhelmingly presented a grim picture of the potential consequences of such systems. Among other things, they outlined the risk that flaws in programming or software, vulnerabilities to misuse or attack, or inadequate testing could lead to unintended or malicious effects.

What meaning could be derived from violence arising from errors, malfunctions, misuse, exploitation, or dissonance within an autonomous system? What meaning could be derived even if the machine functioned as designed and programmed, if the “choices” of who is attacked and why are determined by a machine? The question of why specific people were attacked would always have one of two immediate answers: either “that is what the machine does in these circumstances” or “something went wrong”. Neither answer means very much.

What restraints might there be on machine-based violence? Echoing concerns voiced by some delegations that autonomous weapons could lower the threshold for the use of force, Elizabeth Quintana of the Royal United Services Institute cautioned that such systems might provide increased capacity for military intervention. The delegation of Palestine argued that autonomous weapons would reduce possibilities for dialogue and peace by removing human engagement from the battlefield.

The concept of meaningful human control is crucial for preventing both the inherent meaninglessness of machine-based violence and the increased potential for such violence. Jason Millar of Carleton University noted that meaningful human control is about maintaining something uniquely human that autonomous weapons would lack. While he suggested that such control could potentially be exercised over some autonomous weapon systems, the more important task is to develop the concept collectively in order to establish boundaries against systems and configurations that should not be developed or used. As the civil society group Article 36 explained in an intervention from the floor, requiring meaningful human control over the use of force should provide the foundation for an international prohibition on fully autonomous weapons.

Yet those speaking on Wednesday afternoon’s legal panel pushed back on this approach. Dr. William Boothby of the Geneva Centre for Security Policy and Professor Eric Talbot Jensen of Brigham Young University Law School rejected calls for a prohibition, arguing that the weapons reviews mandated by article 36 of Additional Protocol I to the Geneva Conventions are the more “appropriate” way to handle the development of autonomous weapons. While acknowledging the potential difficulties of autonomous weapons complying with international humanitarian law, both speakers believed that technology could hypothetically develop to a point where it would be possible to deploy fully autonomous weapons.

Most intervening delegations and civil society groups, however, argued that humans must always be involved in the use of force. They largely seemed to agree that the rules of IHL must be applied, by humans, on an attack-by-attack basis, taking into account the specific circumstances of each attack, and that such assessment could not adequately be left to a machine. As the NGO Article 36 noted, “Processes of calculation and computation in a machine are not equivalent to deliberative human reasoning within a social framework. Machines do not make ‘legal judgments’ and ‘apply legal rules’.”

Some speakers and delegations, including the ICRC, also highlighted the problem of leaving the determination of the legality of autonomous weapons up to individual countries through weapons reviews. A multilateral response is necessary, and must not be based on hypothetical technical considerations or varying interpretations of existing legal mechanisms. Rather, as the delegation of Greece argued, our approach to autonomous weapons should be based on the ethical question of whether or not humans should delegate life and death decisions to machines.

The text of the CCW affirms the “need to continue the codification and progressive development of the rules of international law applicable in armed conflict.” This recognition that the law is not static and that the general rules of armed conflict are not wholly sufficient to address the problems raised by certain weapon technologies is the cornerstone of the CCW regime. It is by prohibiting fully autonomous weapons that humans can collectively decide to prevent the meaninglessness of machine-based violence. A number of states have claimed that the CCW is the most appropriate forum within which to address this issue—they now need to take steps to justify that claim.
