
CCW Report, Vol. 3, No. 3

What we talk about when we talk about autonomous weapons


Ray Acheson
13 April 2016 


Tuesday’s panels at the CCW meeting on lethal autonomous weapon systems (LAWS) focused on definitional issues. Many delegations declared that clear definitions of autonomous weapons and of meaningful human control are necessary to move forward. Yet many of these same delegations seemed to steer such definitions away from practical approaches, perhaps seeking to define autonomous weapons in terms so futuristic as to be implausible, so that no weapon development they might contemplate would ever be affected. This discussion largely avoided the uncomfortable fact that when we talk about autonomous weapons we are talking about the development of new ways to kill each other: ways that ultimately reduce our own involvement as human beings in that killing. With autonomous weapons, we would abdicate responsibility and accountability for killing by removing our moral agency from it, setting the stage for a range of serious challenges to law, ethics, and morality.

Consequences of how we kill

As WILPF noted in its statement on Tuesday, the tools we use to commit violence can determine the form of that violence. Some weapons facilitate attacks that would otherwise be impossible. In addition, the choice of what we use to kill each other, and how we kill each other, has meaning in itself. These choices both shape and are shaped by our social relations. When it comes to autonomous weapons, the key factor is the delegation of the functions and responsibilities of targeting, attacking, and killing to machines. What might this permit or produce that other weapons might not? How might this affect our social relations?

The use of weapons operating without meaningful human control (that is, without a human being substantively involved in selecting targets for, and making legal judgements about, each individual attack) could lead to the expansion of the concept of an attack, as Richard Moyes of Article 36 noted in his presentation to the meeting. It could lead to the expansion of the battlefield, as armed drones already have done. It could lead to the targeting, death, or injury of people without due process or feasible precautions.

In terms of social relations, autonomous weapons would undermine equality and justice between countries and people. The features that might make autonomous weapons “attractive” to higher-income, technologically advanced countries seeking to preserve the lives of their own soldiers would push the burden of risk onto the rest of the world. As with armed drones, the deployment of autonomous weapons would be unlikely, in any near future, to result in an epic battle of robots in which machines fight machines. They would be unleashed upon populations that might have no defences against them, that might be unable to detect their imminent attack, and that might have no equivalent means with which to fight back. They would be weapons of power, dominance, inequality, and othering.

Autonomous weapons would complete the separation of body and mind from the battlefield and the dehumanisation of warfare. Algorithms would replace violent masculinities to create some kind of perfect killing machine. Turning men and women into warfighters has tended to require breaking down their sense of ethics and morals and building up a violent masculinity that lacks empathy and glorifies, as strength, the violent physical domination of others portrayed as weaker. Autonomous weapons would be the pinnacle of a fighting force stripped of the empathy, conscience, and emotion that might hold a human soldier back.

Defining our instruments of killing

These consequences of autonomous weapons and autonomous warfare provide a backdrop to the issue of defining these weapons and our control over them. But like much else in the world, definitions are about politics and power. Some states appear to wish to define away autonomous weapons, treating them as systems that might never exist while they pursue the technologies that exactly fit the conception of a weapon operating without meaningful human control.

As presenter Wendell Wallach of the Yale Interdisciplinary Center for Bioethics argued, some delegations appear to be making the matter of definitions more complex and problematic than it actually is. After three years of discussions on autonomous weapons; studies by UN agencies, non-governmental organisations, international organisations, and states themselves; and examples of semi-autonomous systems already in operation, we have a very clear picture of what a lethal autonomous weapon system would be. As Mr. Moyes said in his presentation, a basic working definition could be “weapon systems with elements of autonomy operating without meaningful human control.”

When it comes to meaningful human control, an equally clear definition presents itself. Mr. Moyes outlined four key elements required for meaningful human control:

1. Predictable, reliable, transparent technology;
2. Accurate information for users on the outcome sought, the technology, and context of use;
3. Timely human judgment and action, and potential for timely intervention; and
4. Accountability to a certain standard.

As Article 36’s paper for the session points out, “Whilst consideration of these key elements does not provide immediate answers regarding the form of control that should be considered sufficient or necessary, it provides a framework within which certain normative understandings should start to be articulated, which is vital to an effective response to the challenge posed by autonomous weapons systems.”

A few states have taken different approaches. The United States defines LAWS as weapon systems that, “once activated, can select and engage targets without further intervention by a human operator,” but which are “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” It is not clear where the US thinks the boundaries would lie in terms of the level of human judgement necessary for an individual attack to be permissible. The US argued that there is no “one size fits all” standard for human control and that “flexible policy standards” are necessary.

This approach raises serious concerns about flexibility, the absence of common standards, and whether any control will be applied over individual attacks. These concerns are critical to understanding what limits states would actually face in deploying weapon systems that carry the risks and consequences outlined in this article, in the presentations of experts, and in many studies and papers over the years.

Switzerland has suggested another approach, which focuses on tasks rather than control. It describes LAWS as “weapons systems that are capable of carrying out tasks governed by IHL [international humanitarian law] in partial or full replacement of a human in the use of force, notably in the targeting cycle.” The Swiss delegation argues that it is premature to draw a line between acceptable and unacceptable systems.

This definition sets up a broad category of weapon systems for consideration, one that might include existing systems. The approach also emphasises that the term “lethal” should not be used to limit the discussion to anti-personnel systems. It is not incompatible with the picture emerging at the CCW, in which the requirement for meaningful human control would mark the boundary between a permissible weapon system and an unacceptable one. This emerging understanding of LAWS, or fully autonomous weapon systems, as weapons operating without meaningful human control has the potential to greatly simplify future debate.

The bottom line

As Article 36 argued in its statement on Tuesday, “ensuring human control over individual attacks—and specifically delineating what is necessary in this regard—is a basic requirement if we are to uphold the structure and effectiveness of IHL as it stands.” Autonomy in weapon systems “poses a fundamental challenge to the body of law that human societies have set out to restrain the use of violent force based on the principle of humanity.” In our work here at the CCW, we have “a choice to recognize and respond to this challenge or to abandon the law as it stands.”

When we talk about autonomous weapons we must acknowledge the broader moral and ethical concerns; the consequences for war and society of developing weapons that delegate decisions of life and death to machines; and the choices we have in preventing a future in which we have abandoned the principles and laws of humanity in favour of mechanised, dehumanised violence.
