
CCW Report, Vol. 3, No. 6

UN agrees to more talks on autonomous weapons as support for prohibition grows


Ray Acheson
18 April 2016 


At the end of last week’s UN discussions on lethal autonomous weapon systems (LAWS), states agreed on recommendations for further, more formal deliberations next year through an open-ended Group of Governmental Experts (GGE). This body, if states accept the recommendation at the Convention on Certain Conventional Weapons (CCW) Review Conference in December 2016, would operate for an as yet undetermined length of time in 2017 and might continue through 2018. While the establishment of a GGE is a welcome step, the recommendations adopted last week only call for the GGE to “explore and agree on possible recommendations on options related to emerging technologies in the area of LAWS.” This unambitious mandate reflects neither the pace of technological development nor the urgency of ensuring that meaningful human control is retained over weapon systems and the use of force.

The need for meaningful human control, as well as the ethical and moral questions around relinquishing control over the use of violent force to machines, has remained at the heart of debate over the past three years. Fourteen states, thousands of scientists, two UN special rapporteurs, and the Campaign to Stop Killer Robots are all urging the negotiation of a legally binding instrument to prevent the development, deployment, and use of LAWS. States concerned with the challenges raised by these systems should work with urgency toward negotiations on such an instrument.

Landscape of governmental positions

After three one-week informal meetings on this issue, most states appear to support the retention of human control over weapon systems. Yet it is clear that some states—mostly higher-income, militarily and technologically advanced states—wish to slow (or even prevent) diplomatic progress toward restricting, regulating, or prohibiting the autonomous use of force by machines.

Some have tried to define autonomous weapons in terms so futuristic as to be implausible, ensuring the definition would never touch any weapon developments they might contemplate. States such as Israel, Japan, Russia, and the United Kingdom have argued that LAWS are a possibility of the distant future and may never exist at all. But the existence of technologies such as Israel’s Harpy drone or the United Kingdom’s Taranis system indicates that the development of LAWS is not really so distant. There are of course gaps in our knowledge here, and even more so with respect to countries that are less transparent about their arms industries.

The United States and some others have spoken favourably about the possibility of increasing autonomy in weapon systems, citing perceived benefits in precision and the reduction of civilian casualties. Others still, such as China and Russia, have spent these meetings primarily reacting to the positions of other states and interrogating their motivations, without offering any information about their own positions, their perspectives on how best to deal with LAWS, or what related technology they might already be contemplating, developing, or deploying.

However, the majority of the 67 states that have spoken publicly in multilateral forums about LAWS have voiced concern about their development and deployment, albeit from varying perspectives and with varying thoughts about solutions. These states have articulated a number of problems or challenges arising from LAWS: the difficulty of programming compliance with international law into machines; possible violations of human rights and dignity; the transfer of the risks of warfare from the deploying force’s soldiers to civilians; technical challenges such as imprecision (as seen with armed drones); the lack of capacity for due process in prosecuting targets; expansion of the battlefield; the lowering of the threshold for warfare (and for the use of force within warfare); and the overall fuelling of militarism.

Regardless of which aspects of LAWS trouble any given state the most, the majority of those concerned have expressed the belief that some form of human control over weapon systems is necessary.

Autonomy and human control 

Very few states have proposed a definition of LAWS. The United States defines a LAWS as a weapon system that, “once activated, can select and engage targets without further intervention by a human operator,” but which “is designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” It is not clear where the United States thinks the boundaries would lie in terms of the levels of human judgment necessary for an individual attack to be permissible. The US argued that there is no “one size fits all” standard for human control and that “flexible policy standards” are necessary.

Switzerland’s approach focuses on tasks rather than control. It describes LAWS as “weapons systems that are capable of carrying out tasks governed by IHL [international humanitarian law] in partial or full replacement of a human in the use of force, notably in the targeting cycle.” The Swiss delegation argues that it is premature to draw a line between acceptable and unacceptable systems.

As presenter Wendell Wallach of the Yale Interdisciplinary Center for Bioethics argued during last week’s discussions, defining LAWS does not have to be a complicated undertaking. Richard Moyes of Article 36 has suggested a straightforward formulation in which LAWS are “weapon systems with elements of autonomy operating without meaningful human control.” Some states and NGOs, including Switzerland and Amnesty International, have also noted that any definition should not be limited to weapon systems designed or used to operate with lethal force, in order to take into account attacks against objects and the use of LAWS in law enforcement.

When it comes to defining human control, states still have work to do. The NGO Article 36 has suggested that key elements of meaningful human control would include predictable and reliable technology; accurate information on the objectives and context of a weapon’s use; timely human judgment and action over the functions of the weapon; and a framework for accountability.

There is an emerging picture at the CCW in which the requirement of human control over the critical functions of a weapon system would mark the boundary between a permissible weapon system and an unacceptable one. Requiring meaningful human control over critical functions such as target selection and munitions release in individual attacks appears to be the most straightforward, credible, and clear approach. It is the only way to meet the legal imperatives of respecting international law and of holding individuals responsible for unlawful acts. It is also the only way to respect ethics and morality by preventing fully mechanised violence in which machines take human life without any human intervention.

Prohibiting autonomous weapons

The need for meaningful human control over individual attacks is the basis for a prohibition on autonomous weapon systems. Calls for this prohibition are growing. As noted above, fourteen states, thousands of scientists, two UN special rapporteurs, and the Campaign to Stop Killer Robots are all urging the negotiation of a legally binding instrument to prevent the development, deployment, and use of LAWS.

Some states have suggested that all weapons should be subject to meaningful human control, yet they reject the development of new rules in this direction. As discussions seem set to continue next year in a more formal setting, states should use the time to determine their positions on the degree of dehumanisation of war and violence they want to pursue or prevent.

When we talk about autonomous weapons we are talking about the development of new ways to kill each other—ways that ultimately reduce our own involvement as human beings in that killing. With autonomous weapons, we would abdicate responsibility and accountability for killing by removing our moral agency from that killing, setting the stage for a range of highly problematic challenges to law, ethics, and morality, as well as to the nature of war and violence.

Autonomy in weapon systems “poses a fundamental challenge to the body of law that human societies have set out to restrain the use of violent force based on the principles of humanity,” the NGO Article 36 has argued. In our work here at the CCW, we have “a choice to recognize and respond to this challenge or to abandon the law as it stands.” In the time between this meeting and the Group of Governmental Experts next year, states should consider this choice and prepare for concrete action. Our human principles determine our actions, and our actions determine our identities and our futures.

As the philosopher Simone Weil argued in the mid-20th century, we need to examine the social relations implied by our instruments of violence and war, not just the ends pursued by war. Do we want to seek a future in which the violence we exercise against each other is further mechanised and dehumanised, or do we want to pursue a future in which we are cooperating as human society to prevent suffering and promote peace and justice? This is a key question for us all as we continue our work on LAWS.
