
CCW Report, Vol. 7, No. 2

It's time to exercise human control over the CCW
27 March 2019


Ray Acheson


During the first two days of this current round of UN talks on fully autonomous weapon systems, or killer robots, governments discussed potential military applications of autonomous technologies; characterisations of autonomous weapon systems; potential challenges of these systems for international law; and the “human element” in the use of force. It may sound like a lot for twelve hours of discussion, but as this is the sixth year that diplomats, experts, activists, and academics have engaged in these conversations, it is clear that we know the ground upon which we stand. While a few states continue to insist that, as a group, we still have no idea what we are talking about and perhaps need to spend a few more years agreeing on precise definitions of every possible concept related to autonomous weapon technology, the majority of participants in this process agree that the lack of consensus definitions does not need to—and indeed, must not—prevent progress in negotiating new law or other mechanisms to prohibit, limit, or regulate autonomous weapons.

Discussions on the ways ahead will be held on Wednesday, but we already know the main tracks for moving forward. The majority of states support the development of new international law that contains prohibitions and regulations of autonomous weapon systems. Of these, 28 governments support a complete ban on the development, possession, and use of these weapons. Some others seek a legal agreement that ensures meaningful human control over critical functions in such systems. A few others, mostly European states, expressed interest in other mechanisms, such as a political declaration proposed by France and Germany. They see a declaration as a good vehicle for outlining principles for the development and use of autonomous weapon systems, such as the necessity of human control in the use of force and the importance of human accountability. Some countries have also suggested developing a code of conduct on the development and use of autonomous weapon systems and/or creating a compendium of “good practices”.

Deflection and denial: attempts to stop progress

So far, only Australia, Israel, Republic of Korea, Russian Federation, and United States have objected to all of these initiatives. In previous sessions of this group of governmental experts (GGE), they have argued that negotiations of a treaty, a political declaration, or other mechanisms are “premature”. During the first two days of the current round of discussions, some have tried to distract, obscure, or simply stall discussions. On the opening morning, Russia held up the start of discussions for 45 minutes, arguing that too much time in the programme of work had been allotted to the concept of human control over weapons. After several other delegations intervened to support the programme, on which the Chair had held consultations in advance of the meeting, the Chair finally agreed to cut some of the time for the human element to assuage Russia’s repeated insistence. The whole performance showcased the potential for filibustering and stalling that is becoming unfortunately common in “consensus-based forums” like the Convention on Certain Conventional Weapons (CCW). The lesson from twenty years of stalemate in the Conference on Disarmament is: if you don’t want something to happen, just block the adoption of a programme of work for as long as possible.

Another common tactic is to try to take discussions in an unproductive direction. Russia and Republic of Korea, for example, suggested that the GGE should spend time discussing the distinction between lethal and non-lethal autonomous weapon systems, or between anti-personnel and anti-materiel weapons: that is, between weapons meant to kill people and those meant to destroy inanimate objects like tanks or buildings.

The inclusion of the word “lethal” in the title of this group of governmental experts was not based on any particular discussion or consideration. It was chosen years ago, before these talks even began. Thus it should not be used as an excuse to limit discussions to weapon systems designed or intended to kill people. Instead, as many other delegations—including Austria, Bulgaria, Estonia, Ireland, Mexico, Peru, Switzerland, and even the United States—argued, the lethality of a system is based not on the technology or intention but on the impacts it has on human beings, civilian infrastructure and objects, and the environment. Whether you are killed by a machine designed to kill you or one designed to destroy an object, you are still being killed, and you are still being treated as an object. And if the machine destroys hospitals, schools, markets, homes, water supplies, or sanitation facilities, or releases toxic chemicals or harmful substances into the air, water, or land, it can still have lethal effects. In short, trying to distinguish between lethal and non-lethal weapons is nothing more than a distraction.

During discussions across the range of topics these past two days, a number of delegations also strayed into the weeds when considering specific characteristics of autonomous technologies that may or may not be able to comply with international law, or that perhaps should or should not be allowed. Some debated the levels of acceptable autonomy in a weapon system, throwing around terms such as “highly automated functions,” “autonomous combat functionalities,” or “weapon systems using autonomous functions or features”. But, as the International Committee of the Red Cross (ICRC) noted, “automated and autonomous weapon systems are not easily distinguishable from a technical perspective—nor from a humanitarian, legal, or ethical one”. The fundamental question is not about the technology itself but about the human control over it.

The element of human control

Once discussions got under way, it became clear that the majority of governments still agree that human control is necessary over critical functions of weapon systems, such as those related to selecting and “engaging”—firing upon—targets. Some also think human control is necessary over other parts of a weapon’s “life cycle,” such as design and development. Some delegations are more specific than others about the characteristics they think place a weapon system under sufficient or meaningful human control at various stages of operation. But the bottom line is that, other than Australia and Russia, most countries seem to agree, as Japan’s working paper notes, that “there is an effective consensus that meaningful human control is essential.”

Australia appears to have a different and broader view of control, and thus has dismissed the usefulness of the term “human control” for the GGE. Its working paper seems to suggest that if a weapon is designed, developed, trained, tested, certified, assessed, and deployed by humans, and if it operates within specific rules of engagement and targeting directives, then these “controls” are sufficient. Russia’s rejection of the term as a principle for action in the CCW seems based on its belief that “specific forms and methods of human control should remain at the discretion of States.” Israel also articulated this position during Tuesday afternoon’s discussion.

It is likely that the distancing from the concept or term of human control is part of these governments’ wider rejection of engaging in any kind of process to regulate or prohibit autonomous weapon systems. The delegations that have so far successfully prevented concrete progress through the CCW have repeatedly suggested that weapons operating without human control are not necessarily a bad thing. The United States, for example, said on Tuesday that it does not believe that autonomous functions in weapon systems inherently pose problems. It argued that some functions are better performed by machines and that in some cases, less human involvement at the moment when force is deployed would be more effective. The United Kingdom argued that autonomous weapon systems are more likely to make more accurate decisions than humans, have better situational awareness, use a lower yield of force, and offer higher precision in urban environments. Similarly, Russia argued that autonomous weapons could facilitate compliance with international humanitarian law and “minimise the consequences of human failings”.

These are common arguments in favour of autonomous weapon systems, but they are not compelling ones. They are based on a faith in technology to “solve” the social, human problems of violence and war. In reality, the emergence of and reliance on machines to fight battles or police populations is much more likely to lead to more violence, more suffering, and more conflict. These weapons will not make “war safer for humans”. They will likely exacerbate existing problems with weapons and with the use of force. For example, a few governments celebrate the speed with which autonomous weapon systems would likely be able to make decisions on the battlefield, arguing that this will result in better precision and accuracy. On the contrary, as tech worker Liz O’Sullivan explained during Tuesday’s side event hosted by the Campaign to Stop Killer Robots, the speed of autonomous decision making means it will be impossible for humans to predict or respond to these decisions in real time. Similarly, when it comes to “improved situational awareness,” we have already seen remote weapons technology such as armed drones deliver inaccurate information to operators, misinterpret data, and embed bias in targeting procedures, leading to the execution of civilians and the destruction of civilian infrastructure on the basis of a so-called target’s sex, age, location, clothing, or perceived activities. With autonomous weapon systems, this type of bias could be programmed right into killing machines, resulting in the death of human beings also on the basis of the colour of their skin, their sexual or gender identity, their disabilities, or other physical features included in data sets for targeting algorithms. Meanwhile, the gathering of this “situational awareness” or “signature strike” data through surveillance violates the human rights to privacy and dignity.

A few countries, including Austria, Canada, and Ireland, have raised concerns about this type of bias in autonomous weapon systems; it is a central concern of the Women’s International League for Peace and Freedom (WILPF) and some other members of the Campaign to Stop Killer Robots. WILPF is also concerned with some of the other destabilising effects that autonomous weapons would be likely to have on gender equality and gender-based violence, racial justice, and human rights more broadly. This is also why we believe diversity in these discussions is vital to preventing autonomous weapons, and we welcome the European Union’s call for gender diversity in CCW discussions. Broader participation of countries from the global south is also imperative, as WILPF Cameroon member Guy Blaise Feugap explains later in this edition of the CCW Report, as is the participation of those likely to experience repression and violence from the use of these weapons, such as people of colour, First Nations, human rights defenders, and environmental activists.

The importance of human control over the use of force, in a world that already sees so much violence, war, oppression, and inequality, cannot be overstated. As the UN Secretary-General reiterated on Monday, “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” If we fail to take this opportunity to prohibit these weapons now, the human community will embark on a dark path towards automated violence. Diplomatic proceedings have not yet fallen irrevocably behind the development of these weapons. But that moment will come, soon. Thus states must, during the rest of this week, firmly propose that we now turn to negotiations of a treaty banning autonomous weapons and ensuring meaningful human control over the use of force. Delegations should exercise some human control over the CCW and use the remaining time of this GGE to table concrete proposals for work on new international law. As Brazil said on Tuesday, “We need to move toward structured negotiations, lest we lose the work accomplished so far.”
