CCW Report, Vol. 6, No. 10

Effectuating our intention
31 August 2018


Ray Acheson | Reaching Critical Will of WILPF

The states participating in UN talks on autonomous weapon systems (AWS) finally did engage in negotiations on Thursday—but over the draft conclusions and recommendations from the meeting, not on a treaty. “Negotiation” is perhaps also too generous a word: some delegations were so extensive in their suggestions and comments that they essentially rewrote the document. While it’s fine for states to have a go at outcome documents, it’s frustrating to watch when we know that what the world needs these governments to do is negotiate a legally binding instrument to prohibit AWS.

An additional frustration was the introduction of new vocabulary by the Chair at the eleventh hour. This included terms such as “human agency”—even though so far everyone has been discussing human control; “intelligent autonomous systems”—as opposed to the autonomous weapon systems this group is mandated to address; and a “technology lighthouse mechanism”—which not a single delegation understood, because no one had raised it before in the context of these discussions. These new terms prolonged the discussions over the report as delegations tried to understand what they meant, and then tried to suggest alternatives that reflected the last few years of talks.

Most of Thursday’s discussion continued to reveal major policy and political differences. A more thorough accounting of the debate is contained in the News in Brief section of this report, but the short version is this: the crux of the policy difference lies in the various approaches to human control over weapons, and the implications those perspectives have for next steps. It all comes down to how much control people believe they want or need to have over the use of force and over weapon systems. If you think you always need meaningful human control over target selection and the execution of force, for example, then you’re more likely to want to negotiate a legally binding instrument setting this out so that everyone sticks to the same standard. If you want more “flexibility”—i.e., if you want to develop weapons that can kill people without any human operators involved in selecting a target or in firing upon that target—then you’re more inclined to want discussions to continue, preferably at a slow pace. If you fall into this latter category, you might say that you don’t think there is a common understanding of human control. Or, you might put out your own definition that is completely different from everyone else’s and insist that it’s the only thing you’ll accept.

For example, the United States has argued that the concept of meaningful human control is “divisive”. In reality, it is a term around which the majority of states have coalesced over the past few years. As can be seen in the pages of these reports and in the statements by delegations, most countries want to ensure the retention of meaningful human control over the critical functions of weapon systems, including the selection and engagement of targets. Most also seem to believe that human control is necessary over the design, development, and deployment of weapon systems.

Even the US delegation thinks weapons need some human involvement. But it would prefer to talk about human judgment rather than control. In its working paper to this session of the group of governmental experts, it argues that the key thing is to make sure that “machines help effectuate the intention of commanders and the operators of weapons systems” (emphasis added). The paper states that the “appropriate” level of human control or judgment will vary across weapon systems and environments, but the bottom line is that it assumes its weapons will carry out tasks in line with what a commander wants, even without human control over targeting or attack. The US working paper asserts that it can ensure its autonomous weapon systems “function as anticipated” through engineering, testing, and training of operators.

There are a few problems with this approach. One is the issue raised by the countless tech workers and scientists who have weighed in on the killer robots debate to warn against relying on engineering or programming to deliver infallible performance or reliability. Just today, in an article in The Guardian about the fatal mistakes of self-driving cars, professional coder Ellen Ullman explained: “When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.”

This is horrifying enough when it comes to self-driving cars. It’s much, much more disturbing in the context of weapons.

A second problem is that the more “flexible” approach to human control is based on the idea that AWS might have benefits for promoting or upholding compliance with international law, as suggested in the Chair’s draft conclusions. The US delegation is in the minority in advancing this perspective, but it is not alone. The Australian delegation is becoming increasingly optimistic about this, for example—perhaps in connection with the growing number of contracts between weapon producers and universities in Australia, or the recent announcements of new investments in the arms manufacturing industry, with the stated goal of making Australia one of the world’s top ten arms exporters. Australia asked the Chair to elaborate on the references to the possible benefits of AWS in the outcome document—even as several other governments called for the deletion of all such references, noting that the majority of participants do not share this perspective.

A third problem with the “flexible” approach to human control over weapons, and with the argument that AWS could have benefits, is that both are based on a particular approach to weapons and to violence that increasingly relies on technology to make war “cleaner” or “more efficient”. But as Elke Schwarz of the International Committee for Robot Arms Control (ICRAC) argued during Thursday’s side event on feminist approaches to AWS, war is a social institution. It is a human problem, not an engineering problem. “Tech-washing,” as she described it, moves us further away from our responsibility for violence.

Technologically mediated violence brings with it the increasing abstraction, remoteness, and mechanisation of death and destruction. And as so many governments, lawyers, tech workers, scientists, academics, and activists have pointed out over the last few years, this has serious implications for ethics, morality, and international law. Yet during the debate over the conclusions on Thursday, a few delegations tried to suggest that the “expertise” recognised as relevant to these discussions should be limited to military and technical expertise. The delegation of Austria and others pushed back on this, arguing that legal and ethical expertise is also essential to conversations about AWS. But the fact that this is even a point of discussion at all highlights another of the issues raised at the event on gender and AWS: that only certain perspectives, those coded as masculine and thus as “rational”, are treated as credible and relevant in spaces such as this one.

This is a serious problem when we’re talking about the development of weapon systems that kill on the basis of “sensors and software,” as the International Committee of the Red Cross describes it. The exclusion of non-military and non-technical voices inevitably means that women, queer people, people of colour, people of lower socioeconomic status, and people with disabilities will be cut out of the conversation, because these are the people who are vastly underrepresented in the technical and military fields. They are also the people likely to be most affected by the development and use of AWS. And so, once again, straight, white, cisgender, able-bodied, wealthy men will dominate the discussion and determine the outcomes, while the rest of us will suffer the consequences of their decisions.

Diversity is not about political correctness. It is the only way we are ever going to see change in how we confront issues of peace and security, of weapons and war. Involving the marginalised and the affected is how we ensure that weapons will comply with international law: by changing the norms and behaviour of the humans who use them. The answer is not to give weapons autonomy to kill after they have been programmed with the biases of the most dominant culture in the world, but to change the way we think about and confront war and violence as social institutions.

A more diverse investigation into the arguments about the purported “benefits” of killer robots might, for example, reveal that the real motivation behind the development of AWS is not better compliance with international humanitarian law. It might reveal that the motivation is actually about perfecting the ability to kill remotely—to keep one’s own human military personnel out of harm’s way while exterminating an “enemy,” or to repress certain segments of a population without having to deploy human police officers or soldiers. Perhaps these are some of the actual motivations for a more “flexible” approach to human control. But if we only allow the dominant perspectives to be heard or taken seriously in these spaces, then we will continue to be fed arguments about robots making war safer for civilians.

There is a long way to go to open up space in the CCW for diverse perspectives and approaches. It seemed about as far away as it could be on Thursday night as the meeting went on for an additional few hours, leaving only the last moments to talk about the recommendations for next steps. A number of governments, including Austria, Brazil, Chile, and Cuba, were uncompromising in their demand for serious action on AWS now. We can’t accept a simple rollover of the group’s mandate, was their main message. “We need to work, with serious urgency, towards something,” said the Chilean delegate.

It remains to be seen what the final version of the recommendation for next steps will say tomorrow, though it sounds like it might include a slightly strengthened mandate to focus on specific outcomes, with the options of a legally binding instrument, political declaration, and enhanced weapon review processes annexed to the mandate. A key lesson of working within the CCW is that there is always a compromise that can be made, but also that it is always made by those who want progress, not by those who want to prevent it. We may not solve this problem on Friday, but we definitely need to decide how much longer we are willing to keep accepting this arrangement—not just in relation to AWS, but for disarmament and for international relations as a whole.    