CCW Report, Vol. 7, No. 7

While a few countries control the CCW, we risk losing control over weapons
22 August 2019


Ray Acheson | Reaching Critical Will of WILPF

Wednesday’s meeting on autonomous weapons, the final one of this session of the Convention on Certain Conventional Weapons (CCW) group of governmental experts (GGE), lasted until 03:08 on Thursday. The two-day meeting resulted in the adoption of a final report, but the report’s disappointing content is matched only by the unambitious process it sets out for the way ahead. If we are to have any chance of preventing the development and use of weapons that kill without human control, we need to step up our game, seriously and urgently.

The bottom line

While the CCW’s meeting of high contracting parties in November 2019 will have the final say, the group’s report indicates that the GGE will meet in 2020 and 2021 to continue discussions on autonomous weapon-related issues, after which it might consider or develop “aspects of the normative and operational framework” on emerging technologies in the area of autonomous weapons. The group will report its recommendations back to the CCW high contracting parties in 2020 and 2021. From there, it is unclear in what direction things might go.

Two decisions have been pushed to the meeting of high contracting parties: the number of days that the GGE will meet over the next two years—20? 25? 30?—and the wording around what states might do with the outcomes of those meetings—will they develop aspects of a framework, or consider aspects of one, or … nothing?

These are the practical aspects of the decisions and non-decisions made last night. But just as important as what is in the report (and what has been left to decide until November) is what is not in it.

The final report does not refer to human control. It does refer to ethics and international law, both of which were contested at various points by one or two states, but it does not mention human rights or human dignity. It does mention possible bias in the data sets used in algorithm-based programming, but it does not include previous language highlighting that such bias “may diminish, perpetuate or amplify social biases, including gender and racial bias.”

The final report also does not include any guidance for the three streams of work identified by the group as needing further consideration: the legal, technological, and military aspects of autonomous weapons. Previous iterations of the text outlined possible subjects to be explored within each of these work streams, but that content was stripped from the document sometime Tuesday night.

Overall, what the report does not offer is any sense that this process at the CCW is going anywhere. It has recommended two more years of talking. It has not set out any definitive direction for work after those two years, setting the stage for Russia, the United States, and the handful of other pro-killer robot governments to continue to block restrictions or prohibitions on the development of this technology. The report does not categorically preclude meaningful work from taking place, but by excluding the most important concept for these discussions—the concept of human control—the report has shown as clearly as ever that the interests of the most militarily powerful governments in the world will continue to override the interests of the rest of the world. It also demonstrates once again that for the most part, governments are not acting with the urgency necessary to prevent a disastrously violent future in which machines are programmed to kill people based on sensors and software. This is a future without meaningful human control over the use of force, most likely resulting in more war, more violence, more repression, more death and destruction.

The key issues

Over the course of the past two and a half days of discussion, participating states had plenty of disagreements. But the most important continue to surround the concept of human control over the use of force and over weapons, and the characteristics of autonomous weapon systems.

The United States, joined most vocally by France, Russia, and the United Kingdom, objected to any and all references to human control. In the context of the final report in particular, they did not want it included in the new “guiding principle” that the GGE identified (adding to the ten guiding principles they agreed upon in 2018). These delegations “allowed” references to “consideration of the human element” to appear in the final report but rejected the term human control itself. In the early hours of Thursday, the US delegation insisted it would be open to engaging in discussions on human control in the future, which Austria, Germany, Brazil, Chile, and others took to mean that human control will be a key focus of the GGE’s deliberations next year. Regardless of this late-night offer, the US delegation continues to say it finds the concept of human control extremely problematic and has been consistent in rejecting the term throughout the work of this body.

These governments oppose the concept of human control, and engagement on other key issues related to autonomous weapons, because they are already “investing significant funds to develop weapons systems with decreasing human control over the critical functions of selecting and engaging targets,” as noted by the Campaign to Stop Killer Robots. Just a week ago, the UK government put out a call for proposals to develop “the capability of unmanned autonomous military systems to operate in challenging environments.” In the United States, Pentagon officials have made it clear that they are pursuing the development of weapons that operate “without having a person following through on it”.

Perhaps most interesting is France’s rejection of the term human control at the GGE, particularly in light of the French Minister of the Armed Forces’ comments earlier this year that “France refuses to entrust the decision of life or death to a machine that would act fully autonomously and escape any form of human control.” If a state’s Minister of the Armed Forces is using the term human control, and favourably, it seems strange for its UN ambassador to say the term is unacceptable in a report from a meeting about autonomous weapons.

The rejection of the term human control, along with Russia’s rather surreal insistence that autonomy is not a central characteristic of autonomous weapon systems, points to the crux of the problem with international work on this issue so far: a handful of countries want to develop these weapons. They are already developing these weapons. They are determined to continue in that endeavour, and thus are stalling, thwarting, or hijacking the intergovernmental process that might seek to restrict, limit, or prohibit the development of these weapons. The easiest way to prevent progress in this regard is to not even allow discussion over the most relevant concepts: if we cannot talk about human control, if we cannot agree that autonomy is critical to autonomous weapons, if we cannot develop any concrete outcomes from the work of the GGEs, then work on autonomous weapons can continue unfettered. These same countries tried to prevent and shut down discussions about the humanitarian impact of nuclear weapons because they knew it would lead to a prohibition of these weapons—which it did. They tried to prevent discussion over the indiscriminate effects and human suffering caused by antipersonnel landmines and cluster bombs because they knew it would lead to a prohibition of these weapons—which it did. And so the game is once again afoot: they are trying to prevent discussion of the crucial aspects of autonomous weapons in order to try to prevent a prohibition of these weapons.

This brings us to:

The problem with the process

The problem with the CCW process on this issue (or any other) is ultimately consensus. Each and every state has an absolute veto over each and every decision. This means that even though the vast majority of governments support the need for meaningful human control over weapon systems, one country can prevent this from being reflected in the report of the meeting.

But there is another, arguably even more insidious problem with the process, and that is bias. There is a bias towards the “military powers,” in that their interests and positions are generally taken more seriously than those of other countries. This can be seen, felt, and heard in conference rooms—when they speak, everyone listens intently, and the facilitator or chair generally tries to find ways to accommodate their views. A good example from this most recent GGE session: when Russia insisted on changing words in every single paragraph of the draft conclusions and recommendations, the document was projected on a screen and track changes were used to capture its proposals. In contrast, when Ireland and South Africa asked why the useful language on algorithmic bias had been removed from the document—even though no one had objected to its inclusion—their question was dismissed without any engagement or consideration.

There is also a bias privileging western states over the rest of the world. This can be seen in how western countries’ positions are better reflected and defended in draft texts. For example, in this round of negotiations over the final report, both human control and legal weapon reviews came under attack. But the Chair argued that weapon reviews had broad support and had undergone extensive discussion, and thus must stay in the document. The centrality of weapon reviews has been posited mostly by western European countries. Human control, meanwhile, has arguably seen even less divergence of views, in that the vast majority of states support some formulation of the concept. Yet it was removed early from the draft text and, despite valiant efforts from the likes of Austria, Brazil, Chile, Costa Rica, Ecuador, and South Africa, was never reinserted.

Consensus and bias are thus serious problems preventing the development of international law on autonomous weapon systems. These are not unique to the CCW, but they manifest dangerously in this body. The mandate of the CCW is to protect human beings from “excessive suffering” from the means and methods of war. But its processes privilege the countries that prioritise their ability to wage war and to develop new means and methods of war over the lives and dignity of human beings. And the challenges of consensus and bias within the CCW mean that a couple of states can control the international process while we collectively lose control over weapons.

Where to go from here

This is why the Campaign to Stop Killer Robots is working with governments, tech workers, academics, scientists, students, and anyone else who actually wants to prevent human suffering. With Jordan indicating its support for a prohibition on killer robots at this GGE session, 29 states now support this position. Many others support the negotiation of some kind of new international law restricting or prohibiting these weapons. If the CCW wants to lock itself into two more years of chats that lead straight back into a Russian-American black hole of late-night report-drafting sessions instead of negotiating treaties or other concrete outcomes, that’s fine. Governments that are serious about preventing the development and use of machines that kill without human control can do this work elsewhere, just as they did to prohibit nuclear weapons, cluster bombs, and landmines; as they did to put limits on the international arms trade; and as they are about to do to stop the use of explosive weapons in populated areas.

The CCW will meet again on this issue, and we will be there to hold governments to account—but in the meantime, we are also working closely with those who are responsible for designing and building the technology that is necessary for autonomous weapons to try to prevent the weaponisation of their products. We are working with experts and activists across many fields to help develop better understandings of the risks and problems with pursuing these weapons. We are working with parliamentarians, politicians, and diplomats to build the case and the capacity to take real action now, before it’s too late.

A future of artificial intelligence-enabled killing machines, of targeting profiles designed to eliminate people of colour or of a certain sex or identity, and of life and death decisions executed by algorithms and computer chips, is not inevitable. The development of much of this technology is underway, and we know that a handful of governments are racing against the UN clock to deploy these systems before they are prohibited. But this does not mean those on the side of humanity have lost. It means, as it always does, that we are up against a vast complex of economic, political, and military power. We confront this system time and again, because the alternative is giving up and giving in. And if we are to survive, as human beings with dignity and morality, that is not an option.
