CCW Report, Vol. 9, No. 3
Editorial: Convergence against killer robots
8 August 2021
Ray Acheson | Women's International League for Peace and Freedom
Since the Convention on Certain Conventional Weapons (CCW) began its work on autonomous weapons in 2014, a lot has changed. Technological developments have proliferated rapidly—in terms of autonomy in weapon systems, but also in terms of related technologies like facial and voice recognition, algorithm-based predictive tools, and surveillance, all of which have been shown to produce gendered and racialised harms. At the same time, government positions on these weapons have evolved. While a handful of countries remain committed to developing and deploying weapons with increasingly autonomous functions, most governments are very clearly coalescing around the need for prohibitions and regulations on this technology.
Consensus versus convergence
It can be difficult to evaluate the degree of convergence on these issues, as CCW meetings are often dominated by the most militarily active states in the world. Yet despite the excessive volume of interventions by the few states that want autonomous weapons, the concerned majority have been able to articulate their demands for restrictions. Ultimately, it has become clear that the majority of the world is converging on the belief that these weapons should be regulated.
The only thing constraining progress is the problematic interpretation of “consensus” as requiring unanimity. A plague across multiple UN forums, consensus has become the enemy of convergence and coalescence. Rather than treating consensus as a process—as a tool that is useful to consolidate perspectives and achieve outcomes that suit the interests of the majority—these processes instead treat consensus as a veto, only ever resulting in the lowest common denominator outcome, or no outcome at all.
As the first week of the group of governmental experts (GGE) wore on, it became commonplace to hear Israel, Russia, or the United States expound upon the essential nature of the consensus outcomes reached in previous GGE sessions. These agreements, such as the 11 guiding principles, are useful insofar as they represent the very minimum of what can be considered international standards on autonomy in weapon systems. They are the product of deliberate efforts to water down or reduce commitments to the barest possible bones to garner acceptance by the countries that want to build autonomous weapons. These principles are not what most participants in this process feel is the best possible outcome; they are markers of what a handful of governments, guided by the profit-seeking of their military-industrial complexes and the power-seeking of their political leaders, have been willing to accept so far.
The past work of the GGE, including the adoption of these principles, has been useful to help shape understandings of the technologies under discussion. But this is by no means the best we can do and must not limit or preclude further action that most countries, international organisations, scientists and tech workers, and civil society groups believe is urgently necessary to prevent human suffering, violations of human rights, and erosion of existing international law. This is why, during last week’s meeting, the Non-Aligned Movement, Algeria, Argentina, Austria, Brazil, Chile, China, Costa Rica, Ecuador, El Salvador, Iraq, Ireland, Malta, Mexico, New Zealand, Pakistan, Palestine, Panama, Peru, Philippines, Sierra Leone, South Africa, Switzerland, Uruguay, the International Committee of the Red Cross (ICRC), and the Campaign to Stop Killer Robots (CSKR) called for the negotiation of a legally binding instrument on autonomous weapon systems. Additional delegations have called for negotiations of such an instrument at previous sessions of the GGE.
Brazil, Chile, and Mexico have tabled a proposal outlining a possible instrument, as have Argentina, Costa Rica, El Salvador, Palestine, Panama, Philippines, Sierra Leone, and Uruguay. Austria, Brazil, Chile, Ireland, Luxembourg, Mexico, and New Zealand have also put forward elements for an operational and normative framework, as have France and Germany. It is clear that most states want to take further action against autonomous weapons and are ready to do the work for it. (See the “possible options” report in this edition for more details.)
“Automatic prohibition” versus automatic violence
The consensus versus convergence dynamic sets up several other dichotomies at play in the GGE. Russia’s concern with an “automatic prohibition” of weapons operating with autonomy, for example, contrasts with the concern of the vast majority of participants about creating weapons that can exercise automatic violence, in particular against human beings.
Israel and the United States argued that autonomy in weapon systems isn’t about delegating life and death decisions to machines. But almost every other delegation expressed concern with the automation of violence. The ICRC articulately explained that death by algorithm entails an ethically problematic change in the exercise of human agency and the use of force—a change that is dehumanising and runs contrary to the principle of humanity. This is why the ICRC, the CSKR, and a growing number of states—including Argentina, Costa Rica, El Salvador, Palestine, Panama, Peru, Philippines, Sierra Leone, and Uruguay—have called for the prohibition of any weapon system that uses target profiles for human beings.
The fact that certain states are more concerned about restrictions on their ability to develop weapons than they are about preventing harm to human beings is the core of the problem confronting disarmament and arms control efforts, as well as efforts to protect civilians and safeguard humanity. As the CSKR asked this past week, are we really comfortable with delegating more and more of our human functions to machines? Where do we draw the line, if we’re not willing to draw it in relation to decisions about who lives and who dies, or more broadly, over how violence is carried out? Chile flagged that we’re facing a scenario where a human being is just a spectator of the violence carried out in their name. How can we accept this, rather than accepting limitations on weapons development?
Fetishising technology versus protecting human life
We’ve already gone down this path of distancing humans from the violence they undertake, of course, with the use of armed drones and other remote warfare technologies. Yet despite the harms caused by these technologies—the deaths of civilians and destruction of civilian objects, the erosion of international law, the lowered threshold for the use of force, and their disproportionate use and testing against populations in the global south—a handful of states still fetishise this technology and the creation of even more autonomous violence.
Australia, Japan, India, Israel, Russia, and the United States in particular spoke at length about how wonderful autonomous weapons will be, how they’ll save lives and minimise accidents. Yet, as we pointed out during the informal consultations last month, this is not why these countries want to develop autonomous weapons. Autonomous weapons aren’t about saving lives; they are about projecting power and deploying increasingly unlimited methods of violence.
The states in favour of autonomous weapons argued that technological progress is a good thing and warned that prohibiting autonomy in weapon systems will hamper this development. India and a few others warned against “demonising” technologies and said we “shouldn’t be afraid of evolution”. France and Russia likewise cautioned against categorising technology as either good or bad, with France asserting that the trick is just to figure out how we can use autonomy in weapon systems in the “right way,” to reduce or eliminate harm to civilians.
But as those who study armed conflict and the failure of parties to conflict to protect civilians have said time and again, we can’t reduce human suffering by engaging in war or deploying weapons. “The idea that wars can be fought in a more humane and less violent manner has the paradoxical effect of hiding much of the pain and suffering caused,” note scholars Alex Edney-Browne and Thomas Gregory. The only way to reduce harm is to stop developing new means and methods of warfare—and to stop engaging in war. To redirect resources from weapons to the well-being of people and planet. To build up relationships of collaboration instead of competition.
Radical versus rational
Unfortunately, the militarily active states seeking autonomous weapons do not support this approach. Their ability to exercise violence is of vital importance to their projection of power in the world, and anything that might limit that power is perceived as “radical”. Reminiscent of Russia’s comments a few years ago that support for banning nuclear weapons was the stuff of “radical dreamers” who have “shot off to some other planet or outer space,” Israel argued that supporting a mandate to negotiate a legally binding instrument on autonomous weapons is a “radical path”.
However, one of the core definitions of radical is “going to the root or the source”. In this context, banning autonomous weapons can be seen as going to the source of the problem—the problem being the increasing abstraction, mechanisation, and automation of violence and oppression in our world, as it is exercised by multiple structures of power to impose and maintain inequality.
Radical also means a departure from the usual or customary. This, too, is exactly what we need. Where have decades of investment in weapons and war gotten us? The world is literally on fire, or flooding, depending on where you live. Billions of dollars are spent on weapons while people in most countries are struggling to stay housed and fed during a global pandemic. We need a radical departure from business as usual, or we are not going to survive.
But also, as Palestine noted, prohibiting autonomous weapons is not a radical path in the sense that Israel means it—as a slur suggesting that it is extreme or ill-conceived. Preventing the development of these weapons is not radical, it’s rational. Palestine pointed out that after all these years of deliberations and exchanges with experts in the CCW, we can’t say that we need to extend discussions indefinitely. We know the risks. We can see what’s coming, based on the lessons of history and the current use of relevant technologies by police, militaries, and other violent institutions. We know what we need to do to safeguard human life and dignity and all the norms and principles we’ve collectively built.
Reactive versus proactive
Which brings us to our final dichotomy. With autonomous weapons, we have a unique opportunity to be proactive instead of reactive. As Palestine, Portugal, and Ireland noted, disarmament and arms control efforts almost always have to react to what happens, but here we have the chance to prevent harm rather than simply respond to it.
This weekend marked 76 years since the United States dropped nuclear bombs on Hiroshima and Nagasaki. Since then, the world has experienced extraordinary suffering from nuclear violence, from uranium mining to bomb production, weapon testing, and radioactive waste storage, as well as the financial resources wasted and the human lives lost to the maintenance and modernisation of nuclear weapons and to the conflicts waged and tensions built in the name of preventing proliferation.
One wonders what would have happened if we had had the chance to prevent all of this. What if, before the atomic bomb was developed and used, the international community had the foresight and the opportunity to prohibit it? The harm that would have been spared by a preemptive prohibition is incalculable—especially the harm to Indigenous communities, to the land and water of our planet and many populations, and to workers, soldiers, and civilians exposed to radiation at various points in the nuclear chain.
We cannot undo this harm. We can only work now to prevent future harm by universalising the Treaty on the Prohibition of Nuclear Weapons and achieving the elimination of nuclear weapon programmes. But we do have a chance to prevent harm from new technologies of violence. We do not have to build autonomous weapons. We do not have to build any new weapons. Weapons development is a choice, not a necessity.
And to those who think this is a radical proposition, don’t forget: we did ban nuclear weapons.