
CCW Report, Vol. 9, No. 1

Editorial: The imperative of preventing autonomous violence
13 July 2021


By Ray Acheson


From 28 June to 2 July, the Chair of the Group of Governmental Experts (GGE) on “lethal autonomous weapon systems” convened a series of informal consultations to discuss written submissions that participants of the GGE have made over the past few months. A formal session of the GGE could not be held due to the objections of Russia, which over the past year has refused to permit any meetings that are not fully in-person. Instead, the informal consultations were held online for a few hours each day, with participants discussing recommendations for a “normative and operational framework” on autonomous weapons. The mandate of the current GGE is to “explore and agree on possible recommendations on options related to emerging technologies in the area of lethal autonomous weapons systems,” which it is to present to the next Convention on Certain Conventional Weapons (CCW) Review Conference, scheduled for December 2021.

The CCW has been discussing autonomous weapons since 2014, building on earlier deliberations in the Human Rights Council. When in 2019 the CCW adopted this rather strained agreement to “clarify, consider, and develop” aspects of a normative and operational framework over the following two years, most activists and many diplomats were disappointed with the lack of ambition and urgency. Technological developments leading to increasing autonomy in weapon systems have continued rapidly, with systems of varying autonomy already being developed and even deployed. Meanwhile, the COVID-19 pandemic and government intransigence have prevented meaningful progress towards international regulation or prohibition of such systems.

It has for a long time felt as if discussions on autonomous weapons are taking place in a vacuum, where the reality of tech development and use—and the harms being inflicted by weapons and warfare more broadly—has little or no bearing on intergovernmental discussions. At the core of the GGE conversations lies an apparent assumption that autonomous weapons will be developed and used. The focus of most participants is either on what should be done to limit the damage they will inevitably inflict, or how to ensure as few limitations as possible to allow “freedom of action” on the battlefield—the priority interest of the usual suspects of the United States, Russia, Israel, and other heavily militarised countries.

From the outset, most civil society groups—especially those affiliated with the Campaign to Stop Killer Robots—have opposed the development of autonomous weapons. An increasing number of governments are also calling for some kind of legal instrument to prohibit and/or regulate certain kinds of weapons, certain levels of autonomy in weapons, and/or autonomy in certain functions of a weapon system. This is very welcome, and absolutely necessary. Yet even as calls for legal action accelerate, there is still a sense of inevitability about the application of emerging technologies like artificial intelligence and algorithms to weapons, and an overall lack of urgency to prevent this.

Context is key

It’s particularly difficult to observe these conversations while a virus that has killed millions of people around the world and sent economies into tailspins continues its rampage amidst a global vaccine apartheid. While heatwaves kill people and burn entire forests, towns, and ecosystems to the ground—heatwaves resulting from unending investments in fossil fuels and blind faith in “technological fixes” to the climate crisis, coupled with a refusal to change the way we extract and use resources and pollute our planet. While the depth and breadth of police brutality against Black and other people of colour and genocidal intentions against Indigenous populations become ever more obvious and less and less deniable.

Instead of confronting these challenges—of economic and social inequality; of displacement from colonialism, capitalism, conflict, and climate change; of racism, sexism, and other intersectional oppressions; of the climate crisis and resource extraction and environmental devastation—we are talking about building machines that will be programmed with sensors and software to kill and destroy.

It’s impossible not to consider autonomous weapons in this broader context. In fact, autonomous weapons are instrumental within this broader context. They are not being created out of a benevolent intention to “improve precision” and better protect civilians from harm. If that were really the objective of the countries defending the development of autonomous weapons, they would stop using explosive weapons in populated areas, they would stop stockpiling and modernising nuclear weapons, they would stop selling weapons to zones of conflict, they would stop building their economies around weapons production and trade and use.

No, civilian well-being is not the objective of the governments talking about how valuable autonomous weapons will be to saving lives. When we look at the context within which these weapons and related technologies are being developed, we can see them as direct responses to ever-growing inequality, oppression, brutality, and extraction. These weapons are for maintaining the privilege and power of some over the majority—over people on the move, displaced by violence, poverty, and environmental degradation; over people that will be incarcerated and caged instead of provided for through investments in social and economic well-being; over anyone that the possessors of these weapons determine to be threats to their power, or whomever they determine to be simply expendable.

As WILPF, Amnesty International, Article 36, the International Committee for Robot Arms Control, and other civil society groups have pointed out, the development of autonomous weapon systems must also be considered in the context of the development of other emerging autonomous and artificial intelligence (AI) technologies, which constitute some of the main building blocks of an autonomous weapon but are finding use in other contexts as well. Biometric data collection; facial, voice, gait, and cardiac recognition; predictive policing software; tools of surveillance; mechanisms to categorise and sort human beings—all are increasingly being used by militaries and police globally. They are being used in drone strike operations, for border and immigration “enforcement,” and to predict “crime” and arrest “criminals”. We can see how, time and again, governments, militaries, and police forces use advanced technologies for violence and control. We can see the trajectory of these developments and the world they are actively constructing.

“Weapons for peace”

A handful of governments are committed to this path. Australia, France, India, Israel, Russia, Turkey, the United States, and a few others, in their interventions to the consultations or in written submissions, have made it clear that they support the development of autonomous weapons and as few limitations as possible on them. These formulations take different shapes, but generally they reject the prohibition of all but some extreme forms of autonomy—like weapons designing themselves (!)—and prioritise the alleged “benefits” of autonomous weapons over their very real risks.

As if the GGE’s acronym for this topic—LAWS, for lethal autonomous weapon systems—were not unfortunate enough, France has now suggested breaking this down into two categories—FLAWS, for fully lethal autonomous weapon systems (the name speaks for itself); and PALWS, partially autonomous lethal weapon systems. France and others agree that certain systems—FLAWS—should be prohibited. But they define these weapons as basically those that would operate without any human interaction at all, and weapons that might be built by other autonomous machines. Everything else would be considered PALWS, and while some of these systems may require regulation, they are not only generally permissible but are even considered welcome advancements for military precision and the protection of civilians.

This assumption that technology will solve the problem of people being harmed by the use of weapons is highly problematic. Whatever system we are talking about, we are talking about a weapon system. Weapons are designed to harm—either to kill or incapacitate human beings or to destroy infrastructure, or in many cases, both simultaneously. Talking about designing weapons to save lives is like talking about designing new ways to extract fossil fuels to stop climate change. Of course, that too is being pursued, and it is exactly why we have wildfires and droughts and flooding at the scale we are currently facing. Weapons cannot save lives. Only policies that prevent the use of weapons—that prevent conflict, that centre demilitarisation and disarmament, and that promote cooperation and solidarity instead of tension and violence—can save lives.

If we look carefully at the formulations proposed by the governments supporting autonomous weapons, it’s clear that their framework is intended to be as permissive as possible to the development and use of whatever technologies of violence they, at the national level, decide they want in their arsenals. While they contend that all weapons will of course be designed by human beings, as if this is some great concession, they make it sound fanciful that there could be any other option. But once you’re on the path of increasingly permissive and expansive delegations of tasks to machines, how long can you hold any particular line?

A better approach—from a legal, political, technical, security, and moral standpoint—is to draw a clear line now around what is unacceptable and to prohibit autonomous weapon systems outright. Rather than disingenuous and never-ending conversations about what might or might not constitute autonomy or what level of human control is necessary for it to be considered either human or control, we should just stop the pursuit of autonomous weapons now.

Prohibiting autonomous targeting of and attacks on human beings

We should, first off, prohibit autonomy in antipersonnel weapons. Anything built to target human beings will reduce human beings to ones and zeros. A machine built to kill people will rely on data sets to sort and categorise who should be killed and who should not. It will use sensors and software to “decide” when and whom to kill. Based on what we already know about algorithmic bias, AI misidentifications, and the intentional “datafication” and categorisation of people of certain genders, sexual orientations, races, or religions as either undesirable or inherent threats, we cannot allow these technologies to be used in weapon systems.

Despite the purported confusion of some delegations about what this would mean in practice, we already have distinctions in international law between weapons designed to target objects and those designed to target humans using sensors. As the civil society group Article 36 pointed out during the consultations, the CCW’s own Amended Protocol II makes a distinction between antipersonnel mines and “MOTAPM”—yet another delightful acronym that means mines other than antipersonnel mines. While antipersonnel mines and MOTAPM can work in the same or similar ways, the question is what the weapon is applying force against: people or objects. One weapon is prohibited, the other is regulated. At a bare minimum, we need to do the same with autonomous weapons or weapon systems using AI and other autonomous technologies.

Antipersonnel mines, which use sensors to target human beings, have caused unconscionable harm to so many people globally, and are still causing harm today. Most governments support an outright prohibition of these systems and have invested heavily in demining and victim assistance to grapple with the persistent challenges posed by these weapons. We could choose not to go there again, Article 36 urged. We could draw the line now, before anyone gets hurt. Otherwise, existing protections for civilians may be weakened and pulled out of shape as technology comes to recalibrate how we understand who can be identified as a combatant, or how human beings can be targeted.

There is also a clear moral dimension to prohibiting antipersonnel weapons with autonomy. Reducing people to data points and applying force to them based on algorithms means that we are objectifying human beings. “Datafication can be deadly,” as Amnesty International warned. WILPF has written about this in relation to gender- and race-based violence and patriarchal norms. At the consultations, Canada noted that there is “a growing perspective that fully autonomous weapons systems would not be consistent with a feminist foreign policy nor with the Women, Peace and Security agenda.” (Unfortunately, even though the Canadian foreign minister has a governmental mandate to pursue the prohibition of autonomous weapon systems, Canada seems to be aligning itself with the positions of the countries developing these systems.)

It is not just civil society that’s concerned with the datafication of death and destruction. Many government delegations, and the International Committee of the Red Cross, have voiced objections to allowing weapons to use sensors and software to identify, track, target, and attack human beings. The Philippines argued it’s time to stop treating autonomous weapons as just another conventional weapon—because they are not. We’re talking about prohibiting machines from attacking human beings. This is about human dignity and morality, not about whether we can or cannot programme an algorithm to somehow respect international humanitarian law. Austria also pointed out that our thinking needs to shift from looking at autonomous weapon systems as a weapons category to thinking instead about the applications of the technologies that will be used inside these systems, and about their implications for law, ethics, morality, dignity, and security. We’re talking about human beings losing control over armed conflict, Austria warned. This is a security concern, and a moral one.

To this end, Brazil, Chile, and Mexico have jointly called for the prohibition of autonomous weapon systems “that cannot be controlled by humans, therefore subject to cognitive and epistemological limitations, as well as algorithm bias,” as well as systems “whose programming might remove human control over critical functions related to the use of force,” and several other conditions. The International Committee of the Red Cross has likewise recommended that the use of autonomous weapons to target human beings, and also “unpredictable” autonomous weapon systems, should be prohibited. The Campaign to Stop Killer Robots takes a similar position, calling for prohibition of antipersonnel weapons and of systems that can’t be meaningfully controlled, and for positive obligations to ensure meaningful human control over all aspects of a weapon system and the use of force. Many other joint and national proposals have called for prohibitions, regulations, and positive obligations in various formulations (see the report on the consultations for details).

The broader risks of mechanised violence

Beyond prohibiting weapons that automatically or autonomously identify, select, track, and attack human beings, however, we need to think carefully about what we’re doing by allowing autonomy and artificial intelligence in weapons at all. Regardless of what a system is attacking, or what it’s designed or programmed to attack, we’re talking about handing over violence to machines. This is already happening—we know some weapon systems already use AI and sensors and other emerging technology to track and intercept missiles or to help select targets for drone strikes. No one seems to want to talk about prohibiting or even really regulating this technology, and this is dangerous. We’re already on the slippery slope.

Prohibiting autonomous antipersonnel weapons is essential, but it is insufficient on its own to prevent harm to people or planet from mechanised and automated violence. As several delegations have consistently warned throughout the GGE discussions, autonomous weapon systems, whatever their parameters, are likely to lower the threshold for the use of force—following the trajectory of armed drones. They are most likely to be used by countries of the global north against those of the global south. They are weapons of inequality and injustice that will further entrench the inequalities and injustices that haunt our current world order, with an extra twist of dehumanisation through automation.

Throughout human history, we’ve devoted a great deal of ingenuity to figuring out how to kill each other more efficiently rather than addressing the problems that lead to confrontations and conflicts in the first place. Instead of continuing down this road, which has resulted in our planet literally being on fire and war economies dictating foreign policy, we must change course. We don’t have unlimited time—we know what’s coming. We can either build weapons to protect the privileged and the powerful, or we can start investing in real solutions that would render weapons unnecessary: degrowth economic and environmental policies, decolonisation and redistribution of wealth and land, demilitarisation and disarmament, and other initiatives for conflict prevention and global equality, care, and peace. Building our own dystopia is not inevitable, but we need to take action to build a better world, now.
