
CCW Report, Vol. 9, No. 6

Editorial: Constraining militarism and controlling weapons
23 September 2021


Ray Acheson | Women's International League for Peace and Freedom


On 15 September, UN High Commissioner for Human Rights Michelle Bachelet called for certain applications of artificial intelligence (AI) to be prohibited and for others to be subject to a moratorium, based on the risks they pose to human rights. This followed the publication by her office of a report that analyses how AI—including profiling, automated decision-making and other machine-learning technologies—affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

“We cannot afford to continue playing catch-up regarding AI—allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact,” argued Bachelet. “The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

This is a crucial step forward in protecting human rights from the grave risks posed by AI. The same urgency and action are imperative for autonomous weapon systems (AWS). The recognition of how AI can undermine human rights must also be considered with regard to the rights to life and dignity.

At the virtual conference hosted by Austria on 15–16 September, New Zealand’s Minister for Disarmament and Arms Control Phil Twyford noted, “If we don't create new binding rules on autonomous weapons, the risk is that we’ll see a more dangerous and unstable world; that we’ll undermine international humanitarian law.” This precautionary approach matches that urged by Bachelet for AI. It is absolutely essential for ensuring that meaningful human control is maintained over weapons and the use of force. As Austrian Foreign Minister Alexander Schallenberg and Gilles Carbonnier, Vice President of the International Committee of the Red Cross, argued at the Austrian conference, “death by algorithm” is unacceptable and must be prevented.

The urgency of action

The dangers posed by AWS for human rights, international humanitarian law, and peace and security have been discussed for years at the Convention on Certain Conventional Weapons (CCW), the Human Rights Council, and the UN General Assembly First Committee. We can already see the harms caused by other AI and machine-learning systems: bias and discrimination leading to incarceration, harassment, surveillance, denial of goods and services, refusal of entry at borders, misidentification of individuals, etc., as well as failures in software and sensors that lead to malfunctioning systems and misinterpretation of data. Weaponising this technology will inevitably lead to grave violations of human rights, human and environmental harm, and global violence.

The time to act against AWS is now, before they are developed and used. Autonomy in weapon systems is already increasing. We know that the handful of governments preventing progress on this issue at the CCW already have advanced capacities in designing, building, and even using these systems. But it is not yet too late to establish binding prohibitions and regulations against AWS. As Minister Twyford pointed out, not acting against these weapons now risks the development of a new political economy, in which these technologies are linked to powerful economic and military interests of violent countries. This will make these governments, and their military-industrial-tech complexes, even more resistant to the establishment of global rules and norms.

As journalist William M. Arkin writes in The Generals Have No Clothes, war persists because the apparatus of people, bases, and weapons has grown so vast that it can no longer be understood, much less controlled; it has become “a gigantic physical superstructure” that “sustains endless warfare.” The perpetual war, Arkin contends, is “a physical machine, and a larger truth, more powerful than whoever is president,” and the result has been “hidden and unintended consequences, provoking the other side, creating crisis, constraining change.”

Standing up for humanity by constraining militarism

Even prior to the advent of AWS, we are already facing a crisis of control when it comes to restrictions and restraints on weapons, the use of force, and military aggression. As Dr. Patricia Lewis noted in her remarks at Austria’s conference, certain states feel entitled to shirk their responsibilities and commitments and to act however they feel suits their own interests, rather than looking out for collective security and the good of humanity. This has posed a serious challenge to international law and norms that must be confronted. It is up to the so-called international community, if it wants to be an actual community, to stand up for the principles of humanity and the dictates of public conscience.

When it comes to the dangers posed by AWS, we cannot say we didn’t know. We cannot say, as many scientists did about the atomic bomb, that they didn’t realise the horrors they were unleashing upon the world until it was too late. Tech workers, scientists, roboticists, activists, and human rights organisations have warned for years now that this technology will harm humanity in a multitude of ways. And given what we know already about the political economy of war, it is never too early to act against investments in new weapons. It’s time to take a stand on what Minister Twyford described as “one of the great moral issues of our time,” and develop new legally binding rules prohibiting AWS.
