
WILPF makes submission to UN Secretary-General's report on autonomous weapon systems

20 May 2024

In 2023, the UN General Assembly adopted resolution 78/241, which requested the UN Secretary-General to seek the views of states, international organisations, and civil society on autonomous weapon systems (AWS) for a report to be published in 2024. WILPF made a submission to this report that consolidates more in-depth analysis from Reaching Critical Will’s papers on AWS.

WILPF's submission contains the following concerns and recommendations:

International peace and security

The use of force has already become too disengaged from human involvement, through armed drones and weapons operating with artificial intelligence (AI) or autonomous features. AWS further abstract violence from human beings. Algorithms create a perfect killing machine, stripped of empathy, conscience, emotion, judgement, or understanding of human life. AWS would not hesitate to act; they would not take into account extenuating circumstances, nor challenge their deployment or operational mandate. They would simply do as they have been programmed to do; if that programming includes massacring everyone in a city, they would do so without hesitation.

AWS risk lowering the threshold for war. They present a perception of “low risk” and “low cost” to the military deploying the weapon. This perception widens the scope for deploying weapons into situations, and for carrying out tasks, that might otherwise not be considered possible. Having an amoral algorithm determine when to use force means that we will likely see more conflict and killing, not less.

As seen with armed drones, remote-controlled weapons have made war less “costly” to the user of the weapon. Operators do not face immediate retaliation for acts of violence. While this is attractive to militaries that do not have to risk the lives of their soldiers, it raises the cost of war for everyone else. AWS would likely be unleashed upon populations that might not be able to detect their imminent attack and might have no equivalent means with which to fight back. Thus the burden of risk and harm is pushed onto the rest of the world.

War profiteering and global asymmetries

New weapons lead to new war profiteering. The production and proliferation of weapons mean profits for corporate CEOs and shareholders. Corporations will seek to make money from the development and use of these weapons, and high-tech countries will use autonomous weapons to oppress and occupy others.

Countries of the Global South may not be the ones to develop and use AWS, but they will likely become the battlegrounds for the testing and deployment of these weapons. It will be the rich countries using these weapons against the poor, and the rich within countries using them against their own poor, through policing and internal oppression.

Human rights abuses

Existing military and policing technologies that use AI devalue and dehumanise people, and lead to violations of human rights and international law. AWS will exacerbate this further.

AWS could be programmed to commit acts of sexual violence. Some people who support the development of killer robots have argued that these weapons will be better than human soldiers because they will not rape. But just as sexual violence in conflict is ordered by states and by armed groups using human soldiers, an AWS could be programmed to rape. It is also important to consider the broader culture of rape in relation to weapons and war. Sexual violence is used as a weapon in conflict, and the risk of this kind of violence is also heightened during and after conflict. War destabilises communities and exacerbates already existing gender inequalities and oppression of women, LGBTQ+ people, and others who do not conform to societies’ gender norms.

AWS will also facilitate gender-based violence, including against men, by exacerbating policies and practices that count all cisgender men as militants. In armed conflict, civilian men are often targeted (or counted in casualty recordings) as militants only because they are men of a certain age. Exacting harm on the basis of sex or gender constitutes gender-based violence. This erodes the protection that civilians should be afforded in conflict and violates many human rights, including the rights to life and due process. It also has broader implications for the reinforcement of gender norms. Assuming all military-age men to be potential or actual militants entrenches the idea that men are violent. This devalues men’s lives and increases their vulnerability, exacerbating other risks adult civilian men face, such as forced recruitment, arbitrary detention, and summary execution.

As demonstrated by Israel’s use of AI technologies that generate target lists (Lavender) and track target locations (Where’s Daddy?), as well as the use of predictive policing software and biometric border systems in the United States and other countries, AI-enabled technology lends itself to this kind of gender-based violence. Reportedly, the only human checks on Lavender’s kill lists are to ensure the targets are men.

Autonomous and AI technologies in weapon systems will further enable police and militaries to target people based solely on their gender, appearance, location, or behaviours, defining whole categories of people as militants, terrorists, or criminals without any due process. AWS could also be deliberately programmed to target people based on gender, race, socioeconomic status, (dis)ability, and sexual orientation. Just as AWS will lower the threshold for armed conflict, they will also lower the threshold for state violence against people. Police forces will be able to send machines to violently suppress protests and to repress certain categories of people, exacerbating discrimination.

In addition, the data sets used to train these systems, and the choices made in training with that data, introduce bias. Parameters, boundaries, labels, and thresholds selected in the design phase necessarily exclude and include. This both creates bias and replicates existing bias within data and social structures. We already see examples in related technologies. Facial recognition software struggles to recognise people of colour; voice recognition struggles to respond to women’s voices or non-North American accents; images of anyone standing in a kitchen are labelled as women; people are denied bail because a program deemed a woman of colour more likely to reoffend than a white woman; trans people are surveilled on the basis of the clothing they wear. In an autonomous weapon system, there would be no counteracting human intervention to catch such biases.
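To make this mechanism concrete, here is a minimal, hypothetical sketch (not from WILPF’s submission) of how a single design-phase choice can replicate bias already present in the data: a fixed decision threshold is applied to risk scores learned from skewed historical records, producing very different false-positive rates for two groups with the same true base rate. The groups, scores, and threshold are invented purely for illustration.

```python
import random

random.seed(0)

def synthetic_scores(group, n=10_000):
    """Simulate risk scores a model might assign after training on skewed data.

    Hypothetical assumption: historical records over-report "risk" for group B,
    so scores for non-risky members of group B sit higher on average.
    """
    scores = []
    for _ in range(n):
        truly_risky = random.random() < 0.10            # identical true base rate
        bias_shift = 0.15 if group == "B" else 0.0      # skew inherited from the data
        score = random.gauss(0.7 if truly_risky else 0.3, 0.1) + bias_shift
        scores.append((score, truly_risky))
    return scores

THRESHOLD = 0.5  # one threshold, chosen at design time, applied to everyone

for group in ("A", "B"):
    data = synthetic_scores(group)
    false_positives = sum(1 for s, risky in data if s >= THRESHOLD and not risky)
    innocents = sum(1 for _, risky in data if not risky)
    print(f"group {group}: false-positive rate = {false_positives / innocents:.1%}")

# Typical output: group A is wrongly flagged roughly 2% of the time, group B
# roughly 30%, even though the true base rate is identical. The threshold
# encodes the skew in the data, and nothing in the loop provides the human
# review that could catch it.
```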

Recommendations

The best solution is a legally binding international treaty to prohibit the development, production, and use of AWS.

Technology companies, tech workers, scientists, engineers, academics, and others involved in developing AI or robotics should pledge to never contribute to the development of AWS.

Financial institutions such as banks and pension funds should pledge not to invest money in the development or manufacture of autonomous weapon systems.

States, civil society groups, activists, tech workers, and others should also work to prevent AI-enabled technologies from being used by militaries and police forces. It is not just AWS that are problematic, but the overall automation of violence, including sensor-derived target detection, algorithmic bias, and software-generated kill lists. These must not be normalised; they must be prevented.

AWS are a product of an arms race that derives from the global system of militarism and war profiteering. This system fuels armed conflict and armed violence, human rights abuses, and other violations of international law. It is therefore important not just to ban AWS, but to dismantle the structures of state violence as a whole.