
Presentation on the Legal and Humanitarian Implications of Bias in Military AI

On 27 October 2025, RCW Director Ray Acheson spoke at a First Committee side event at the United Nations hosted by the Stockholm International Peace Research Institute (SIPRI) and the Permanent Mission of Germany to the UN on the legal and humanitarian implications of bias in military AI.

27 October 2025

Thank you for convening this panel and inviting me to speak.

The SIPRI report does a helpful job of laying out what bias is, how it can manifest in military systems, and the legal implications of this.

In my remarks, I want to emphasise that while bias is human, it is not only replicated but amplified by AI.

Algorithms used in AI technologies are written by small groups of coders, who are in turn given instructions by an even smaller group of decision-makers. These technologies are largely being developed by people who have particular ideas about how society should operate, which include ideas about gender, racial identities, disabilities, and more. The people developing this technology also tend to have ideas about what forms of harm and lethality are permissible.

While some well-meaning organisations or governments have pushed for programmers to account for bias, the world’s largest AI producers are working in the opposite direction, to eliminate any accounting for bias. That is, they are not just actively building biased algorithms and technologies but are rejecting any attempts to mitigate such bias.

It is helpful to think through some specific examples of bias in AI and algorithms. Bias in data sets, design, and use can each lead to violence and harm against groups of people. We already know that facial recognition software struggles to recognise people of colour; that voice recognition struggles to respond to women’s voices or non-North American accents; that image-labelling software tags photos of anyone standing in a kitchen as women; and that people have been denied bail because a program decided that a woman of colour was more likely to reoffend than a white woman.[1] A key problem is that many “benchmark datasets” are biased—they are composed predominantly of male and lighter-skinned faces.[2]
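To see how such disparities are documented, audits like the Gender Shades study cited above compare a system’s error rates across demographic subgroups. The short sketch below, written in Python with invented numbers rather than data from any real system, illustrates only the basic calculation behind that kind of audit.

from collections import defaultdict

# Each record: (demographic subgroup, was the classifier's prediction correct?)
# These values are invented for illustration; they are not measurements of any real system.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

tallies = defaultdict(lambda: [0, 0])   # subgroup -> [correct, total]
for group, correct in results:
    tallies[group][0] += int(correct)
    tallies[group][1] += 1

# Report per-subgroup accuracy; large gaps between groups indicate biased performance.
for group, (correct, total) in tallies.items():
    print(f"{group}: accuracy {correct / total:.0%} ({correct} of {total})")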

But beyond the problem that the bias embedded in programming will translate into mistakes in identifying targets, there is also the risk that the machine’s bias would not be a mistake at all. It could be deliberately programmed to target people bearing certain “markers” or identities. Trans people have been marked for surveillance on the basis of the clothing they wear.[3] Predictive policing software relies on racial profiling, geographic locations, and socioeconomic status to determine criminality.[4]

We can look at existing targeting practices for armed drones as a warning about AI. The US military practice of “signature strikes” has resulted in thousands of civilian casualties in drone strikes. Armed drone attacks are generally conducted on the basis of “intelligence” collected from video feeds, email, social media, spy planes, and mobile phones. This information is analysed for patterns using algorithms.[5] People—individuals or groups—are then attacked on the basis of observed characteristics, with no substantial intelligence regarding actual identity or affiliations.[6] They are attacked based on “packages of information that become icons for killable bodies on the basis of behavior analysis and a logic of preemption.”[7]

The same risks apply to weapon systems using AI. If weapons are programmed to target and attack people using software and sensors, the risks of mistaken identity or unlawful attack run high. With AI systems, the “cultural dispositions” used in signature strikes could be programmed right into the machine. That is, targets—based on the “signatures” of human beings—will be written into algorithmic code, which the weapon system will then use to execute its mission with minimal or no human intervention or guidance. And this is also where tech bias comes in: even with some human oversight, there is the added bias of humans assuming that “tech knows best”.

An important case study for these various forms of bias is the use of the AI system Lavender by Israel to conduct its genocide in Gaza, which has led to the marking of tens of thousands of Palestinians as “terrorist” suspects who are then turned into “legitimate” targets for assassination. Investigative reporting has found that the only human supervision over the AI targeting systems in this case amounted to checking whether the selected target was male, a check that one source said took about 20 seconds and was essentially just a stamp of approval on the AI system’s choices.[8] It is important to note that this categorisation of all men as “terrorists” or militants is an act of gender-based violence.

These are all things that need to be considered in deliberations on military AI. And there are other problems with AI beyond bias, which fall outside the scope of this panel, including that AI hallucinates, that it has been devastating when used in the provision of social services or debt collection, and, of course, that AI is environmentally catastrophic. The work of tech workers, tech journalists, academics, and activists working on AI more broadly needs to be brought into this conversation.

So the question has to be asked: why do we not just preclude AI from military systems? Why do we have to weaponise everything? The risks are higher than any hypothetical positive benefits, for which there is no evidence thus far, only assertions. But the harm is well known across many sectors. States should seriously question why the integration of AI into military systems is being considered at all, and work to stop it.

For more information and recommendations, see WILPF’s Submission to the UN Secretary-General’s Report on Artificial Intelligence in the Military Domain.

Notes

[1] Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.

[2] Joy Buolamwini, “Response: Racial and Gender bias in Amazon Rekognition—Commercial AI System for Analyzing Faces,” Medium, 25 January 2019.

[3] Toby Beauchamp, “Artful Concealment and Strategic Visibility: Transgender Bodies and U.S. State Surveillance After 9/11,” Surveillance & Society 6, no. 4 (2009): 356–366.

[4] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; Tim Lau, “Predictive Policing Explained,” Brennan Center for Justice, 1 April 2020, https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained.

[5] For details of these processes, see Cora Currier, “The kill chain: the lethal bureaucracy behind Obama’s drone war,” The Intercept, 15 October 2015.

[6] Kevin Jon Heller, “‘One Hell of a Killing Machine’: Signature Strikes and International Law,” Journal of International Criminal Justice 11, no. 1 (2013): 89–119.

[7] Lauren Wilcox, “Embodying algorithmic war: Gender, race, and the posthuman in drone warfare,” Security Dialogue 48, no. 1 (2017): 6.

[8] Yuval Abraham, “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza,” +972 Magazine, 3 April 2024, https://www.972mag.com/lavender-ai-israeli-army-gaza.
