Fully Autonomous Weapons
Fully autonomous weapons are weapon systems that can select and fire upon targets on their own, without any human intervention. Such systems would assess the situational context on a battlefield and decide on the required attack based on the information they process.
Fully autonomous weapons would act on the basis of "artificial intelligence", created through the algorithms and programming built into the robot. Such software lacks every feature of human intelligence and human judgement that makes humans subject and accountable to rules and norms. The use of artificial intelligence in armed conflict poses a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.
Fully autonomous weapons are distinct from remote-controlled weapon systems such as drones, which are piloted remotely by a human; fully autonomous weapons would operate without human guidance after being programmed. Although weapons with full lethal autonomy have not yet been deployed, precursors with varying degrees of autonomy and lethality are currently in use. Several states support and fund research and development of fully autonomous weapons, among them China, Germany, India, Israel, the Republic of Korea, Russia, and the United Kingdom. Robotic systems with varying degrees of autonomy and lethality have already been deployed by the United States, the United Kingdom, Israel, and the Republic of Korea.
Ongoing research and development in the field of fully autonomous weapons has reached a critical stage, requiring in-depth reflection before such weapon systems are developed further. The debate on fully autonomous weapons raises the following fundamental ethical questions and questions of principle:
- Can the decision over life and death be left to a machine?
- Can fully autonomous weapons function in an ethically “correct” manner?
- Are machines capable of acting in accordance with international humanitarian law (IHL) and international human rights law (IHRL)?
- Are these weapon systems able to differentiate between combatants on the one hand and defenceless and/or uninvolved persons on the other?
- Can such systems evaluate the proportionality of attacks?
- Who can be held accountable?
These issues call into question whether human capabilities, such as assessing the international humanitarian law principles of proportionality and military necessity and distinguishing between civilians and combatants, can be transferred to a machine.
Other issues include:
Protection of civilians: It is questionable how a robot could be effectively programmed to avoid civilian casualties when humans themselves struggle to make such distinctions in today's inter-state conflict settings, which lack clear boundaries between a variety of armed groups and civilians. Distinguishing an active combatant from a civilian or an injured or surrendering soldier requires more than advanced sensory and processing capabilities; it would be extremely difficult for a robot to gauge human intention based on the interpretation of subtle cues such as tone of voice or body language.
Proportionality: In certain situations, military attacks are not carried out because of the risk of causing disproportionately high civilian harm. It is doubtful that a robotic system is capable of making such judgements.
Accountability: With an autonomous weapon system, no individual human can be held accountable for the system's actions in an armed conflict. Instead, responsibility is distributed across a larger, possibly unidentifiable group of people, perhaps including the programmer or the manufacturer of the robot.
Increasing the risk of war: As the UN Special Rapporteur on extrajudicial, summary or arbitrary executions pointed out in his report to the Human Rights Council, the removal of humans from the selection and execution of attacks on targets constitutes a critical moment in a technology that has been described as a "revolution in modern warfare". He urged states to think carefully about the implications of such weapon systems, noting that the technology makes states more likely to engage in armed conflict because of the reduced possibility of military casualties. Fully autonomous weapons could lower the threshold of war, especially in situations where the opposing side has no equivalent systems to deploy in response.
Cool calculators or tools of repression? Supporters of fully autonomous weapons argue that these systems would help overcome human emotions such as panic, fear, or anger, which lead to misjudgement and incorrect choices in stressful situations. However, opponents of the development of these weapon systems point out that this so-called advantage can become a massive risk to people who live under repressive state systems. Fully autonomous weapons could be used to oppress opponents without fear of protest, conscientious objection, or insurgency within state security forces. The dehumanisation of targets would be matched by the dehumanisation of attacks. Algorithms would create a perfect killing machine, stripped of the empathy, conscience, or emotion that might hold a human soldier back.
There are also widespread concerns about programming human bias into these machines. A machine could be programmed with prejudice on the basis of race, sex, gender identity, sexual orientation, socioeconomic status, or ability.
Proliferation: Finally, concerns have been expressed that fully autonomous weapon systems could fall into the hands of unauthorised actors.
Throughout history, we have seen that weapons symbolise power. The association of weapons with power comes from a very particular—and very dominant—understanding of masculinity. This is a masculinity in which ideas like strength, courage, and protection are equated with violence. It is a masculinity in which the capacity and willingness to use weapons, engage in combat, and kill other human beings is seen as essential to being “a real man”.
Fully autonomous weapons are being developed in the context of the aforementioned norms of gender and power. Scholars of gender and technology have long argued that gender relations are "materialised in technology". That is, the meaning and character (the norms) of masculinity and femininity are "embedded" in machines. These scholars argue that technological products bear their creators' mark. If technology is developed and utilised primarily by men operating within a framework of violent masculinity, their creations will be instilled with that framework of thought, knowledge, language, and interpretation.
Fully autonomous weapons, as tools of violence and of war, will likely have specific characteristics that may simultaneously reinforce and undermine hegemonic gender norms. The use of fully autonomous weapons can lead to gender-based violence against men. In conflict, civilian men are often targeted—or counted in casualty recordings—as militants only because they are men of a certain age. While men are not necessarily targeted solely because they are men, taking sex as a key signifier of identity and exacting harm on that basis constitutes gender-based violence. That is to say, if someone uses sex as a basis for assessing whether a person may be targeted, whether an attack is allowed (are only men present?), or what the impact of an attack was (i.e. during casualty recording), then they are using the sex of that person not as the motivation for the attack but as a proxy for identifying militants, or "acceptable targets". This is gender-based violence. It erodes the protection that civilians should be afforded in conflict and violates many human rights, including the rights to life and due process.
It also has broader implications for the reinforcement of gender norms, including violent masculinity. Assuming all military-age men to be potential or actual militants or combatants entrenches the idea that men are violent and thus targetable. This devalues male life, suggesting that men are relatively more expendable than women, and it increases the vulnerability of men, exacerbating other risks adult civilian men face such as forced recruitment, arbitrary detention, and summary execution.
The gendered culture of violent masculinities that surrounds the development of autonomous weapons, and that is likely to be embedded within the technology and its use, will create new challenges for preventing violence, protecting civilians, and breaking down gender essentialisms and discrimination. Understanding how autonomous weapons are likely to be perceived in a gendered way by their developers, operators, and victims is crucial to developing policies that can help break the cycle of violence. This includes understanding that the operation of weapons without meaningful human control, weapons programmed to target and kill based on pre-programmed algorithms of who is considered to pose a threat, used without consent in foreign lands or in the streets of local cities, will result in civilian casualties, psychological harm, and the destruction of civilian infrastructure. This in turn will provoke a violent masculine response from affected communities, reinforcing gender inequalities and oppressions.
During the Human Rights Council session in April 2013, 24 states attended the presentation of the report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, and discussed the issue of autonomous weapons. Participating states expressed concerns regarding the use of fully autonomous weapons and indicated an interest in continuing discussions.
At the meeting of states parties to the Convention on Certain Conventional Weapons (CCW) in November 2013, governments decided to convene a four-day meeting of experts on the topic of fully autonomous weapons. The CCW, adopted in 1980, regulates certain conventional weapons, that is, weapons other than chemical, nuclear, or biological weapons.
Since 2014, the states parties to the CCW have discussed how to address the threat of killer robots. In 2016, the Fifth Review Conference of the CCW decided to begin a formal process in 2017 to discuss autonomous weapon systems. The meetings since have focused on building a common understanding of the meaning of human control and the risks of fully autonomous weapons. Most governments, along with the International Committee of the Red Cross (ICRC), UN Secretary-General António Guterres, and the Campaign to Stop Killer Robots, have concluded that humans must maintain control over the programming, development, activation, and/or operational phases of a weapon system.
A growing number of states (currently 28) are calling for a pre-emptive ban on killer robots. Furthermore, the Non-Aligned Movement, the largest bloc of states operating in the UN, has called for a legally binding instrument stipulating prohibitions and regulations of such weapons; Austria, Brazil, and Chile support the negotiation of "a legally binding instrument to ensure meaningful human control over the critical functions" of weapon systems. A few other states have expressed interest in non-legally binding mechanisms, such as a political declaration proposed by France and Germany.
Additional support for a prohibition has also come from thousands of scientists and artificial intelligence experts. In July 2018, they issued a pledge not to assist with the development or use of fully autonomous weapons. This comes on the heels of broader activism from the scientific and technology community against the misuse of technology. For example, 4,000 Google employees signed a letter demanding their company cancel its Project Maven contract with the Pentagon, which was geared toward "improving" drone strikes through artificial intelligence, and 1,200 academics announced their support for the tech workers. In response, Google ended its work on Project Maven. Furthermore, more than 160 faith leaders and more than 20 Nobel Peace Prize laureates back the ban.
However, a tiny handful of states oppose legally binding or political responses to the threats posed by autonomous weapons. The United States has argued that governments and civil society must not "stigmatise new technologies" or set new international standards, but should instead work to ensure the "responsible use of weapons". Together with Australia, Israel, Russia, and South Korea, the United States spent a CCW meeting in August 2018 arguing that any concrete action is "premature" and demanding that the CCW spend the next year exploring the potential "benefits" of autonomous weapon systems. The CCW operates on the basis of consensus, which is interpreted to require unanimity. This means that these five countries were able to block any moves to stop the development of these weapons.
Despite the vast majority of states, AI experts, academics, and activists agreeing that fully autonomous weapon systems must never be developed or used, the 2018 CCW Meeting of High Contracting Parties, held in November, merely decided to continue discussions in the format of two separate Group of Governmental Experts (GGE) sessions in 2019.
In April 2013, a group of non-governmental organisations including WILPF launched the Campaign to Stop Killer Robots in London. The campaign has established a coordinated civil society call for a ban on the development, production, and use of fully autonomous weapon systems and seeks to address the challenges these weapons pose to civilians and to international law. The campaign builds on previous experience from efforts to ban landmines, cluster munitions, and blinding lasers.
The campaign emphasises the ethical implications of empowering machines to decide over the life and death of human beings. It urges states to negotiate a treaty that pre-emptively prohibits the development, production, and deployment of fully autonomous weapons. The campaign stresses that this matter must be treated as an urgent concern, especially from a humanitarian perspective.
Beyond prohibition through an international treaty, the campaign also calls for prohibition at the national level through national laws and other policy measures.
The campaign has grown to over 100 member organisations calling for a ban on fully autonomous weapon systems and is mobilising an ever-growing number of people to join the campaign's efforts to retain human control over violence.
Read our coverage of UN meetings on autonomous weapons.
International Committee of the Red Cross, Artificial intelligence and machine learning in armed conflict: A human-centred approach, 6 June 2019
Frank Slijper, Alice Beck, and Daan Kayser, State of AI—Artificial intelligence, the military and increasingly autonomous weapons, PAX, April 2019
Melissa Chan, The rise of the killer robots – and the two women fighting back, The Guardian, 8 April 2019
Dr. Emilia Javorsky, Ray Acheson, Rasha Abdul Rahim, and Bonnie Docherty, Why Ban Lethal Autonomous Weapons, Future of Life Institute Podcast, 2 April 2019
Erin Hunt, Why ‘killer robots’ are neither feminist nor ethical, OpenCanada.org, 22 January 2019
Campaign to Stop Killer Robots, Fragile diplomatic talks on killer robots limp forward: Public pressure is essential to nations retaining human control over the use of force, 26 November 2018
PAX, Crunch Time: European positions on lethal autonomous weapon systems, November 2018
Campaign to Stop Killer Robots France, Why France must oppose the development of killer robots, November 2018
Ray Acheson, To Preserve Humanity, We Must Ban Killer Robots, The Nation, 1 October 2018
WILPF, WILPF statement to the UN Group of Governmental Experts on autonomous weapons, 28 August 2018
WILPF, WILPF statement to the UN Group of Governmental Experts on autonomous weapons, 13 April 2018
Reaching Critical Will, Group of governmental experts on autonomous weapons concludes its work for 2017 with momentum building for a new treaty, 20 November 2017
Stockholm International Peace Research Institute (SIPRI), Mapping the Development of Autonomy in Weapon Systems, November 2017
PAX, Where to draw the line: Increasing Autonomy in Weapon Systems, Technology and Trends, November 2017
PAX, Keeping Control: European Positions on Lethal Autonomous Weapons Systems, November 2017
Reaching Critical Will, States agree to a formal process on autonomous weapons as Fifth CCW Review Conference ends, 19 December 2016
International Committee of the Red Cross (ICRC), Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons, August 2016
Reaching Critical Will, UN agrees to more talks on autonomous weapons as support for prohibition grows, 18 April 2016
WILPF, WILPF Statement to the 2016 CCW meetings of experts on lethal autonomous weapon systems, 12 April 2016
Reaching Critical Will, Bombing, burning, and killer robots: report from the 2015 CCW meeting of high contracting parties, 13 November 2015
WILPF, WILPF Statement to the 2015 meetings of experts on lethal autonomous weapon systems, 13 April 2015
Reaching Critical Will, Killer robots in the 26th Human Rights Council, 30 June 2014
Reaching Critical Will, Autonomous weapons firmly on international agenda, 18 May 2014
WILPF, WILPF Statement to the CCW Meetings of Experts on Lethal Autonomous Weapon Systems, 13 May 2014
Human Rights Watch, Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban, 2016
Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots, April 2015
Article 36, Killing by Machine, April 2015
Human Rights Watch, Shaking the Foundations: The Human Rights Implications of Killer Robots, 12 May 2014
UNIDIR, Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, April 2014
PAX, Deadly Decisions: 8 objections to killer robots, February 2014
Reaching Critical Will, CCW adopts mandate to discuss killer robots, 15 November 2013
Article 36, UK says killer robots will not meet requirements of international law, 18 June 2013
Reaching Critical Will, Growing momentum to prevent killer robots, 30 May 2013
Campaign to Stop Killer Robots, Urgent Action Needed to Ban Fully Autonomous Weapons: Non-governmental organizations convene to launch Campaign to Stop Killer Robots, 23 April 2013
Bonnie Docherty, The Trouble with Killer Robots: Why we need to ban fully autonomous weapons systems, before it's too late, Foreign Policy, 19 November 2012
Human Rights Watch, Ban 'Killer Robots' Before It's Too Late: Fully autonomous weapons would increase danger to civilians, 19 November 2012
Christof Heyns, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, 9 April 2013
Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, Ashgate Publishing, 2009
For more up-to-date resources, see Campaign to Stop Killer Robots - Recommended reading