CCW Report, Vol. 6, No. 7
A simple premise: programmes should not end lives
28 August 2018
Ray Acheson | Reaching Critical Will of WILPF
The UN Group of Governmental Experts (GGE) on “lethal” autonomous weapon systems (AWS) resumed its work yesterday, opening with a panel discussion on potential military applications of related technologies. It’s arguably a bit late in the game for more expert panels, which have been a feature of the past several years of work on this issue in the context of the Convention on Certain Conventional Weapons (CCW). The objective of this week’s meeting is to recommend to the CCW high contracting parties what action to take on AWS next year. The majority of states participating in this process have indicated a preference for the negotiation of new international law, with at least 26 countries calling for a ban. That said, Monday’s opening panel at least served to amplify the need for an urgent legal response to AWS, by illuminating once again the challenges and unacceptable risks posed by these potential weapons.
As a key point of discussion, for example, the conversation demonstrated the risks posed by the inherent unpredictability and uncertainty of AWS. Knut Dörmann, chief legal officer of the International Committee of the Red Cross (ICRC), argued that if machines can self-initiate an attack, this introduces uncertainty in terms of the location, timing, and nature of this attack. This implies significant risk that the machine will not be able to comply with international humanitarian law (IHL), e.g. in terms of distinction, proportionality, or precaution. A lieutenant colonel from West Point Military Academy tried to argue that the whole point of AWS is to help militaries adapt to uncertain environments. He suggested that such systems would make more accurate “decisions” than humans in less time. But as was clear from the ensuing discussion, this is not a widely held view.
Furthermore, as other panelists and government delegations raised, AWS pose significant challenges when it comes to preventing system errors; dealing with unpredictability of machines and the environments in which they may operate; and ensuring the security of AWS against cyber attacks. National-level weapon review processes have been previously suggested as a possible solution to address these concerns, but critics have continued to demonstrate that such reviews are not sufficient. Reviews must work to ensure human control over weapon systems, said Dörmann, but do not constitute human control.
At the end of the day, most states, international organisations, and civil society groups participating in these discussions have already agreed that the priority in our work must be preserving human life and dignity. International humanitarian law and human rights law have been at the forefront of the efforts to initiate discussions on AWS, and for most delegations have provided the essential backbone of their perspectives and positions. In addition, ethics and morality arguably provide the most compelling arguments against the development and use of AWS. “Decisions that end lives should not be made by a program,” tweeted Amr Gaber, an engineer and a volunteer with Tech Workers Coalition. “There is no design or ethics cleverness that will change this fact.”
The need for meaningful human control over the use of force, over decisions about life and death, or over the “critical functions” of selecting and engaging targets, has emerged as a point of broad consensus amongst the overwhelming majority of GGE participants. As the Austrian delegation argued, this is what states involved in this process should focus on now, rather than the technicalities of autonomy or the technology. “Human control is not an alternative,” said Ambassador Thomas Hajnoczi. “It is a must if we want to stay within established legal and ethical frameworks.” It is a requirement for conforming to IHL, he argued, and there are legal and moral prohibitions on delegating the authority to kill to machines.
Other state delegations clearly agreed that the concept of meaningful human control is where efforts must be focused. During yesterday’s discussion of characteristics of AWS, several states outlined their interest in ensuring the involvement of human judgment in attack decisions, the ability to cancel attacks, and accountability of operators. Such elements are essential to ensure that the use of force remains with humans rather than machines—an objective that is clearly reflected in the majority of government positions elaborated through the past few years of CCW discussions, as well as in the UN Secretary-General’s new disarmament agenda released earlier this year. This should provide the focus for moving forward with deliberations this week, and should inform a recommendation for the development of new international law next year. After five years of discussions about definitions, characteristics, and possible approaches, it’s time for the CCW to start negotiating a treaty banning autonomous weapons.