
CCW Report, Vol. 5, No. 2

Confronting reality: we can build autonomous weapons but we can't make them smart


Ray Acheson
14 November 2017


On the first day of the group of governmental experts (GGE) on lethal autonomous weapon systems (LAWS), government delegations and expert panelists outlined many of the layered and complex challenges posed by autonomous weapons. The warnings from the afternoon panel on the technological dimensions of LAWS were clear: while we are on the cusp of being able to engineer autonomous weapons, we are not able to code into such weapons the human judgments, norms, and laws necessary to adequately control their behaviour or ensure compliance with international humanitarian or human rights law. The nuance and abstraction of human relations cannot be incorporated into systems that exist now or in the foreseeable future, but we can build “stupid” autonomous weapon systems today. As the development of autonomy in weapons continues outside of the UN, this poses a dangerous challenge to diplomats and to those wanting to prevent a humanitarian catastrophe.

Despite the clear warnings from the technology panel, and from the letters written by scientists and AI experts over the past few years, there are still clear divisions between those states that want to prohibit the development of these weapons now and those that want to “wait and see” what happens with the technology. Yet the majority of voices throughout the first day of this GGE expressed caution and concern about the potential ramifications of further automating our means and methods of warfare. Recognising the increasing autonomy already present in weapon systems, the Holy See urged states to “place ethics above technology” in their deliberations and their actions. Austria warned that to ensure human security, we must find answers to the questions we have been deliberating on since 2014, before diplomacy is outpaced by technological developments.

General exchange of views

Monday morning kicked off with a general exchange of views amongst states participating in the GGE. This conversation will continue on Wednesday morning, but there is already concern about the lack of clear direction for the meeting, and an obvious difference in approaches to the development of new technologies of violence.

Almost all participants expressed concern with the legal, ethical, and technical challenges posed by LAWS and agreed that work is needed to ensure that any future weapon systems comply with existing international law. But so far, only 19 states have expressed the willingness and desire to preemptively prohibit the development and deployment of these systems.

The approach of those states that have called specifically for a ban draws on the lessons of history. As the Holy See articulated, our international legal framework has only been developed after grave tragedies have occurred. It called on states to avoid this “irrational lagging behind” when it comes to autonomous weapons.

Cuba argued that the adoption of practical actions towards the development of a legally binding instrument on autonomous weapons is crucial, emphasising the necessity of a preemptive approach based on the precautionary principle. Cuba and Pakistan both said that machines cannot replace human judgment, with Pakistan arguing that the absence of human intervention would make future wars more inhumane, lower the threshold for the use of force, and increase the practice of targeted killings and clandestine operations. Because of this, Pakistan called for LAWS to be preemptively banned through a legally binding CCW protocol and urged all states to put in place a moratorium on their development in the meantime.

Iraq called for practical steps towards prohibition at national and international levels, but urged that a ban on LAWS must not hamper peaceful uses of aspects of the technology.

There is a precedent for preemptively banning certain weapon systems. As Panama noted, the CCW prohibited blinding laser weapons before they were deployed. Panama urged the GGE to consider a legally binding prohibition of autonomous weapons, taking into account recommendations of the Human Rights Council and Special Rapporteur on extrajudicial killings in this regard.

Pushing back on those who suggest autonomous weapons cannot be prohibited because so much of the technology is dual-use, Chile pointed out that almost all military technology has civilian uses. Similarly, in response to those states that say LAWS do not yet exist and therefore should not be dealt with through a legally binding prohibition, Chile argued that if they don’t exist, then states shouldn’t mind if they are prohibited.

In an interesting development on Monday, the Non-Aligned Movement indicated its support for a new legally binding instrument on LAWS. While it did not specifically call for a ban, it encouraged the GGE to consider elements for such an instrument. In this regard, Brazil noted “there is enough critical mass now to move to a more structured approach by developing a new legally binding instrument.” Similarly, Sri Lanka noted that unilateral measures such as weapons reviews “should not be equated as the final outcome of our deliberations,” as the majority of states “believe that efforts should reach beyond the national levels, culminating in the development of an agreed international framework.”

However, there are those who believe that a prohibition or any legally binding treaty is “premature,” such as Australia and Germany. In their working paper, France and Germany suggest a political declaration on LAWS. The European Union, Belgium, Spain, and Switzerland indicated preliminary support for this approach. Switzerland said it believes a political declaration on LAWS could constitute a practical and achievable next step with a number of benefits. Spain said it would also be prepared to consider adoption of a future code of conduct containing a register of transparency measures and other concrete actions on LAWS.

Pakistan argued that a political declaration or other voluntary measures can only be an interim step towards a legally binding mechanism. Chile said a political declaration cannot substitute for a legally binding prohibition.

Sri Lanka welcomed commitments to not develop LAWS, but argued that “well-founded concerns of the international community underlines the fact that precursors for such weapons are already in existence and failure to take pre-emptive action at this point poses the risk of such weapons being fully developed and deployed in the future.”

Some of the delegations speaking on Monday, including those of the European Union, Australia, and Cambodia, prefer an approach of encouraging national weapons reviews. Most of these states emphasised the importance of these reviews being transparent, with opportunities for states to exchange best practices and other information about their assessments. Spain indicated support for voluntary transparency and confidence-building measures and the exchange of information on LAWS, which could take place in legal reviews of weapons and in relation to other regulatory or technical aspects of research and development.

Some see weapon reviews as necessary but insufficient, including Austria, Brazil, New Zealand, and Pakistan. This is largely due to national perspectives and bias present in such reviews, and the general lack of transparency around them.

Amongst all these various approaches to LAWS, the one common thread seems to be a shared understanding that some form of human control must be retained over weapon systems. For the majority of states participating in Monday’s exchange of views, this includes control over the selection and engagement of targets.

The Holy See noted that legal decisions often require interpretation of the rules in order to preserve the spirit of those rules. Machines, it warned, can never make such judgments. Panama echoed this concern, arguing that it is unacceptable for machines to take independent decisions on life and death, as a high level of human reasoning is required to analyse intentions or take decisions on the proportionality of an attack. Austria agreed that the retention of meaningful human control must remain at the centre of all deliberations on LAWS.

Spain said autonomous weapons would be contrary to international jus cogens, the fundamental, overriding principles of international law from which no derogation is ever permitted. There must always be a human operator in the use of force, Spain argued. This view was shared by Morocco, which argued that the idea that machines could make decisions over life and death crosses a fundamental moral line. Cambodia, Iraq, and Kazakhstan agreed that machines should not be making life or death decisions. Brazil said states must maintain an element of meaningful human control over weapons, while Germany said humans should make the “ultimate” decisions.

New Zealand believes meaningful human control must be retained over weapon systems, though argued that it is not the answer to every question. Such control is required for every use of lethal force, New Zealand explained, but more is needed to regulate LAWS. It expressed support for UNIDIR’s suggestion that states should agree on what functions humans must have control over, and what technologies challenge that agreement.

All speakers so far have also agreed that existing international law applies to any development or use of LAWS. The European Union urged the GGE to focus on ensuring compliance with international humanitarian law (IHL) and international human rights law (IHRL) in particular. Similarly, Switzerland urged the GGE to spell out applicable law and develop convergence on the idea that compliance with IHL should be at the heart of the debate.

Several delegations, including Brazil, Morocco, South Africa, and Spain, said all weapons must comply with IHL and IHRL. Sri Lanka agreed that the issue of IHL compatibility must be central to deliberations on LAWS, but also questioned whether existing legal regimes can effectively and sufficiently address the IHL concerns emanating from LAWS, including to what extent individuals, organisations, or states can be held liable for crimes committed by LAWS. “Whether IHL rules can be applied in the context of autonomous weapons in the first place therefore has to be revisited given that IHL was developed with the aim of regulating the conduct of human beings during warfare,” cautioned Sri Lanka. Similarly, the Republic of Korea agreed that an IHL-compliance approach is best, but noted that the problem in the context of LAWS is that IHL sees human beings as the key agents. In this vein, Chile expressed concern that autonomous weapons may leave both civilians and combatants without adequate protection of the law.

Cuba argued that LAWS cannot comply with IHL. Chile said it is not clear that LAWS can comply with any basic norms of IHL or IHRL. Austria argued that the assessment of compliance with international law has to be made on a case-by-case basis; it cannot imagine that the complexity required to comply with IHL could be met with algorithms or even deep learning.

In addition to IHL challenges, a number of states outlined the human rights challenges posed by LAWS. Costa Rica highlighted the problems such weapons pose to the rights to life, a fair trial, peaceful assembly, and more. It asserted that the ethical requirements of human dignity and human rights demand human involvement in lethal decisions. Panama likewise argued that autonomous weapons are a disarmament issue but also a human rights issue, including in terms of the right to human dignity.

For Brazil, Costa Rica, and Greece, the Martens Clause offers a guide to dealing with autonomous weapons. Costa Rica argued that LAWS would violate this clause, which refers to the “principles of humanity and the dictates of the public conscience”. Brazil suggested the Martens Clause could offer a conceptual bridge between approaches to LAWS.

Another concern is the lack of accountability for crimes committed by autonomous weapons. Costa Rica noted the lack of an unequivocal chain of responsibility. South Africa also expressed concern, arguing that increasingly autonomous technologies will make it more difficult to attribute accountability in warfare because they will make it more difficult to determine the chain of command. Panama expressed concern that LAWS would make accountability unfeasible, leading to impunity over attacks.

Spain argued that responsibility must lie with the operator and those ordering the use of a weapon. Bulgaria echoed this call, noting that this approach would ensure that states are more cautious during their national weapons reviews. Morocco said human responsibility must be at the heart of concerns about LAWS. Brazil emphasised that shared responsibility cannot mean a lack of accountability.

Most participants also encouraged the GGE to look at definitions—of the technology itself, as well as of meaningful human control. The European Union suggested the GGE focus on defining key characteristics of LAWS, while Belgium highlighted its working paper looking at definitions for autonomy, unpredictability, and intentionality. Switzerland urged the GGE to elaborate a working definition of LAWS and of necessary human control or judgment. Brazil, Cambodia, Greece, and the Netherlands supported the development of such working definitions.

Bulgaria called for the “touchstone” of a working definition of LAWS to include whether the system selects and engages targets without human control. Austria called for a common understanding of the human control required to ensure compliance with IHL. Japan said meaningful human control is an “instrumental concept” in deliberations on definitions of LAWS. Brazil said the concept is useful but lacks precision and needs more refinement.

Morocco called for a consensus-based definition of LAWS. India, on the other hand, warned against jumping to consensus on definitions or on regulations based on incomplete understandings of the technology.

Spain suggested that a proper definition of LAWS should not include automated defence systems for ships or aircraft, or armed drones.

Japan argued that it is not easy to decide where to draw the line between civilian and military uses of autonomy in technology, noting an increasing tendency towards automation in military technology in many countries. However, Japan argued, we must consider regional situations and determine the possible merits or drawbacks of this trend.

A number of countries highlighted proliferation risks in their remarks. Pakistan argued that states currently developing autonomous weapons cannot be assured against proliferation over time, noting that an unchecked robotic arms race could ensue, resulting in proliferation to non-state actors “with unimaginable consequences”. Australia expressed concern about the proliferation of low-cost autonomous weapons, noting that precursor materials will not be traceable, making possible their transfer to non-state armed actors.

Spain urged GGE deliberations to include the goal of preventing proliferation, including to terrorists. Iraq and Morocco also expressed concerns about proliferation.

A number of delegations, including the Non-Aligned Movement, expressed concern with a potential arms race in autonomous weapons. The Holy See warned that in the absence of preemptive action, military interests pose the risk of stimulating the development of autonomous weapons, which as with nuclear weapons could lead to a destabilising arms race. Chile, Iraq, Morocco, Panama, and Spain also expressed concern about an international arms race.

Brazil and Pakistan, among others, expressed concern that LAWS could lower the threshold for the use of force. Brazil also worried that such weapons could compress decision-making time on the use of force.

Panel 1: Technological dimensions

The Monday afternoon session featured six experts from industry and academia. The Chair tried to keep the discussion focused on artificial intelligence (AI) and autonomy at large, but since this is a GGE on the weaponisation of such technologies, participants kept returning to questions of LAWS.

Professor Margaret Boden, University of Sussex, described early work on AI. Tracing various developments to the modern day, she concluded that even 50 years from now there will be no such thing as an ethical robot. “They are not moral beings,” she warned. Humans can try to make them ethical, but responsibility for their actions can never lie with the robot; it will always lie with a human being or human institution.

Professor Gary Marcus, New York University, highlighted some of the challenges facing the sophistication of AI. Such systems are currently able to distinguish between objects when there is a lot of data, but otherwise struggle even to do this. More concerning, AI systems cannot yet deal with anomalous situations or with common sense. Where machine learning has been successful so far, it is only in systems designed for a single task; they learn only one thing. But just because machines still aren’t very smart, he cautioned, doesn’t mean we shouldn’t be concerned. AI that doesn’t really understand the world may be scarier than AI that does.
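A minimal sketch of the single-task brittleness Marcus described, using a hypothetical toy setup with scikit-learn (the model, data, and the outlier input are illustrative assumptions, not anything presented at the panel): a classifier trained on one narrow task still emits a confident answer on an input utterly unlike its training data, because it has no built-in way to recognise an anomaly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The single "task" this model ever learns: separating two tight clusters.
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = LogisticRegression().fit(X, y)

# An anomalous input, far outside anything seen in training.
outlier = np.array([[100.0, 100.0]])

# The model still returns a near-certain class probability; it has no way
# to say "this input is nothing like my training data".
print(model.predict_proba(outlier))
```

The point is not this particular model but the absence of any notion of “I don’t know”; more elaborate learned systems share the same failure mode on anomalous situations.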

Mr. Gautam Shroff, Tata Consultancy Services, India, spoke about the challenges of debugging AI systems, using autonomous vehicles as an example. He expressed concern with the popular conflation of AI and IT systems, noting that while IT systems can be put through a series of tests to debug them, AI systems are just supposed to learn. With LAWS, the consequences of mistakes would be far worse than with an autonomous vehicle. He urged states to consider the ethical aspects rather than just the control of technology.
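A rough sketch of the contrast Shroff drew, again under illustrative assumptions (scikit-learn, toy data, hypothetical functions): deterministic IT logic can be debugged against its specification, while a learned component’s behaviour at untested inputs is simply whatever training happened to produce.

```python
from sklearn.tree import DecisionTreeClassifier

# Conventional IT logic: the specification is explicit, so tests are decisive.
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

assert clamp(15, 0, 10) == 10  # passes or fails deterministically
assert clamp(-3, 0, 10) == 0

# A learned component: its only "specification" is the training data.
X = [[0.0], [1.0], [4.0], [5.0]]
y = [0, 0, 1, 1]
model = DecisionTreeClassifier().fit(X, y)

# Tests at the training points pass...
assert model.predict([[0.0]])[0] == 0
assert model.predict([[5.0]])[0] == 1

# ...but at an unseen input the behaviour was never specified anywhere,
# and no finite test suite can cover a continuous input space.
print(model.predict([[2.5]]))
```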

Mr. Harmony Mothibe, BotsZA, South Africa, focused on challenges with AI research and development in other applications such as agriculture. He echoed the challenges of data and of the degree of interpretation involved in processing language.

Professor Stuart Russell, University of California, Berkeley, highlighted the positive benefits of AI systems whilst warning against the development of LAWS. He said that while the feasibility of autonomous weapons is not in question, the feasibility of their compliance with IHL and other international law certainly is. He argued that even if compliance with IHL were achievable, it would not be sufficient, noting the moral considerations at play. He also argued that LAWS should be considered weapons of mass destruction: they would be cheap, indiscriminate, and easily proliferated. They could also allow for the introduction of cyber warfare into the physical domain; LAWS could be turned against their own state’s population.

Mr. Sean Legassick, DeepMind, said his company starts from the principle that all technology should remain under adequate human control. What counts as adequate will depend on the environment in which a system is deployed, the potential for unforeseen circumstances, and the scale and complexity of potential harm. For weapon systems, he argued, the potential for harm is high; the standard of human control must therefore be equally high. Given the unconstrained environment of armed conflict and the potential for unforeseen circumstances, Legassick argued, no weapon system should be deployed without human control.

During the ensuing discussion, most states engaging with questions or comments expressed their appreciation for the frank warnings against the development of LAWS. Sierra Leone emphasised that these weapons must never be made to exist, given their inherent incompatibility with IHL and human rights, as well as the possibility of cyber attacks and of their malfunctioning.

Brazil raised questions about bias in setting targets, which several of the panelists acknowledged is a significant problem. Most agreed that crude autonomous weapons could be deployed now; these could be set to target a single category of human being and would do so without any understanding of human relations, complexity, or nuance. When Russia argued that we are still far away from the deployment of LAWS, the panelists disagreed. They argued instead that we are far away only from the deployment of LAWS that could comply with IHL or human rights law; the crude versions described above are ready now.

France argued that states must retain human capacity for ultimate decisions on the use of force and for appropriate human control over weapons. It said there is no question of deploying new weapons other than in full compliance with IHL, though when one looks at how conventional weapons are being used in violation of IHL every day in various countries, and how certain countries are selling weapons to be used in violation of IHL, such a statement offers little comfort.

Sweden asked if a ban on autonomous weapons could be verified, while Cuba asked if such a ban would necessarily interfere with peaceful uses of AI or other autonomous technologies. On the question of verification, Russell and Marcus agreed that a ban could be verified in a similar fashion to the Chemical Weapons Convention, in cooperation with industry, though both argued that it would be more difficult to verify or attribute misuse of autonomous weapons once deployed; their algorithms could be overridden to change the parameters of the original mission, for example. The panelists also seemed to agree that a ban on LAWS would not interfere with other uses of the technology. Russell noted that the UK ban on automatic weapons doesn’t prohibit other uses of metal; it would be the same with AI.
