
Final Edition, Vol. 1, No. 5

Editorial: Ethics, law, and the principles of humanity
Ray Acheson | Reaching Critical Will of WILPF



The imperative of maintaining meaningful human control over targeting and attack decisions emerged as the primary point of common ground at this first multilateral meeting on autonomous weapons systems. Many states participating in the meeting called for the retention of meaningful or effective human control over weapons, highlighting the moral, ethical, legal, and operational considerations that require such control. As CCW high contracting parties consider the next mandate for work on this issue, the concept of meaningful human control could provide a basis for the prohibition of weapons where this control is not possible.

The Chair’s report reflects the emerging consensus around this concept. It notes that many delegations said the notion of meaningful human control could be useful in addressing the question of autonomy, while others said the concept requires further study. During the wrap-up session, Austria and Croatia spoke strongly in favour of meaningful human control being the basis for further action. Austria’s delegation firmly stated that weapons without meaningful human control are in contravention of international humanitarian law (IHL), while the delegate of Croatia called for the principle of meaningful human control to be seen as a fundamental element of IHL.

Only India and the United States seemed hesitant on the final day about endorsing the idea of meaningful human control. India’s delegation said it is a vague concept and the US argued it does not capture the full range of human activity in the development, acquisition, deployment, and use of autonomous weapons. Pakistan’s delegation also critiqued the call for meaningful human control, suggesting it might be a “weak” position if it is seen as an alternative to a ban. Thus it argued that the concept of meaningful human control is only useful if it leads to the development of a legal instrument prohibiting autonomous weapons.

A ban was one of four options cited in the Chair’s report for moving forward. A number of states did endorse such an approach during the experts meeting, though most indicated their preference for further discussions before pursuing a particular track. Steve Goose of Human Rights Watch noted at the final side event on Friday that no delegations participating in this meeting issued a strong justification for autonomous weapons or indicated that they are actively pursuing the development of such weapons. However, many states seemed to be trying to leave the door wide open for future acquisition. A few delegates asserted that autonomous weapons could comply with international law, spoke about potential benefits in comparison with human soldiers, and suggested that national-level legal reviews under article 36 of Additional Protocol I to the Geneva Conventions would be sufficient.

However, unilateral reviews of weapon systems under article 36 are not likely to provide a robust framework for control. The majority of delegations highlighted the importance of multilateralism in this regard, given the varied operational and ethical risks of autonomous weapons. Several delegates—and civil society—also highlighted the importance of acting preemptively. It is not often that the international community has an opportunity to prevent a potentially dangerous development. The arguments raised at this meeting in favour of autonomous weapon systems rest on theoretical concepts of potential future technology. The concerns raised in opposition rest on concrete ethical, legal, and technical challenges, grounded in the information we have now about both technology and human behaviour.

The call for meaningful human control over individual attacks is perhaps most compelling in its merging of ethics and law. The principles of humanity referred to in the Martens clause and IHL in general are founded on human deliberation and reasoning. A machine, as philosophers and government experts alike argued during this meeting, cannot understand the value of human life. In its closing statement, the International Committee for Robot Arms Control said that violent force must not be delegated to machines, noting that this would threaten human dignity, democracy, and human rights. A machine making targeting and attack decisions independent of human deliberation is, as Christof Heyns argued, by its nature inhumane—it undermines human dignity and the human rights to life and due process, among others. This blending of ethical and legal arguments makes a strong case that human beings must always retain meaningful control over each and every act of violence.

States will now move to adopt a new mandate for work on this issue at the CCW in November. While many issues remain to be explored, it is important to mandate a more formal group of experts to work over a longer period in 2015. A preemptive ban on fully autonomous weapons is necessary to ensure the retention of meaningful human control over targeting and attack decisions, which in turn is necessary to ensure that we uphold the principles of humanity as much as possible in the face of the already existing horrors of war and conflict.
