17 April 2015, Vol. 2, No. 5

Editorial: Rights and responsibilities of humanity
Ray Acheson | Reaching Critical Will of WILPF 


Over the past week the potential technical capacities and lawfulness of autonomous weapons have been extensively considered. Questions have included: can autonomous weapons make the necessary distinctions for targeting? Can they be programmed to act in conformity with international law? Yet the most fundamental question is neither technical nor legal, but ethical: should humans delegate power over life and death to machines? What are the implications for human rights and, more broadly, for humanity?

These questions are critical to determining whether or not autonomous weapons should be developed. Looking at the “lawfulness” of an attack, with any weapon, is insufficient for determining its acceptability, argued Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions. An action may be lawful under international humanitarian law (IHL) but a commander may decide not to undertake it on the basis of other considerations, including morality or ethics. Yet this option would likely not be available with an autonomous weapon.

Further, even those who could be lawfully targeted under IHL still have a right to dignity, which would be undermined by the use of autonomous weapons. The US delegation argued that violence to human dignity is inherent in armed conflict, regardless of what weapon is used. This is true, yet there is something particularly undignified—and inhumane—in being subjected to machine-based violence, the challenges of which were discussed in yesterday’s CCW Report.

All wars undermine human dignity, said the Holy See, but autonomous weapons would only deepen dehumanisation. This is not just an issue for those being killed by machines but also for those allowing machines to perform these actions on their behalf. Their dignity is also at stake, argued Heyns. If a machine makes “decisions” of when and how and where to use force, are the humans involved in its deployment still moral agents exercising moral force? “The taking of life requires accountability, human accountability, for our actions, determined by morality and law,” argued WILPF at the beginning of the week. “Without that we shirk our responsibilities and betray our common humanity.”

The loss of accountability is another important consideration. In a recent report, Human Rights Watch found that programmers, manufacturers, and military personnel could all escape liability for unlawful deaths and injuries caused by autonomous weapons. Bonnie Docherty from the Arms Division of HRW highlighted the significant hurdles to assigning personal accountability for the actions of autonomous weapons under both criminal and civil law. “No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party,” she argued.

The use of force has already become too disengaged from moral reasoning, effective oversight, or accountability. The relationship between accountability and armed violence has been increasingly undermined through policies and practices of states that grant themselves “an overly generous licence to kill,” as anthropologist Hugh Gusterson put it. In a recent article in the Bulletin of the Atomic Scientists he argued that “assassination, combat, and law enforcement have become jumbled together” through counterinsurgency programmes, the use of armed drones, and the militarisation of the police. This is extremely relevant to the debate about autonomous weapons. Heyns, several delegations, Human Rights Watch, and Amnesty International raised concerns about the use of autonomous weapons outside of established armed conflicts and the implications of this for human rights law and rules of engagement for police forces.

This is not an abstract concern. The United States, for example, has for the last fifteen years treated the world as a battlefield, including its own territory. US police have reportedly killed an average of 928 people a year over the last eight years—which is twice as many as the highest estimate of reported civilian deaths from drone strikes in Pakistan, Yemen, Somalia, and Afghanistan combined, Gusterson notes.

There is reason to believe that autonomous weapons would only increase opportunities for military intervention, extrajudicial assassinations, and terrorisation of home populations through law enforcement. Many delegations and speakers throughout the week have raised concerns that the availability of such weapons would risk lowering the threshold for using armed violence due to the low risk to the deploying force. Heyns has argued in a report on armed drones that these semi-autonomous weapons provide the opportunity for states to engage “in low-intensity but drawn-out applications of force that know few geographical or temporal boundaries.” He argues that this contradicts “the notion that war—and the transnational use of force in general—must be of limited duration and scope, and that there should be a time of healing and recovery following conflict.”

All of these considerations have serious implications for where we go from here.

Earlier this week, some panelists and delegations indicated that weapons reviews, such as those mandated by article 36 of Additional Protocol I of the Geneva Conventions, would be sufficient to determine the legality of autonomous weapons. But article 36 reviews are conducted within the framework of IHL. It is not clear how such reviews would address human rights law.

Some delegations have also suggested that looking at issues of transparency would be the best next step for the autonomous weapons issue in the context of the CCW. Transparency, of course, is critical to building collective understandings and approaches to constraining the development and use of tools of violence. Understanding what technologies currently exist and what might be under development would be helpful. But at the moment we have an opportunity to prevent the development of certain technologies. Transparency is an element of this preventative effort but it is not the solution itself.

Heyns argued that from the perspective of human rights, weapons operating without meaningful human control must be prohibited. This is consistent with the arguments made by civil society organisations that are part of the Campaign to Stop Killer Robots, including WILPF. A ban on autonomous weapons is necessary to ensure the retention of meaningful human control over targeting and attack decisions.

States should take the time between now and the CCW meeting in November 2015 to figure out how best to move forward. They should plan to set aside significant time next year, perhaps through a group of governmental experts, to examine critical issues including meaningful human control and appropriate policy and legal responses. Such activities should be oriented towards establishing a negotiating mandate at the CCW Review Conference in 2016. Moving forward progressively and swiftly on this issue is imperative, while we still have the chance to prevent the development of autonomous weapons and uphold basic principles of humanity.
