Global Challenges
Issue no. 5 | April 2019
New Grammars of War: Conflict and Violence in the 21st Century
New Grammars of War | Article 4

Welcome to the World of Killer Robots

Reading time: 5 min

Although cyborg assassins won’t be arriving from the future anytime soon, offensive “Terminator-style” autonomous robots programmed to kill could soon escape Hollywood science fiction and become reality. This actual rise of the machines raises important strategic, moral, and legal questions: should the international community empower robots to kill, and if so, what would the consequences be and how should such weapons be regulated?

This debate goes well beyond drones. Existing armed unmanned aerial vehicles are precursors to autonomous military robotics – that is, killer robots – that could choose targets without further human intervention once they are programmed and activated. Wars fought by killer robots are no longer hypothetical.

The technology is nearly here for all kinds of machines, from unmanned aerial vehicles and nanobots to existing weapon systems with autonomy in many tasks and functions and, eventually, humanoid Terminator-style robots. This technology is already proliferating, driven mostly by the commercial interests of mega-companies like Facebook and Google, along with defence contractors and governments, rather than by strategic calculations of potential risks. And innovation is picking up pace. Indeed, China, Israel, Russia, the United Kingdom, the United States, and other states have plans to further develop their robotic arsenals, including killer robots.

Several countries have already deployed forerunners of killer robots. The Samsung Techwin security surveillance guard robots, which South Korea uses in the demilitarised zone it shares with North Korea, can detect targets through infrared sensors. Although they are currently operated by humans, the robots have an automatic feature that can detect body heat in the demilitarised zone and fire an onboard machine gun without the need for human operators. The US firm Northrop Grumman has developed an autonomous drone, the X-47B, which can travel on a preprogrammed flight path while being monitored by a pilot on a ship. Israel, meanwhile, has developed an armed drone known as the Harop that can select targets on its own with a special sensor after loitering in the skies for hours.

Questions

Militaries insist that such hardware protects human life by taking the soldiers and pilots of technologically powerful countries out of harm’s way. But malfunctions caused by failed software or cyber attacks could create new dangers altogether. Countries will field dissimilar computer programmes that may behave erratically when they interact with each other. Further, signal jamming and hacking become all the more attractive – and more dangerous – as armies rely increasingly on drones and other robotic weaponry. Advocates of killer robots argue that removing the human operator could actually solve some of these problems, since killer robots could ideally operate without touching communication networks and cyberspace. But that would not help if a killer robot were successfully hacked and turned against its home country.

The use of robots also raises an important moral question. As Noel Sharkey, a robotics expert, has asked: “Are we losing our humanity by automating death?” Killer robots would make war easier to declare and pursue, given the distance between combatants and, in some cases, their removal from the battlefield altogether. Automated warfare would lower the long-established thresholds for resorting to violence and the use of force that the United Nations (UN) has carefully built over decades. Those norms have been paramount in ensuring global security, but they would be easier to break with killer robots, which would allow countries to declare war without having to worry about causing casualties on their own side. There are also other hard realities to consider. Although killer robots might be used to wage war without putting soldiers in harm’s way, nations might use them to terrorise their own citizens or those of neighbouring countries. Put simply, such weapons could increase the risks to civilians.

Limitations

Four branches of international law have historically been used to constrain violence in war: the law of state responsibility, the law on the use of force, international humanitarian law, and human rights law. As they are currently conducted, drone strikes violate all of them, and killer robots would likely only continue the trend. International humanitarian law mandates that the use of violence be proportional and avoid indiscriminate damage and killing. But killer robots will be unable to evaluate proportionality and precision satisfactorily: according to scientists at the International Committee for Robot Arms Control, a nongovernmental organisation, the hard decisions of proportionality have to be weighed in dynamic environments that require highly qualitative and subjective knowledge – just the things that robots could lack.

How would a robot decide whether it is proportional to strike a target if the attack would also kill children in a school next door? Terrorists and insurgents often use human shields, or coerce civilians and non-combatants into situations in which they could appear to be combatants from afar. Automatic target recognition can detect a tank only in an uncluttered environment, such as a desert. Vision systems cannot distinguish between a combatant and a child. Sensory processing systems will improve with time, but it is unlikely that the kind of reasoning needed to determine the details, or even the legitimacy, of targets will be available in the foreseeable future.

For all their faults, therefore, humans must be kept in the loop to oversee targets, authorise attacks, and override judgment calls as an operation evolves. Battles are too unpredictable to hand over to robots, and all law governing war is addressed to humans. Robots might be effective killing machines, but therein lies the danger. Of course, we might be able to build robots capable of making such judgments in the future. But that possibility is all the more reason to prevent killer robots from being developed at all. Just as threatening as a killer robot that cannot discern civilians from combatants, if not more so, is a robot that can make complex decisions about whom it wants to kill, or whose algorithms and software can interact poorly and unpredictably in combat.

Regulation

In the last few decades, international law has been able to rein in the abuses that come with new military technologies, from chemical and biological weapons to landmines, blinding laser weapons, and cluster bombs. Since the Biological Weapons Convention of the 1970s (the first multilateral disarmament treaty to ban an entire class of weapons), most countries have embraced efforts to prohibit landmines and cluster munitions, securing the 1997 Mine Ban Treaty, the 2008 Convention on Cluster Munitions, and the 2013 Arms Trade Treaty, the first global legal agreement on the transfer of conventional arms.

Launched in 2013, the Campaign to Stop Killer Robots, an international coalition of nongovernmental organisations, has gathered supporters more quickly than any other disarmament movement. The Campaign now comprises 93 NGOs in 53 countries and has won the support of 30 governments for a ban since 2014, when states first began discussing the matter at the UN. The Campaign has done a brilliant job of steering member states towards doing the right thing. However, member states have been slow, and the United States and Russia, along with others in veiled acquiescence, are dragging out progress on new global governance. Other initiatives have nevertheless energised the process.

At the Paris Peace Forum in 2018, UN Secretary-General António Guterres called for a ban on killer robots, stating, “For me there is a message that is very clear – machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.” On 5 July 2018, the European Parliament adopted a resolution calling for the urgent negotiation of “an international ban on weapon systems that lack human control over the use of force”. The international community should work now to prohibit machines capable of killing on their own. Killer robots might seem like a far-fetched idea, but they could become an unacceptable reality. There is a small window of opportunity. Now is the time to use it.

By Denise Garcia
Associate Professor of Political Science and International Affairs, Northeastern University, Boston, USA


This article draws from Denise Garcia, “The Case against Killer Robots: Why the United States Should Ban Them”, Foreign Affairs, 10 May 2014, https://www.foreignaffairs.com/articles/united-states/2014-05-10/case-against-killer-robots.

Header image caption: Giant evil robot destroying the city

Map based on data produced by the Small Arms Survey and enriched by the Graduate Institute’s Research Office in Geneva, in collaboration with whybe.ch.

Migration, Violence and War, with Charles Heller
Graduate Institute, Research Office

State-Based Conflict since 1946
Source: Uppsala Conflict Data Program (UCDP) | © ourworldindata.org / Creative Commons
Ongoing conflicts are represented in each year in which more than 25 deaths occurred.

Violence, Killer Robots and Regulation, with Paola Gaeta
Graduate Institute, Research Office

Terminology Related to Violence and Conflict

Violence

The intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community, that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment, or deprivation.
The World Health Organization (WHO)

Conflict

Derived from the Latin word conflictus, meaning collision or clash. The term is understood as a disagreement between two or more parties in which those involved perceive a threat to their needs, interests, or concerns.

Armed conflict

A dispute involving the use of armed force between two or more parties, often referred to as war.

Interstate conflict

Militarised armed conflict between two or more states.

Intrastate conflict / Civil War

A conflict between a government and one or several non-governmental parties, often with interference or support from foreign actors.

Armed violence

The intentional use of illegitimate force (actual or threatened) with arms or explosives against a person, group, community, or state that undermines people-centred security and/or sustainable development.

Gender-based violence

Violence that is directed against a person on the basis of gender or sex. It includes acts that inflict physical, mental, or sexual harm or suffering, threats of such acts, coercion, and other deprivations of liberty. While women, men, boys, and girls can all be victims of gender-based violence, women and girls are the primary victims because of their subordinate status.


Armed conflicts according to international law/IHL

IHL distinguishes between non-international armed conflict, defined as “a conflict in which government forces are fighting with armed insurgents, or armed groups are fighting amongst themselves”, and international armed conflict, defined as “a war involving two or more States, regardless of whether a declaration of war has been made or whether the parties recognize that there is a state of war”.

Terrorism

A criminal act or acts intended to inflict dramatic and deadly injury on civilians and to create an atmosphere of fear, generally in furtherance of a political or ideological (whether secular or religious) purpose. Terrorism is most often carried out by sub-national or transnational groups, but it has also been known to be practised by rulers as an instrument of control.
