The Story: A new report highlights the attempts to ban “killer robots.” Here’s why Christians should be leading that effort.
The Background: In 2013, the international non-governmental organization Human Rights Watch (HRW) helped launch the Campaign to Stop Killer Robots to “ban fully autonomous weapons and thereby retain meaningful human control over the use of force.” HRW has previously been effective in leading campaigns against weapons of warfare, having shared in the 1997 Nobel Peace Prize as a founding member of the International Campaign to Ban Landmines and having played a leading role in the 2008 treaty banning cluster munitions.
This week HRW released a 55-page report, “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control,” which reviews the policies of the 97 countries that have publicly elaborated their views on killer robots since 2013.
About 30 countries have called for an international ban on lethal autonomous weapons systems (LAWS), colloquially known as “killer robots.” However, a small number of countries—most notably the United States and Russia—have blocked efforts to regulate such weapons systems.
“It’s abundantly clear that retaining meaningful human control over the use of force is an ethical imperative, a legal necessity, and a moral obligation,” said Mary Wareham, arms division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. “All countries need to respond with urgency by opening negotiations on a new international ban treaty.”
The international community does not currently have an agreed definition of what constitutes a lethal autonomous weapon system. The U.S. Department of Defense defines LAWS as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” LAWS differ from most weapons systems in that no human operator either acquires or engages a specific target. (Some currently used weapons systems—including so-called “fire and forget” weapons, such as certain types of guided missiles—are semi-autonomous since they only become autonomous after the target has been chosen by a human.)
Why It Matters: Lethal autonomous weapon systems are a special class of weapons that use sensor suites and computer algorithms to independently identify, engage, and destroy targets without manual human control. While such technology is estimated to be about 15 years away, the principle for banning killer robots is found 1,500 years in our past.
The Christian tradition of just war theory began in the fifth century with Augustine. Augustine’s view of justice in warfare can be summed up by his statement: “We do not seek peace in order to be at war, but we go to war that we may have peace. Be peaceful, therefore, in warring, so that you may vanquish those whom you war against, and bring them to the prosperity of peace.” Since then the tradition, rooted in Romans 13:3–4, has developed three main areas: jus ad bellum (the moral requirements for going to war), jus in bello (the moral requirements for waging war), and jus post bellum (the moral requirements after warfare is concluded).
The debate about LAWS falls under the second area, jus in bello, the criteria for justly engaging in warfare. Historically, Christian thinkers have proposed a primary criterion—discrimination—that is relevant to the use of killer robots.
The criterion of discrimination includes two key components: “innocence” and “deliberate attack.” The first rule of just warfare is that we do not target or kill the innocent. In this context, the term innocence refers to whether individuals are causing direct harm, whether willingly or reluctantly, either to civilians or to military forces engaged in just warfare. Those who are not causing such harm are considered “noncombatants” and are immune from attack because they meet the qualification of innocence.
As the late Christian ethicist Jean Bethke Elshtain explained, “Discrimination refers to the need to differentiate between combatants and noncombatants. Noncombatants historically have been women, children, the aged and infirm, all unarmed persons going about their daily lives, and prisoners of war who have been disarmed by definition.” Lubomir Martin Ondrasek notes that Elshtain’s understanding of this criterion underscores that civilians can never be intentionally targeted by countries at war.
The second component of discrimination is “deliberate attack.” While the innocent may be harmed in the course of warfare, their deaths must not be our intention. In their book The Just War Tradition: Ethics in Modern Warfare, Charles Guthrie and Michael Quinlan contend that to meet this standard the death of innocents must be an unwelcome side effect rather than the result of intentional targeting, and that we must do all we reasonably can, consistent with not gravely endangering the legitimate military purpose, to keep the risks to noncombatants to a minimum.
While humans may fail to adequately apply the criterion of discrimination, we have been created by God with the capabilities necessary (such as a functioning moral sense) to determine what constitutes a genuine threat. The same is not true, and cannot be true, of weapons operated by artificial intelligence. We cannot produce pattern-matching algorithms capable of distinguishing between an enemy threat and an innocent noncombatant. Why then would we allow the militaries of the world to pretend otherwise?
Throughout the 20th century, evangelicals had an unfortunate tendency of recognizing moral threats—especially threats posed by technology—only after they had become ubiquitous. That seems to be changing, though, at least in the area of artificial intelligence. A prime example is Artificial Intelligence: An Evangelical Statement of Principles, issued by the Southern Baptist Convention’s Ethics and Religious Liberty Commission. In the section on war, the statement says, “Any lethal action conducted or substantially enabled by AI must employ human oversight or review,” and, “We deny that human agency or moral culpability in war can be delegated to AI.”
This should be the position of all evangelicals and other Christians who subscribe to the biblically informed ethics of the just war tradition. Indeed, rather than relying on secular NGOs to lobby governments to oppose such weapons on moral grounds, Christians should be leading the charge. That is why we should act now. We can’t wait for the AI-directed shooting to start before we recognize we should have led the ban on killer robots.