Government and private security forces around the world have employed countless brilliant thinkers to solve one of humanity’s most enduring questions: how quickly and easily can we slaughter one another? This is exactly the question on the minds of the thousands of people developing autonomous weapons for dozens of countries around the world.

During violent conflict, everyone suffers until human compassion finally wins out. But what if a robot is calling the shots? This is the question on policymakers’ minds as they decide when and how to regulate lethal autonomous weapons (LAWs).

Science Fiction Meets Science Reality

Discussing the threat of robot-on-human violence immediately calls to mind Skynet from the Terminator saga, but we’re well beyond that kind of tech-phobic paranoia. Hordes of murderous robots are becoming a throwback to the days when Motorola made the hottest cell phone on the market and Nintendo blew everyone’s mind with 16-bit video games. But Elon Musk recently warned that AI could cause World War III, suggesting that we are “summoning the demon.”

Before we dismiss this as yet another rant from a crazy rich dude, let’s remember that crazy rich dudes make up a plurality of the world’s political leaders. Vladimir Putin himself has said that the nation that leads in artificial intelligence will be the “ruler of the world.” And in case you think this means we’re heading into a new age of technology-driven social advancement, it’s worth pointing out that dozens of nations are actively working to weaponize AI.

Weaponizing AI

They say that technology is not inherently good or bad, but rather it’s how new gadgets are put to use that determines their social and ethical impact. After all, nuclear weapons gave rise to peaceful nuclear power, and the Cold War made the internet possible. But also, there are at least thirty countries actively developing lethal autonomous weapons: robots designed to kill people.

LAWs include any type of technology — drones, smart bombs, automatic weapons, or even nanobots — that can independently select, find, and engage a target. But how self-directed can an autonomous machine be?

There are three categories of LAWs: human-in-the-loop, which only operates at a human’s command; human-on-the-loop, which can initiate and engage independently but is subject to human override; and human-out-of-the-loop, which would have zero human input or override capability. Human-out-of-the-loop autonomous weapons only exist in concept for the time being, but these categories serve as helpful markers in the debate about feasible and effective ways to regulate the use of LAWs.

Many weapons already common in modern armed conflict include some level of in-the-loop or on-the-loop technology. For example, AI-powered defensive platforms like Israel’s Iron Dome and America’s Phalanx Close-In Weapon System keep a human in or on the loop. Importantly, however, almost no one wants to see a global permission slip for fully-automated, human-out-of-the-loop weapons. Even the Pentagon has directed that any development in LAWs must incorporate some human influence. But should we ban lethal autonomous weapons outright?

Problems With Sending Robots to War

Influential thought leaders like Elon Musk, the late Stephen Hawking, Noam Chomsky, and countless scientists and humanitarians have all called for an outright ban on LAWs. They believe that this technology is too dangerous to be meddled with and that a global moratorium on autonomous weapons is the only chance humans have for long-term survival. Those who have spoken out against this technology focus on the fact that robots lack discernment, an essential quality of controlled warfare.

Combatants must distinguish between soldiers and civilians; failing to do so is a war crime. Likewise, the principle of jus in bello requires someone to be held responsible for civilian deaths during armed conflict. Robots aren’t above the law; LAWs must comply with these and other laws of armed conflict when acting autonomously. However, given the current state of the technology, it is unclear whether robots are even capable of making such distinctions. That raises the question of who is to blame if a lethal piece of AI makes a mistake.

Human laws regulating the practice of warfare are ancient, and existing legal mechanisms are ill-suited to protect us from the Pandora’s box opened by combining AI and lethal weapons. We could face another international arms race like the one of the nuclear era, or run the risk that LAWs fall into the wrong hands. Plus — you know, Skynet.

The fact that autonomous weapons are not capable of discernment raises the very real danger that LAWs will fail to discriminate between combatants and civilians. This is the premise behind Skynet in the Terminator films and exactly why most parties to the debate around LAWs believe we should stick to human-on-the-loop, if not human-in-the-loop, systems. Fair enough, those films are freaky. But any government policy that suppresses technological innovation inherently comes with unintended consequences.

Should Lethal Autonomous Weapons Be Outlawed?

Supporters of LAWs cite both military and ethical advantages to the use of autonomous killing technologies. The military advantages boil down to efficiency: killing more people, more quickly. Autonomous weapons achieve this by saving money, multiplying force, and expanding the battlefield into places like space and the deep ocean where humans cannot go. Unhindered by fatigue, robots could be far more accurate and effective than human fighters, could be programmed to understand many languages, and could even assist with war strategy.

Although any argument about ethical war practices involves a measure of cognitive dissonance, there may also be ethical reasons to use LAWs. Robots have no self-preservation instinct, so autonomous weapons would not make the emotional judgments and reactions that can result in civilian deaths. And lest we forget the lives saved on the battlefield: LAWs remove human beings from danger, sparing soldiers who would otherwise be in combat from the physical risks and potential trauma of warfare. Meaningful arguments ring loudly on both sides of this debate, so what’s next for LAWs?

The Future of LAWs

As new voices join the LAWs regulation debate, one thing is certain: this technology is not going anywhere. Whether or not an outright ban is wise is largely beside the point, because such a ban would be unenforceable. Government and private defense forces will continue to develop LAWs, and there is a reasonable basis to do so. The question then becomes exactly how we will regulate this increasingly dangerous artificial intelligence.

Whether and how to regulate LAWs is a policy debate muddied by interwoven legal, technological, military, and ethical considerations. Fortunately, some pre-existing international agreements — such as the Ottawa Treaty on anti-personnel mines, the conventions on chemical and biological weapons, and the 1970 Nuclear Non-Proliferation Treaty — offer useful starting points.

Research and development bans are largely unenforceable, but we may soon see laws limiting the deployment and mass production of AI-powered lethal weapons. But of course, we’re only talking about LAWs that function with human control; there is little doubt that a fully-autonomous robot built to kill in a manner that escapes human intervention would be a serious problem.

Banning Human-Out-Of-The-Loop Warfare

Plenty of things in science fiction have matured into science reality, and that’s exactly what Elon Musk and Stephen Hawking were concerned about. Without a doubt, world leaders should begin working toward a consensus on a ban on human-out-of-the-loop weapons. Fully-autonomous “killing machines” should be off the table before anyone tries to define rules regulating the varying types of LAWs. From there, leaders can decide whether and how to employ artificial intelligence in lethal roles in a manner that maintains the human capacity for discernment and empathy that we both value as an ethical imperative and demand as a matter of international law.