
Why we should ban killer robots

Oct 20, 2015

Around the world, right now, several countries are developing autonomous weapons that use artificial intelligence to locate, track and destroy their targets. AI professor Toby Walsh explains why that’s a problem.

Artificial intelligence is in the news a lot, and it’s safe to say that it’s not always benign. That’s why the United Nations is hosting a debate on offensive autonomous weapons — and that’s why thousands of my colleagues working in AI and robotics recently came together to sign an open letter calling for a ban on these so-called killer robots. Yet somehow, not everyone is on board with the idea that the world would be a better place with such a ban. “Robots will be better at war than humans,” they say. “Let robot fight robot and keep humans out of it.” But these arguments don’t stand up to scrutiny. Here are the five main objections I hear to banning killer robots — and why they’re misguided.

Objection 1. Robots will be more effective than humans.
They’ll be more efficient, for sure. They won’t need to sleep. They won’t need time to rest and recover. They won’t need long training programs. They won’t mind extreme cold or heat. All in all, they’ll make ideal soldiers. But they won’t be more effective. The recently leaked Drone Papers suggest nearly nine out of ten people killed by drone strikes weren’t the intended target. And that’s with a human still in the loop, making the final life-or-death decision. The statistics will be much worse when we replace that human with a computer. Killer robots will also be more efficient at killing us. Terrorists and rogue nations are sure to use them against us. It’s clear that, if they’re not banned, there will be an arms race. It is not overblown to suggest that this will be the next great revolution in warfare after the invention of gunpowder and nuclear bombs. The history of warfare is largely one of who can more efficiently kill the other side. This has typically not been a good thing for mankind.

Objection 2. Robots will be more ethical.
In the terror of battle, humans have committed many atrocities. And robots can be built to follow precise rules. However, it’s fanciful to imagine we know how to build ethical robots. AI researchers like myself have only just started to worry about how you could program a robot to behave ethically. It will take us many decades to work this out. And even when we do, there’s no computer we know of that can’t be hacked to behave in ways we don’t intend. Robots today cannot make the distinctions that the international rules of war require: to distinguish between combatant and civilian, to act proportionally, and so on. Robot warfare is likely to be a lot more unpleasant than the war we fight today.

Objection 3. Robots can just fight robots.
Replacing humans with robots in a dangerous place like the battlefield might seem like a good idea. However, it’s also fanciful to suppose that we could just have robots fight robots. There’s not some separate part of the world called “the battlefield.” Wars are now fought in our towns and cities, with unfortunate civilians caught in the crossfire. The world is sadly witnessing this today in Syria and elsewhere. Our opponents today are typically terrorists and rogue nations. They are not going to sign up to a contest between robots. Indeed, there’s an argument that the terror unleashed remotely by drones has likely aggravated the many conflicts in which we find ourselves today.

Objection 4. Such robots already exist and we need them.
I am perfectly happy to concede that technologies like the autonomous Phalanx anti-missile gun are a good thing. You don’t have time to get a human decision when defending yourself against an incoming supersonic missile. But the Phalanx is a defensive system. And my colleagues and I did not call for defensive systems to be banned. We only called for offensive autonomous systems to be banned, like the Samsung sentry robot currently active in the DMZ between North and South Korea. It can kill anyone who steps into the DMZ from four kilometers away with deadly accuracy. There’s no reason we can’t ban a weapon system that already exists.

Objection 5. Weapon bans don’t work.
History contradicts this argument. The UN Protocol on Blinding Laser Weapons, which entered into force in 1998, has kept lasers designed to cause permanent blindness off the battlefield. If you go to Syria today — or any of the other war zones of the world — you won’t find this weapon, and not a single arms company anywhere in the world will sell it to you. You can’t uninvent the technology that supports blinding lasers, but there’s enough stigma associated with them that arms companies have stayed away. I hope the same will be true of autonomous weapons. We won’t be able to uninvent the technology, but we can put enough stigma in place that robots aren’t weaponized. Even a partially effective ban would likely be worth having. Anti-personnel mines still exist today despite the 1997 Ottawa Treaty. But 40 million such mines have been destroyed. This has made the world a safer place and resulted in many fewer children losing their lives or limbs.

AI and robotics can be used for many great purposes. Much the same technology will be needed in autonomous cars, which are predicted to prevent 30,000 deaths on the roads of the United States every year. It will make our roads, factories, mines and ports safer and more efficient. It will make our lives healthier, wealthier and happier. And in the military context, robots can be used to clear minefields, bring supplies in through dangerous routes, and sift through mountains of signals intelligence. But they shouldn’t be used to kill.

Photo by Flickr user arwcheek. Watch Toby Walsh’s talk from TEDxBerlin: “How can you stop killer robots?”