
Rise of the Killer Robots - A Cause for Concern?


Could your Roomba be trying to murder you in your sleep? Probably not, but recent research into drone technology is bringing us closer and closer to a world of autonomous killing machines. Is this a good step forward?

On the one hand, every soldier replaced by a robot is one fewer soldier who could be killed or maimed in action. On the other hand, could we trust a robot to kill only its designated targets? Could reducing the number of human troops sent to battle in favour of robots make war an easier prospect for nations? Who would be responsible for the actions of these robots? These are complex questions, with no real yes-or-no answers at the moment.


Credit: Oli Scarff

So let’s not talk about robots, with their complex AI, decisions to make and billions of pounds of R&D behind them. Let’s talk about another device of war to which one could, conceivably, put many of the same ethical questions: the landmine.

Just like a robot, a landmine is, essentially, an autonomous killing machine. Place it somewhere and forget about it; hope that an enemy steps on it and dies, no human input necessary. Just like robots, landmines reduce the number of soldiers you need on a battlefield: why have soldiers set an ambush for an enemy when a landmine on the road works just as well? Why drop thousands of soldiers into a warzone when you can simply seed the jungle with landmines and wait for the enemy to stumble onto them? And, just as a robot is said to be, a landmine is incapable of distinguishing between an enemy combatant and a civilian.

The difference is that landmines as we know them were first brought to the theatre of war in 1862, and they remain a common feature of modern battlefields. While certain types were banned under the Ottawa Treaty of 1997, the vast majority of larger mines are still very much in use. And while there are ethical problems with their use, those concerns are largely confined to the indiscriminate harm landmines cause among civilians. It is worth pointing out that none of the three major powers (the US, Russia, and China) signed this treaty. There was no international outcry, no large petitions, no vocal marches, no personalities decrying this omission. And yet there is with these robots.

The Campaign to Stop Killer Robots says, in its ‘The Problem’ article, that ‘...fully autonomous weapons would not meet the requirements of the laws of war’. However, their reasons, and those of the source they cite, could just as well be applied to a bullet mid-flight. The reasons they give boil down to autonomy, as they should. By their logic, any kill that is not 100% enacted and completed by a human on the scene is unethical. I imagine the same arguments were put forward when the first cannons were shown off: “It reduces the number of men I need to storm a castle! Could I therefore get away with more wars? If I fire the cannon and debris hits a priest, is it my fault? Is it morally right for me to use a mortar to kill someone, rather than facing them in a fair fight?”

As you may have guessed, the author’s opinion is that there is no fundamental ethical difference between new advances in military science, e.g. robotics, and past advances, e.g. landmines or cannon. Of course, this is not the only argument; there are many out there. The UN Special Report discussing this issue, while the author disagrees with its conclusions, is quite balanced and well worth the read.

Lastly, banning ‘killer’ robots outright, whether you agree with the reasons currently given or not, is a short-sighted idea. Banning something removes the possibility of it ever improving. Imagine banning all cars because the first one kept breaking down, or banning all surgery because early practice killed more patients than it saved. Only through careful debate, discussion and improvement can a proper answer be reached.
