The UK is opposing an international ban on lethal autonomous weapons systems (Laws), so-called 'killer robots' that can select and destroy targets without human input, at a United Nations conference this week.
Foreign Office and Ministry of Defence experts are in Geneva for a week-long discussion of the use of computing and AI in combat, as the Campaign to Stop Killer Robots, an alliance of scientists and human rights activists, calls for autonomous weapons to be banned.
"At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area," a Foreign Office spokesperson told The Guardian.
"The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."
The same cannot be said of systems such as Israel's Iron Dome and the US's Phalanx, which respond to threats automatically.
The conference will also consider whether emotionless machines may be advantageous in combat, as they do not feel fear or hatred and have no sense of morality.
It comes at a time of increasing concern over artificial intelligence running away from its human creators, with Google recently patenting robots with personalities that can be imbued with, amongst other things, 'fear and derision'.
"Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology," said the Campaign to Stop Killer Robots. "Human control of any combat robot is essential to ensuring both humanitarian protection and effective legal control."