Google fills 'concrete' AI weapons policy with caveats

Analysis: Chief executive Sundar Pichai faced an internal revolt over the tech giant's ties with the US military

Anthony Cuthbertson
Friday 08 June 2018 17:08 BST
Google CEO Sundar Pichai has refused to commit to ending the firm's relationship with the US military

When Google quietly removed almost all mentions of its famous ‘Don’t be evil’ slogan from its code of conduct earlier this year, the technology giant was in the midst of an internal revolt about its ties with the US military.

The firm was working on the controversial Project Maven, an artificial intelligence (AI) program that analyses imagery and could be used to enhance the efficiency of drone strikes.

More than 3,100 employees signed an open letter in April that stated: “We believe that Google should not be in the business of war… We cannot outsource the moral responsibility of our technologies to third parties.”

It went on to demand that Google “draft, publicise and enforce a clear policy” governing its use of AI.

Around a dozen employees had already resigned in protest at the relationship, citing ethical concerns that autonomous weapons stood in direct contradiction to Google’s “Don’t be evil” motto.

This week the tech giant's chief executive Sundar Pichai responded by unveiling his company’s “concrete standards” for AI. However, some have suggested the resulting AI Principles are more porous than Mr Pichai’s language implies.

Mr Pichai prefaces the seven-point list of “objectives for AI applications” by saying it is by no means fixed. “We acknowledge that this area is dynamic and evolving,” he says, adding that the principles are subject to change given the company’s “willingness to adapt” its approach.

The points listed appear open to interpretation – a significant contrast to other principles put forward on the use of AI. For decades, the “Three Laws of Robotics” by the science fiction writer Isaac Asimov have served as a cornerstone for the ethical development of artificial intelligence. First set out in 1942, the first law states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

This idea was elaborated on last year in the Asilomar AI Principles, developed by academics and ethicists as guidelines for anyone working in the field of artificial intelligence. Those rules state: “AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.”

But beyond the complexity of Google’s offering on the subject, the most notable caveat to the company’s AI principles comes towards the end of the 1,000-plus word document.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Mr Pichai says. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”

This leaves Google’s relationship with the US military wide open, even though the company recently said it would not renew its Project Maven contract.
