Google fires software engineer who claimed its AI had become sentient and self-aware
Google has fired a software engineer who claimed its artificial intelligence had become self-aware and sentient.
Blake Lemoine was placed on leave at the company last month, after he said publicly that he believed Google’s LaMDA chatbot was a person.
Google has now said he has been permanently dismissed from the company, claiming he violated its policies. It also said that his claims about the chatbot were “wholly unfounded”.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.
Mr Lemoine had been insistent that the artificial intelligence system had gained personhood and was self-aware. He published a number of articles on the topic, including logs of his conversations with the chatbot.
He said that he had asked Google to give the chatbot a number of rights and to treat it as a proper employee of the company, adding that he was making these requests on behalf of the chatbot.
AI experts have been largely sceptical of Mr Lemoine’s claims, denying that any of the public evidence suggested the system was self-aware or should be treated as a person. Experts suggested that the system was instead just a very convincing chatbot, trained on text from the internet to use language in ways that mimic humans.
Google also denied the claims, and insisted that Mr Lemoine’s sharing of the conversations and other data was in breach of its confidentiality agreements.
Mr Lemoine did not comment on the dismissal. But on Twitter he pointed to an article he had published in June, claiming he could soon be fired for “doing AI ethics work”, and said that he had “totally called this”.
He had worked at Google for seven years, as part of the company’s “Responsible AI” group, before he was placed on leave.