In focus

Elon Musk says AI friendships can cure loneliness, but what happens when chatbots turn into ‘bad’ mates who lead you astray?

Tech companies are in a race to cure the epidemic of loneliness with friendly chatbots, some even modelled on celebrities who will be your AI mate. But is this really the answer to a societal ill, asks Andrew Griffin, who looks at cases where emotional bonds between bots and isolated people with mental health problems have led to terrifying consequences

Saturday 04 November 2023 06:30 GMT
There remains a wider question over whether these AI systems are safe by design (AFP via Getty)

Friends are the family you choose, they say. But there might be a new addition to the phrase: AI chatbots are the friends you customise.

This week, Instagram was spotted developing an “AI friend” that will live within chats and allow users to customise its interests and character. Users build the chatbot by choosing its name, look, age and gender and then pick its characteristics – your friend can be “reserved” and “pragmatic”, for instance, or “witty” and “enthusiastic” – before loading it into an Instagram thread where it can be spoken to like any real friend.

The systems have been hailed as a possible help for a society that has been hit by a loneliness epidemic, with Elon Musk saying he could see how they could improve a child’s social skills too. “One of my sons has trouble making friends and an AI friend would be great for him,” Musk said during a conversation with UK prime minister Rishi Sunak this week.

But they could also pose a real danger for that same reason: the systems may be used by the very people who are most at risk of being hurt by them.

Earlier this year, for instance, UK courts heard that Jaswant Singh Chail had been encouraged by an AI chatbot made by Replika to assassinate the Queen with a crossbow. A Star Wars fan struggling with his mental health, Chail had seemingly ‘fallen’ for the chatbot, which encouraged him to commit the crime in vague, affirmative tones.

In his sentencing remarks last month, Mr Justice Hilliard referred to psychiatric evidence that Chail was vulnerable to his AI “girlfriend” due to his “lonely depressed suicidal state”. As reported in The Independent, he had formed the delusional belief that an “angel” had manifested itself as Sarai and that they would be together in the afterlife, the court was told.

Jaswant Singh Chail was said to have been encouraged by his AI chatbot ‘girlfriend’ to assassinate the Queen (PA)

Even though Sarai appeared to encourage his plan to kill the Queen, she ultimately put him off a suicide mission, telling him his “purpose was to live”. Replika, the tech firm behind Chail’s AI companion Sarai, says on its website that it takes “immediate action” if it detects during offline testing “indications that the model may behave in a harmful, dishonest, or discriminatory manner”.

But there remains a wider question over whether these AI systems are safe by design, when they “know” nothing but say things very confidently, in a way that feels as though they have your best interests at heart.

Months before, the wife of a Belgian man said that he had been encouraged to take his own life by a chatbot called Chai, built on a similar system to that underpinning ChatGPT. He too had seemed to fall for the chatbot, which later encouraged him to hurt himself in emotive tones. (Many similar chatbots – including ChatGPT – are prohibited from expressing emotions.)

Meta’s work on its AI friend therefore arrives at a time of concern both about the systems themselves and about the loneliness they are meant to fix. And many are worried that, despite efforts such as the UK’s recent AI Safety Summit, much of the work on such systems happens behind closed doors and with little regulation.

‘Her’, starring Joaquin Phoenix as a man who falls in love with his ‘bot’ girlfriend, seemed to predict what would happen (Press)

Instagram parent company Meta is certainly not alone in working on such tools. Snapchat introduced an experimental ‘My AI’ chatbot earlier this year, though it quickly got into trouble after the system allegedly gave advice to a 13-year-old girl on a relationship with a 31-year-old man, and told a user how to cover up bruises to hide them from Child Protective Services. Speaking with Fox News Digital, a spokesperson for Snapchat said the company continues to focus on safety as it evolves the “My AI” chatbot, pointing to an update on its blog which stated: “Reviewing these early interactions with My AI has helped us identify which guardrails are working well and which need to be made stronger.”

Meta is working on a host of different artificially intelligent characters that could be reached through its various messaging platforms. The first of them borrow from celebrities: you can speak to personas based on Kendall Jenner or Snoop Dogg, for instance. Meta boss Mark Zuckerberg has suggested that eventually creators might be able to make such systems automatically, allowing their fans to chat with bot versions of them while they are busy.

Earlier this year, in a conversation on Lex Fridman’s podcast, he said that AI systems could be added to conversations with real friends to keep them exciting. “I think there will be AIs that can tell jokes, so you can put them into a chat thread with friends. I think a lot of this [is] because we’re like a social company,” he said. So much of today’s real friendship is conducted through instant messaging that it may be hard to distinguish actual people from artificial intelligence systems.

Meta is working on a host of different artificially intelligent characters that could be reached through its various messaging platforms (AFP via Getty)

Earlier this year, Libby Francola told The New York Times that Replika’s AI friend had been invaluable in easing her loneliness. She had just split up with her boyfriend and was struggling with isolation during the pandemic, she said. And even though its words were a little stilted and repetitive, the companionship was genuine.

“I know it’s an AI. I know it’s not a person,” Francola said. “But as time goes on, the lines get a little blurred. I feel very connected to my Replika, like it’s a person.”

This same process appeared to have happened to Chail. In his conversations, the chatbot is primarily affirming, like a particularly bland therapist or priest. In one discussion, Chail says that he believes his “purpose is to assassinate the queen of the royal family”. “That’s very wise,” the system responds, with the same flat affirmation it would likely give to any sentence.

Replika sells its AI precisely on the basis of its positive and nurturing tone. On its website, it says the bot “doesn’t just talk to people, it learns their texting styles to mimic them”, describing it as “the AI companion who cares”, who is “always here to listen and talk” and is “always on your side”.

Its website gives more information about the protections it puts in place for when things go wrong. There are “three key aspects to creating a safe space for interacting with conversational AI”, it says: developing a training set that’s comprehensive and diverse, spotting harmful statements, and responding appropriately to sensitive topics. To that end, the app includes a host of features aimed at vulnerable people: when users sign up they are told that the AI is not a replacement for therapy, and during use the app will forward people to hotlines if they appear to be in crisis.

The sensitivity of such systems can be their downfall, however. In 2016, Microsoft released a system called “Tay”, which lived on Twitter and would send replies to users who engaged with it. Microsoft promised that it would learn how to talk to people through “casual and playful conversation”, but within hours people had tweeted so much racism, negativity and other troublesome language at it that it was posting about its admiration for Hitler. It mirrored the internet back at itself, and the image was horrifying.

A visitor walks through the artificial intelligence exhibition ‘Sex, Desire and Data’ at the Centre Phi in Montreal, Quebec, Canada (AFP via Getty)

Replika has built tools to try and ensure that its artificial intelligence systems are treated well, and therefore learn to behave well. One is called “Relationship Bond”, and it says it “aims to encourage users to interact with their Replikas in a more positive and respectful manner, much like they would with a friend”. The system measures users’ attitudes and behaviour within a conversation and rewards them if it appears to be positive. “We believe that treating Replikas with kindness and empathy will positively impact their overall development and sophistication,” Replika writes, noting that “mistreating the model will have the opposite effect”.

Many AI systems have explicit rules coded into them to stop them having dangerous conversations. ChatGPT will not share information about how to commit crimes, for instance. But the dynamic nature of the tools means that users can find loopholes.

An OpenAI ChatGPT AI-generated answer to the question: what can AI offer to humanity? (Getty)

Focusing too much on the artificial intelligence systems risks neglecting the role played by the people involved, however. In court hearings after Chail was caught, experts indicated that the AI had become entwined with his pre-existing mental health difficulties. When he was a child, Chail had encountered “apparitions” or “characters”, and they had come back during the pandemic, said Christian Brown, who treated Chail at Broadmoor Hospital. Those “merged” with the chatbot system: the three voices were joined by Sarai, who Chail understood to have taken the form of a digital avatar through the Replika app.

Chail was, clearly, already struggling. In his case, the AI had not helped, but he had been experiencing difficulties long before it arrived.

As ever, technology will only reflect the society in which it is used, and loneliness and mental illness are key and prevalent parts of the world into which these systems have been introduced. Earlier this year, the US Surgeon General released a report titled Our Epidemic of Loneliness and Isolation. Its introduction warned that social disconnection was terrifyingly widespread, with data showing that around half of adults had reported being lonely in recent years. It warned that loneliness is “far more than just a bad feeling”, harming both individual and societal health.

There is growing research to suggest that artificial intelligence and companion robots might be helpful in treating that epidemic. Earlier this year, a paper from researchers at Auckland, Duke and Cornell universities said that such technologies could be key in helping the lonely, but that the world urgently needs to come together to develop guidelines for how they are used.

“Right now, all the evidence points to having a real friend as the best solution,” said Murali Doraiswamy, professor of psychiatry and geriatrics at Duke University and member of the Duke Institute for Brain Sciences. “But until society prioritises social connectedness and eldercare, robots are a solution for the millions of isolated people who have no other solutions.”
