
One of the key ways that we protect people from AI could make things worse, study says

Reminding people that they are talking to a chatbot could actually exacerbate mental distress, researchers warn


One of the key ways of trying to minimise the harms of artificial intelligence on our mental health could actually make things worse, a new study has warned.

Amid widespread concern about how chatbots could contribute to mental distress or even psychosis, one suggestion has been that chatbots should regularly remind their users that they are not a person, and that they are talking to a chatbot.

But now researchers have argued that could actually make the harm worse, by exacerbating the mental distress of people who are already vulnerable.

“It would be a mistake to assume that mandated reminders will significantly reduce risks for users who knowingly seek out a chatbot for conversation,” said public health researcher Linnea Laestadius of the University of Wisconsin-Milwaukee, in a statement. “Reminding someone who already feels isolated that the one thing that makes them feel supported and not alone isn’t a human may backfire by making them feel even more alone.”

The warning comes amid reports that have linked chatbots to both murder and suicide. Because of the obliging nature of the systems – as well as their still relatively unknown and unpredictable nature – AI chatbots have been accused of encouraging people’s delusions or mental ill health rather than helping them.

Some have suggested that it might help in such situations to remind people that they are talking to a chatbot and that it is unable to feel human emotion. But the evidence does not support that, the authors of the new work suggest.

“While it may seem intuitive that if users just remembered they were talking to a chatbot rather than a human, they wouldn’t get so attached to the chatbot and become manipulated by the algorithm, the evidence does not currently support this idea,” said Laestadius.

People might also be speaking to those systems about their mental distress precisely because they are not human, the researchers suggest. “The belief that, unlike humans, non-humans will not judge, tease, or turn the entire school or workplace against them encourages self-disclosure to chatbots and, subsequently, attachment,” said author Celeste Campos-Castillo, a media and technology researcher at Michigan State University.

What’s more, the reminders could simply add more distress on top of users’ existing concerns. They might find themselves upset not only by whatever is causing them to talk to the chatbot, but also by being reminded that they are fundamentally different and separate from the thing they are confiding in.

“Discovering how to best remind people that chatbots are not human is a critical research priority,” said Laestadius. “We need to identify when reminders should be sent and when they should be paused to be most protective of user mental health.”

The work is described in a new paper, ‘Reminders that chatbots are not human are risky’, published in the journal Trends in Cognitive Sciences.
