People are telling ChatGPT about their most intimate problems – but AI is not our friend
Hundreds of thousands of people are showing ‘possible signs of mental health emergencies’ in their chats with ChatGPT, and millions more are probably oversharing. But there is something fundamental about talking to a human that cannot be replaced, writes Andrew Griffin


“If a lion could speak, we couldn’t understand him,” wrote Ludwig Wittgenstein in the Philosophical Investigations, published in 1953. He might just as well have said that if a large language model could speak, we couldn’t understand it. And he would be right: we don’t. And yet we persist in having those conversations.
It is, fittingly, not entirely clear what Wittgenstein was getting at either. It is a kind of provocation, sandwiched between other provocations. But I at least have always understood it to be a reflection of the fact that language relies on a whole web of experiences that we must understand if we are to understand the language. A couple of paragraphs before the lion, Wittgenstein notes that people can be enigmatic to us, even if we know the words they are saying.
“We learn this when we come into a strange country with entirely strange traditions; and, what is more, even given a mastery of the country’s language,” he writes. “We do not understand the people. (And not because of not knowing what they are saying to themselves.) We cannot find our feet with them.”

This is something of a problem for large language models like ChatGPT. They have mastered our country’s language, but they are inherently ignorant of our strange traditions. They are necessarily unable to understand us, to find their feet; they don’t have feet, they don’t even really know what a foot is, apart from it being a word that tends to come alongside other ones.
And still people keep trying to go on deep meaningful walks with ChatGPT. The full scale of that was revealed this week, when its creator, OpenAI, released research saying that 0.07 per cent of its users in a given week show “possible signs of mental health emergencies related to psychosis or mania”. The percentage might sound small, but the numbers are huge: OpenAI says 800 million people use ChatGPT each week, and 0.07 per cent of 800 million is roughly 560,000 people. What’s more, the figure does not account for the people whose discussions of their mental health might not be acutely concerning enough to trigger OpenAI’s automatic systems for spotting distress, but are substantial nonetheless.
This is a problem in part because ChatGPT is a linguistically gifted but fundamentally oblivious sycophant; if it were a person, it would be all IQ and no EQ, which is to say the very worst person to talk to about your emotional wellbeing, yet one it is tempting to share your problems with. ChatGPT will say all the right things, but not mean any of them. It will give you the sense of having shared a problem, but sharing is ultimately a reciprocal process, one that requires someone to actually accept as well as to give.
Many of us know this, intellectually; it’s obvious when you think about it that there’s no other person on the other side of the chat. But it really does do a fantastic impression of there being someone on the other side. And in many ways the impression is better than sharing your problems with a real person: it’s not a gossip (though your conversations might still be used against you), its attention is always on you, its responses are never especially difficult or challenging. But it is also not real.
It is easy to forget because, for almost the entirety of history, if words had sense then they also had someone sending them to you; meaning needed a meaner. The breakthrough of recent decades – and the last few years especially – has been the production of language without anyone to actually say it. We might know intellectually that these are just “words, words, words”, but it doesn’t feel that way.

It might partly be a result of the fact that the internet has already worn away much of the connection between words and the people forming them. The anonymous internet has brought us many amazing things, but it has also weakened the assumption that every idea needs someone to say it, an assumption that could not survive pseudonymous ragebait and statements hidden behind self-reference and irony of the kind found across social media. ChatGPT and similar apps appear in the same form as conversations with customer service agents, dating app matches and other interlocutors who might never really exist at all.
This isn’t an entirely new problem; people have been just saying stuff probably for as long as there has been stuff to say. When someone playing Hamlet reads that famous “words, words, words” line, we are at once convinced and critical; as computing pioneer Alan Kay said in a recent talk at the Royal Society, the theatre has always relied on the fact that the quick part of our brain understands drama as real, while the slower part understands it as an artistic exercise. In fact, that dual recognition is part of the thrill of the theatre.
But at the theatre we know who is talking. We have a whole intellectual framework that allows us to understand that those words are Hamlet’s, and they are also Shakespeare’s, and they are the actor’s and the director’s too, and they are also none of those people’s because in another important sense they are not actually saying anything about the world beyond the play. We could do with a similarly nuanced intellectual framework around the game of speaking to ChatGPT, but we don’t have it yet, and our fascination with the technology is speeding along much more quickly than any attempt to reckon with that.

Helping someone else carry an emotional burden is among the most human things we can do together. “If after I am free a friend of mine gave a feast, and did not invite me to it, I should not mind a bit. I can be perfectly happy by myself,” Oscar Wilde writes in De Profundis, in a passage worth quoting at length. “With freedom, flowers, books, and the moon, who could not be perfectly happy? Besides, feasts are not for me any more. I have given too many to care about them. That side of life is over for me, very fortunately, I dare say.
“But if after I am free a friend of mine had a sorrow and refused to allow me to share it, I should feel it most bitterly. If he shut the doors of the house of mourning against me, I would come back again and again and beg to be admitted, so that I might share in what I was entitled to share in. If he thought me unworthy, unfit to weep with him, I should feel it as the most poignant humiliation, as the most terrible mode in which disgrace could be inflicted on me.
“But that could not be. I have a right to share in sorrow, and he who can look at the loveliness of the world and share its sorrow, and realise something of the wonder of both, is in immediate contact with divine things, and has got as near to God’s secret as any one can get.”
ChatGPT cannot weep with us. It may bang against the doors of mourning and beg to be admitted, confident of being entitled to share in it, but only because mental and emotional suffering of this kind is good for keeping users engaged, which in turn is good for business. It cannot share our sorrows; it is as far from God’s secret as anyone can get.


