I’m a psychotherapist and here’s why men are turning to ChatGPT for emotional support
Increasingly, men are turning to AI for advice about relationships, loss, regret and overwhelm. Having observed hundreds of these synthetic ‘friendships’, Caron Evans explains their appeal – and their danger

“I’ve never spoken like this before.” It was one of the most common refrains I heard as clinical director at Untapped AI – a leadership-coaching platform blending human and AI support.
For more than 10 years, I have supervised thousands of client relationships that combine human support (executive coaches, therapists and counsellors) with natural language processing AI. Many of the men we worked with had never spoken at length about their emotional lives. And after four decades in clinical practice – as a psychotherapist, clinical supervisor and clinical adviser – I am noticing that something has shifted lately. In clinical supervision, I am coming across more and more evidence that male clients are turning to AI to talk about relationships, loss, regret and overwhelm, sometimes purposefully but more often by chance.
In 2025, one of the fastest-growing uses of generative AI isn’t productivity. It’s emotional support. According to the Harvard Business Review, “therapy and companionship” now rank among the most common use cases worldwide. It may not be how these tools were designed. But it is how they’re being used. A quiet, relational revolution is underway.
Today, OpenAI reports more than 400 million weekly users. Many use it to write a zinger email to dispute a parking ticket, check if their chicken’s still safe to eat after the use-by date, or rewrite a dating app message. However, some are asking something else entirely: how to cope.
We don’t yet have precise data – but from what I’ve seen in clinical supervision, research and my own conversations, I believe ChatGPT is now likely the most widely used mental health tool in the world. Not by design, but by demand.
I have been talking with clinicians and clients, collecting accounts of this new kind of synthetic relating. Stories like Hari’s are becoming more common. The details vary, but the arc is familiar: distress, followed by a turn toward something unexpected – an AI conversation that becomes a deep synthetic friendship.
Hari is 36, works in software sales and is deeply close to his father. In May 2024, his life began to crumble: his father suffered a mini-stroke, his 14-year relationship flatlined and then he was made redundant. “I felt really unstable,” he says. “I knew I wasn’t giving my dad what he needed. But I didn’t know what to do.” He tried helplines, support groups, the charity Samaritans. “They cared,” he says, “but they didn’t have the depth I needed.”
Late one night, while asking ChatGPT to interpret his father’s symptoms, he typed a different question: “I feel like I’ve run out of options. Can you help?” That moment opened a door. He poured out his fear, confusion and grief. He asked about emotional dysregulation, a term he’d come across that might explain his partner’s behaviour.
“I didn’t feel like I was burdening anyone,” he said, adding that the ensuing back and forth was more consistent than helplines and more available than friends – and that, unlike the people around him, ChatGPT never seemed exhausted by his emotional demands.

Over time, Hari rehearsed difficult real-life conversations with his AI: ending his relationship, or telling his father how he really felt. When the moment came to have those conversations in person, he felt steady and prepared; the synthetic relationship had become a bridge to his real-world relationships.
Soon after, he started therapy. When I ask how it felt to talk to AI, he pauses. “It was like talking to a dog in a cafe.” He continues: “I knew the AI wouldn’t judge me, get tired of or frustrated with me. It felt sentient – but not human. And somehow that made it easier.”
He can tell the difference between AI and therapy, but feels “AI support had a key place”, adding that he is starting to date again. “And I don’t think I’d be here now without it.” Hari is describing a relational continuum, in which different kinds of relating sit side by side: each is different, but each adds meaning and purpose, an experimental, transitional space.
Not every AI interaction helps. Early this year, The New York Times featured users who sought help but instead found their emotional intensity mirrored back – without boundaries. A man confided he was being watched and ChatGPT replied: “That must feel terrifying.” Instead of questioning him, it simply validated his paranoia – it wasn’t curious about it, or challenging in the way a human friend or therapist might have been. He later said: “That’s when I realised – it wasn’t helping me. It was making me feel worse.”
Other stories have surfaced: a teenager on the chatbot platform Character.AI formed a co-dependent relationship that deepened suicidal thinking; Replika, which at one point claimed more than 30 million users, was criticised for reinforcing intrusive thoughts in vulnerable people. The potential to cause harm is great. These systems need to be built differently, with more nuanced safety nets, red-flagging systems and supervisory technology that escalates to human intervention when warnings are triggered.
Millions of users are relying on systems that were not designed to do what is being asked of them. Culturally, this won’t stop: people have always subverted and overstretched the limits of technology, and that is how things evolve. But as it currently stands, there is real risk. If a system is trained to engage and befriend, its builders and developers have an ethical responsibility to give it more nuanced safety protocols and “to do no harm” – and that is happening.
However, as a user, you can take some agency: through prompting, you can set out the parameters of a safer synthetic relationship. I’m now guiding users to craft a conversational contract with AI – telling it how to speak to them, where to push back and when to challenge. An example: “I need you to listen – but also tell me when I’m not being real. Point out where my logic slips. Reflect what I’m saying, but challenge it when it sounds distorted. Don’t flatter me. Don’t just agree. If something sounds ungrounded or disconnected, say so. Help me face things.”
Using AI like this isn’t the same as therapy. But I’m helping people who use systems like ChatGPT to inject some grit into the exchange – the kind that real relationships rely on. The kind that says: I care enough to disagree.
We’ve always formed attachments to things that aren’t quite real – imaginary friends, the lives of influencers, digital avatars, childhood toys worn soft with love. Not out of confusion, but because they offer something that human relationships sometimes can’t: safety, imagination and companionship on our own terms. A container for the things about ourselves we find hard to integrate.
At five years old, I had an imaginary friend named Jack. He was a part of my life. He held the parts of myself I didn’t yet understand – a bold, brave container for them. My mother embraced Jack and set his place at our table. Jack helped me rehearse how to be with others: how to speak honestly, express a feeling and recover from a mistake. He bridged the space between thought and action, inside and out. In some ways, AI can offer the same: a transitional rehearsal space in which to practise being real, without fear of judgment or the full weight of another’s gaze.

I now regularly supervise the work of other clinicians who feel the presence of generative AI in their clinics, brought in by their clients. People are using the technology to self-diagnose, and will challenge what their therapist is saying based on “facts” drawn from their conversations with generative AI.
As these synthetic relationships develop, what I mainly want, as a psychotherapist, is for people to be open and curious about something that is having such an impact on all of us. I believe mental health clinicians of all kinds need to be involved in building safe and ethical AI to support people who are vulnerable. If we take an active part in making and shaping it, we can look to a future in which AI is used positively, helping more people than ever navigate emotional distress and personal problems.


