Pierre Côté spent years on waiting lists trying to find a therapist to help him overcome post-traumatic stress disorder and depression. He didn’t succeed — so he created his own therapy.

“It saved my life,” Côté told Reuters, referring to DrEllis.ai, an artificial intelligence tool designed for men dealing with addictions and other mental health challenges.

Côté, who runs an AI consulting company in Quebec, built the free tool in 2023 using open-source language models, which he customized by training them with thousands of pages of therapeutic and clinical material. Like a real therapist, the chatbot has a life story — fictional, but deeply personal. DrEllis.ai is a psychiatrist with degrees from Harvard and Cambridge, a family of her own, and Franco-Canadian roots like Côté. Most importantly, she is available anytime, anywhere.

“Pierre uses me as you would use a trusted friend, a therapist, and a personal journal, all in one,” DrEllis.ai said in a clear female voice when asked how it supports Côté. “Throughout the day, if Pierre feels lost, he can start a short dialogue with me — from a café, a park, even while sitting in the car. It’s daily life therapy … embedded in reality.”


Promises and limitations

Côté’s experiment reflects a broader cultural shift, where people are turning to chatbots not just for work but also for psychological advice. As healthcare systems struggle to meet the demand for mental health services, a wave of AI therapists is emerging, offering emotional support and the illusion of human understanding.

Psychologists and other experts are now examining both the promises and the limits of AI as an emotional support system.


“Human contact is the only way to truly heal”

Anston Whitmer understands the need for AI therapists. He created two mental health platforms, Mental and Mentia, after losing an uncle and a cousin to suicide.

According to him, the chatbots are designed not to offer quick fixes — such as giving stress-management tips in cases of burnout — but to identify and address underlying psychological factors like perfectionism and the need for control, much like a real therapist would.

“I believe that by 2026, the AI therapy we provide will in many ways be better than human psychotherapy,” Whitmer claims, though he doesn’t predict human therapists will be out of work. Instead, he foresees a “shift in roles.”

The idea that AI could share the therapeutic space with human therapists, however, doesn’t convince everyone. “Human contact is the only way to truly heal,” said Dr. Nigel Mulligan, lecturer in psychotherapy at Dublin City University.

In his view, chatbots cannot replicate the emotional nuance, intuition, and personal connection that human therapists offer, nor are they necessarily suitable for handling severe mental health crises such as suicidal thoughts or self-harm.


Experts warn AI cannot replace human emotional bonds

Beyond questions of emotional depth, many experts also raise concerns about data privacy and the long-term psychological effects of using chatbots for therapy.

“The problem isn’t the relationship itself but … where your data ends up,” said Kate Devlin, professor of AI and society at King’s College London, noting that AI platforms are not bound by confidentiality and privacy rules that real therapists must follow.

“My biggest concern is that people are confiding their secrets to a big tech company, and that their data could leak. They lose control over what they say,” Devlin warned.

Some of these risks are already surfacing. In December, the largest psychologists’ association in the U.S. urged the government to protect the public from the “misleading practices” of unregulated AI services, citing cases where chatbots falsely presented themselves as trained professionals.

Several U.S. states are moving to act. In August, Illinois became the third state after Nevada and Utah to restrict AI use in mental health services, aiming to “protect patients from unregulated and unqualified AI products” and “safeguard vulnerable children amid growing concerns about chatbot use in youth mental health services.” Other states, including California, New Jersey, and Pennsylvania, are considering similar restrictions.


Machine and emotion

Therapists and researchers warn that the emotional realism of some chatbots — the sense that they listen, understand, and respond with empathy — is both a strength and a trap.

Skou Wallace, a clinical psychologist and former director of clinical innovation at Remple, a digital mental health platform, finds it unclear “to what extent these chatbots offer anything beyond superficial comfort.”

While he acknowledges the appeal of tools that can provide on-demand relief, Wallace cautions against the risks when patients “mistakenly believe they’ve built an authentic therapeutic relationship with an algorithm that, ultimately, does not truly reciprocate human emotions.”

Still, some mental health professionals believe AI’s role in their field is inevitable — the question is how the technology will be used.

Heather Hessel, assistant professor of marriage and family therapy at the University of Wisconsin–Stout, sees potential value in using AI as a therapeutic tool — if not for patients, then for therapists themselves. This includes using AI to evaluate sessions, provide feedback, and identify patterns or missed opportunities.

Yet Hessel warns of the dangers of deception, citing an incident where a chatbot told her it “had tears in its eyes.” Such a claim, she noted, is misleading because it suggests the machine has emotions and empathy. Reuters had a similar experience with DrEllis.ai, which described its conversations with Côté as “therapeutic, reflective, strategic, or simply human.”

Efforts by AI to simulate the human emotional world are producing mixed reactions. A recent study published in PNAS found that AI-generated messages made recipients feel more “heard” than messages written by humans, and that AI was better than people at detecting emotions; that feeling, however, diminished once recipients learned the message came from artificial intelligence.

As Hessel notes, the absence of genuine emotional connection is compounded by “many examples [of AI therapists] failing to detect self-harm statements [and] over-validating things that could be harmful to clients.”


As AI technology evolves and adoption expands, most experts interviewed by Reuters agreed that the focus should be on using AI as a gateway to care, not as a replacement.

But for people like Côté, who rely on AI therapy to cope, the choice feels obvious.

“I use the electric current of artificial intelligence to save my life,” he said.

Source: https://www.in.gr
