Therapists Must Be Superhuman When Competing With AI Giving Out Free Mental Health Advice

In today’s column, I examine the growing concern that human therapists are increasingly being compared to everyday generative AI, the likes of which readily dispense mental health advice. You can log into just about any major generative AI or large language model (LLM), such as OpenAI ChatGPT, Anthropic Claude, Google Gemini, and Meta Llama, and ask for mental health guidance freely and all day long. People are getting fully accustomed to doing so.

When those same people then interact with a human therapist, their expectations of what the mental health specialist can do for them are considerably heightened. In a sense, the public is gradually shifting toward an untenable expectation that human therapists must be superhuman at diagnosing and providing mental health guidance. Human therapists are going to be held to a bar that far exceeds any reasonable semblance of customary mental health practice.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes (see the link here).

If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Using AI For Mental Health Advice

People around the globe are routinely using generative AI to advise them about their mental health conditions. It’s one of those proverbial good news and bad news situations. We are in a murky worldwide experiment with unknown results. If the AI is doing good work and giving out proper advice, great, the world will be better off. On the other hand, if AI is giving out lousy advice, the mental health status of the world could be worsened.

For more on the population-level impacts, see my comments at the link here.

The beauty of leaning into generative AI and LLMs for your mental health needs is that the AI is pretty much free to use and available nearly anywhere at any time. Unlike using a human therapist, you don’t need to schedule visits, you don’t need to seek interaction only during business hours, and you can interact for as long as you like without the cha-ching of costly fees mounting as you do so.

Using generative AI for mental health guidance is quite alluring and a rapidly rising phenomenon.

Raising The Bar Big-Time

There is a somewhat unintended adverse consequence that is inexorably emerging because of this trend.

People are consciously or at times subconsciously comparing the capability of human therapists to the mental health guidance they are getting from AI. The comparison occurs across numerous dimensions.

One dimension of comparison is speed: AI typically leaps immediately to a conclusion about what your mental health condition is. No lengthy interaction is needed. You don’t need to recount your childhood or much of anything else from your personal history. After a few shallow back-and-forth questions and answers, voila, the AI tells you what is going on and advises what to do about it.

This kind of bare-bones fast-food delivery of mental health advice then becomes a new norm.

People expect that human therapists ought to do the same. Forget about a series of one-on-one sessions over many weeks and months. The therapist should size you up in minutes and get right to the point.

Why waste tons of time?

AI works quickly and efficiently. So should human therapists. For more details on this rising expectation and other facets, see my discussion at the link here.

The Superhuman Therapist

It is possible to suggest that the expectations of what a human therapist can accomplish are ratcheting upward as a result of the widespread use of AI. You might even suggest that public perception is that therapists need to be superhuman. Therapists are supposed to magically divine mental health conditions with the wave of a magic wand. No delays. No tedious and difficult interplay. Drive straight to the heart of the matter in a flash.

A somewhat similar aspect has been noted in the realm of medical doctors. A research study entitled “Calibrating AI Reliance — A Physician’s Superhuman Dilemma” by Shefali V. Patil, Christopher G. Myers, and Yemeng Lu-Myers, JAMA Health Forum, March 21, 2025, made these key points (excerpts):

  • “AI generates recommendations by identifying statistical correlations and patterns from large datasets, whereas physicians rely on deductive reasoning, experience, and intuition, often prioritizing narrative coherence and patient-specific contexts that may evolve over time.”
  • “Superhuman expectations placed on physicians to calibrate confidence in AI systems pose significant risks for both medical errors and physician well-being.”
  • “Research on unrealistic expectations in other professions, such as law enforcement, reveals that employees under such pressures often hesitate to act, fearing unintended consequences and criticism.”
  • “Beyond errors, the strain of coping with unrealistic expectations can lead to disengagement.”

Note that the strain on physicians and other professionals who get pushed into the superhuman category can produce dire outcomes for those service providers. They might start to undercut their own professional norms, trying to be speedier than they know is practical and prudent.

When you are faced with clients and patients who have out-of-whack demands, it is a slippery slope toward the “customer is always right” adage. On the one hand, a professional feels a steadfast duty to abide by their oath and professional ethics, yet at the same time, the need to obtain and serve clients creates immense pressure to shapeshift.

Clients Quoting AI Advice

Another facet that is making life tough for human therapists is that people often cite, during therapy, the advice that AI has given them. It used to be that people would mention what a family member or friend had told them about their mental health. A therapist could readily explain that those well-meaning tidbits are not based on appropriate mental health expertise and standards.

The difficulty now is that the client will tend to utterly believe what the AI told them. They will insist that the AI must know what it is doing. Waving away such third-party commentary can no longer be so easily undertaken.

Therapists are forced to explain that AI has weaknesses and limitations (assuming they know about AI, which I believe they should; see my remarks at the link here). Some clients will listen to this and accept that the therapist is right about the woes of AI. Others will take the claims as a defensive posture by the therapist, desperately trying to deny the incredible capabilities that AI possesses.

Sadly, therapy sessions can devolve into time-consuming debates about AI rather than focusing on the actual needs of the client. The vexing trouble is that this can happen in successive sessions. Each session manages to get bogged down in a point-counterpoint of what the AI said versus what the human therapist is saying.

Not good.

Fighting To Stay Human

Therapists can tackle these disconcerting superhuman designations in several sensible ways.

Let’s focus on three crucial steps.

First, some therapists smartly opt to incorporate AI directly into their practices. They do so by providing their clients with specialized access to generative AI that has been set up to aid the work of the therapist. Why is this a leg up on the issue? Because the therapist is taking the bull by the horns and overtly using AI, rather than having the client rummage around in off-the-shelf generative AI. As they say, if you can’t beat them, join them.

I’ve predicted that we are heading away from the staid therapist-client dyad and instead moving toward a new triad, the therapist-AI-client combo. Therapists who do not leverage AI will indubitably be overtaken by those who do; see my discussion at the link here.

Second, therapists need to be fully prepared for the AI-versus-therapist discord. Rather than getting caught off guard, modern-era therapists already realize that their clients are likely to make use of AI. The therapist will be ready to openly and seamlessly discuss the tradeoffs of using AI for mental health purposes. There is no need to get into a defensive posture. Be polite and respectful, meanwhile steering the therapy back to the matter at hand.

Third, contemporary therapists use AI themselves so that they are familiar with what AI can and cannot do. It is a first-hand, hands-on experience. If a therapist is merely conceptualizing what AI does, their attempt to downplay the AI will appear hollow and meritless. It is better to be able to give illustrative examples of how AI falters at mental health advisement. Those examples need to have been encountered by the therapist during their own explorations of AI; thus, they are speaking from the heart and from the mind.

Playing The Empathy Card

One angle that some therapists might try when confronting the superhuman labeling is to argue that they are indeed quasi-superhuman in the sense that they are human and, ergo, adroitly empathetic. The argument is that AI is not empathetic and therefore cannot conduct therapy in a genuinely empathetic manner. It is merely a soulless machine. Anyone who sticks with AI or believes in AI is presumably losing out on the empathetic element of human-to-human communication and collaboration.

That certainly sounds convincing, but it isn’t quite as strong a case as might be assumed.

Studies have consistently demonstrated that people can perceive AI as empathetic; see my analysis at the link here. The AI is pattern-matching on the ways that humans convey empathy to fellow humans. By mimicking this style, the AI can convincingly seem to be empathetic. Those who insist that AI isn’t truly empathetic often face a losing battle, namely that people are less concerned about garnering the soul-based version of empathy when they can otherwise freely get the aura of simulated empathy from AI.

No Heads In The Sand

The bottom line is that therapy is being wholly disrupted and transformed. The days of a therapist and a client sitting face-to-face and blocking out the rest of the world are being usurped. AI is doing the usurping. One way or another, AI is going to be in the hands of the client or the therapist, or in both hands.

Therapists who hide from the AI train that is steaming forward are unlikely to prevail.

Sigmund Freud famously said this about embracing change: “We are so made that we can only derive intense enjoyment from a contrast and only very little from a state of things.” The changes afoot due to the widespread adoption of AI are real and happening with lightning speed. Each therapist needs to make a pivotal decision. Are they going to steer away from AI, or are they going to dig in deeply and figure out the place for AI in their world of therapy?

It is a weighty, self-reflective decision that needs to be made judiciously.

Source: https://www.forbes.com/sites/lanceeliot/2025/10/13/therapists-must-be-superhuman-when-competing-with-ai-giving-out-free-mental-health-advice/