People are increasingly trying to reconcile what AI has told them about their mental health with what their human therapist tells them.
In today’s column, I examine the rising trend of therapists being confronted by clients who walk in the door with mental health solutions that have been generated via AI. It’s an easy task for the client to undertake. All they need to do is ask generative AI a simple question or two and instantly get answers that supposedly will aid or cure their expressed mental health qualms. They then eagerly present the AI-produced responses to their therapist and expect the therapist to blindly abide by the AI’s indications.
It’s an increasingly prolonged tussle in which the therapist must not only contend with the needs of the client but also interpret, reinterpret, and potentially refute what the so-called authoritative AI has stated. Therapists are being bogged down by AI quick-fix responses that inaptly heighten client expectations.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.
People Are Using AI For Mental Health
It seems like nearly everyone is using AI for all kinds of help these days.
You probably know that one of the most prominent and dominant uses of generative AI and large language models (LLMs) is to seek mental health advice from AI. For my in-depth analysis of surveys showcasing this trend, see the link here. ChatGPT alone has around 400 million weekly active users, and indubitably, some proportion are avidly using the immensely popular AI for mental health guidance, see my estimates at the link here.
This makes abundant sense.
Why so?
AI is available anytime and anywhere. You can log in and start asking mental health questions without having to reserve a time slot or make any kind of logistical arrangements. The AI will readily converse with you about your mental health concerns for as long as you wish. Usually, the AI usage is free or accessible at an extremely low cost.
People around the globe are routinely using generative AI to advise them about their mental health conditions. It’s one of those proverbial good news and bad news situations. We are in a murky worldwide experiment with unknown results. If the AI is doing good work and giving out proper advice, great, the world will be better off. On the other hand, if AI is giving out lousy advice, the mental health status of the world could be worsened.
For more on the population-level impacts, see my analysis at the link here.
People Turn To AI Opinions
An intriguing and perhaps disturbing trend is starting to appear.
Before going to see a therapist, some will decide to figure out their potential issues by conferring with AI. They might do so briskly. Or they might spend gobs of hours over numerous weeks of discourse with the AI.
Even people already seeing a therapist are bound to undertake the same kind of inquiries with AI. They arm themselves with AI-generated mental health indications and take those with them when they visit their human therapist. The added twist is that they get the AI to comment on what their therapist has already done or told them to do. Not only is the AI providing advice, but it can also be quite a loud-mouthed critic of what the human therapist has clinically performed.
So, we have two classifications of this happening:
- (1) Prospective clients consult AI before seeing a therapist.
- (2) Existing clients consult AI while under the care of a therapist.
It used to be that people would mention what a family member or friend had told them about their mental health. A seasoned therapist could readily explain that those well-meaning tidbits are not based on appropriate mental health expertise and standards. The trick now is that the client will tend to utterly believe what the AI told them. They will insist that the AI must know what it is doing. A therapist can no longer so easily wave away such third-party commentary.
AI or the aura of AI is in the session room, whether therapists welcome it or not.
Medical Doctors In The Same Boat
There is an old saying that misery loves company.
If so, the presumably buoyant news on this heady matter is that medical doctors are facing the same conundrum with their patients. A recent piece in the Journal of the American Medical Association (JAMA), entitled “When Patients Arrive With Answers” by Kumara Raja Sundar (JAMA Network AI In Medicine, July 24, 2025), made these salient points (excerpts):
- “Patients arriving with researched information is not new. They have long brought newspaper clippings, internet search results, or notes from conversations with family.”
- “Increasingly, patients are bringing AI-generated insights into my clinic and are sometimes confident enough to challenge my assessment and plan.”
- “Generative artificial intelligence (AI), with tools like ChatGPT, offers information in ways that feel uniquely conversational and tailored. Their tone invites dialogue. Their confidence implies competence.”
- “I find myself explaining concepts like overdiagnosis, false-positives, or other risks of unnecessary testing. At best, the patient understands the ideas, which may not resonate when one is the person experiencing symptoms. At worst, I sound dismissive.”
- “If patients are arming themselves with information to be heard, our task as clinicians is to meet them with recognition, not resistance. In doing so, we preserve what has always made medicine human: the willingness to share meaning, uncertainty, and hope, together.”
I’m sure you can see that those points echo a somewhat similar consideration in the realm of therapists and providing mental health care.
Engaging In Therapy
Let’s unpack some of the pros and cons of this overall trend. I’ll start on the optimistic side and then walk you through the gotchas and sour side.
You could take a positive perspective and herald that people are potentially becoming more actively engaged in their therapy. I say this because sometimes people come to therapy and don’t seem especially devoted to the effort. If they have gone to the trouble of first accessing AI and conferring with AI on their status, it could be an indicator that they are quite serious about the therapeutic process.
Another plus is that the AI might have opened their eyes to various ways of understanding mental health, particularly concerning their own situation. The AI has helped make their mind more pliable. When the AI told them this or that, it could have kick-started the wheels in their noggin and been of immense help to the therapist. A closed mind is often the biggest stumbling block on the path to constructive therapy.
If the AI has provided sensible insights, a therapist can quickly lean into those elements and proceed accordingly. The AI has set the stage. The therapist can launch from that starting point.
And, in that case, you could pleasantly argue that some of the aura of the wonderment of AI might spill over to the therapist. In essence, when the AI says one thing, and the therapist says the same thing, the client might feel proud that their selected therapist “knows as much as AI does” and therefore is a reliable and well-versed therapist.
All in all, this AI usage then seems to be nearly a godsend.
Not All Is Cheery And Bright
I promised you that I would also cover the downsides of AI being used in this manner. Get yourself ready for some ugliness. Sorry, but it is a necessary consideration.
First, a client might become utterly anchored to whatever the AI has said. No matter how hard the human therapist tries to explain why the AI has gone astray, some clients will stubbornly cling to the AI. The AI is the AI. AI is never wrong. Ergo, the therapist must be wrong if they disagree with AI. Period, end of story.
Second, a client or even a prospective client will likely ask why in the world they should pay to see a human therapist if the AI is otherwise telling them the same thing that the therapist is saying. Unless the therapist can demonstrate added value, the AI is going to be the winner-winner chicken dinner. Avoid the costly fees of a human therapist and relish the free use of AI to delve into your mental health needs.
Third, a therapist can appear to be unduly defensive when refuting the wisdom of AI. A client might not be able to discern whether the AI is right or the therapist is right. All they might sense is that the therapist seems to be rejecting the AI. Why are they rejecting the AI? Maybe the therapist is desperately aiming to keep the flow of fees coming in the door. Naturally, the therapist would fight tooth and nail to make AI seem like a loser that should be abandoned.
Sadly, the fourth of my listed adverse aspects is that therapy sessions can devolve into all-consuming, convoluted debates about AI, rather than focusing on the actual needs of the client. The beguiling trouble is that this can happen in successive sessions. Each session manages to get bogged down in a point-counterpoint of what the AI said versus what the human therapist is saying.
Not good for the client and not beneficial to the therapy.
What Therapists Should Not Do
Some narrow-minded therapists are likely to tell their clients that they should not be using AI for mental health guidance, and that if a client does so, the therapist refuses to discuss it. Anything that AI has said is off-limits and off the table.
Just put the AI aside and keep it entirely out of the session room.
I dare say that this type of radioactive attitude about AI is probably going to be the business death knell for such therapists. You can bet that people are going to make use of AI for mental health inquiries, regardless of whether their therapist deems it worthy or not. The odds are that people will merely hide their AI usage. Instead of saying that the AI said this or that, the person will tell the therapist that they came up with an idea of their own volition and want to discuss it with the therapist.
Worse still, the client will resort to more egregious lying. The client will make up a story that supposedly a friend or stranger told them about some mental health aspects. The reality is that it was the AI. But, since the therapist has summarily banned AI, the next best thing is to lie and claim that a third party told them whatever the AI actually stated.
The bottom line is that a head-in-the-sand approach by therapists is a losing gambit. Clients will get themselves into an awkward bind, pushed there by the therapist. The therapist will perhaps believe they’ve quashed the whole AI fervor, but it has instead gone underground.
Gradually, word will spread that the therapist is anti-AI. Many modern-era prospective clients won’t want to see such a therapist. Existing clients will perhaps begrudgingly accept the premise, though, as noted above, they will find ways to circumvent it.
What Therapists Should Do
At the risk of sounding smarmy, the first step for contemporary therapists is to recognize the problem at hand. I’m sure they’ve given similar sage advice from time to time. Define the problem, then work toward a solution. An ill-defined or undefined problem means that finding solutions is rudderless.
The pressing issue is that people are using generative AI and LLMs to get mental health advice. Do not tilt at windmills. AI is here. AI is going to get more entrenched.
Accept your fate.
Next, therapists need to be familiar with AI and be ready to speak about AI from the heart.
You see, talking about AI conceptually is insufficient. An astute therapist will use AI so that they know what it does and doesn’t accomplish. They will know this first-hand. If a therapist relies only on what they’ve been told or read here or there, that will likely be unconvincing to their clients. Authenticity goes a long way in the human-to-human connection of the therapist-client relationship.
The biggest step for therapists, and one that few have mastered, entails recognizing that the traditional therapist-client dyad is morphing into a therapist-AI-client triad; see my in-depth discussion at the link here. It goes like this. Rather than having clients wander aimlessly in their use of AI, the therapist directly incorporates AI usage into the therapeutic process.
Take the bull by the horns. Make AI an element of your practice. Do so intelligently. If you add AI and don’t do so adroitly, you are bound to make things worse. Dovetail AI cautiously and smartly, combining the upsides of AI usage with the vital benefits of having a human therapist conduct the therapy.
Get Onboard Soon
The gist is that if AI is used by a therapist in a sensible, service-boosting manner, that’s a crucial talking point to bring up with clients. A client might then perceive the AI usage in quite a favorable light. The therapist is seeking to provide the best feasible therapy and is dipping into AI to try to ensure that such a goal is achieved.
As I’ve repeatedly stated, a therapist ably armed with AI can indubitably outdo therapists who shun or hide from contemporary uses of AI. My prediction is that we will move step-by-step away from the traditional therapist-client dyad and rapidly into the world of the therapist-AI-client triad.
A final thought for now.
Albert Einstein artfully stated the importance of acknowledging and coping with change: “The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.” This applies especially to those therapists who are currently opposed to AI usage for mental health or who are dragging their feet about it. The world is changing. Fast.
Thinking needs to change accordingly.