Asking Your Therapist To Confer With The AI Chatbot That’s Giving You Off-The-Cuff Mental Health Advice

In today’s column, I examine an interesting and controversial new twist associated with mental health therapists and their clients. The deal is this. Clients are increasingly making use of generative AI and large language models (LLMs) to obtain mental health guidance. A client then opts to discuss the AI-dispensed advice with their human therapist.

The surprising step now emerging is for the client to ask their therapist to directly use the client-chosen AI chatbot. In other words, rather than merely relaying to the therapist what the AI has been saying, the client asks the therapist to log into the AI and confer with it directly.

Is this a good idea or a bad idea, and what should therapists do when asked to proceed in this manner?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

People Are Using AI For Mental Health Advice

The most popular use of the major LLMs nowadays is getting mental health guidance, see my discussion at the link here. This can be undertaken quite simply, at a low cost or even for free, anywhere and 24/7. A person merely logs into the AI and engages in a dialogue led by the AI. Generic LLMs such as ChatGPT, Claude, Gemini, Llama, and Grok are commonly used in this manner.

There are sobering worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Huge banner headlines in August of this year accompanied a lawsuit filed against OpenAI for its lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still plenty of downside risks of the AI doing untoward acts, such as insidiously helping users co-create delusions that can lead to self-harm.

For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards. Lawsuits aplenty are arising. In addition, new laws about AI in mental healthcare are being enacted (see, for example, my explanation of the Illinois law, at the link here, the Nevada law at the link here, and the Utah law at the link here).

The Therapist-AI-Client Triad

I have repeatedly noted that the classic dyad of therapist-client is gradually transforming into a triad of therapist-AI-client. One way or another, AI is getting intermingled into the sacred therapist-client combo. For more on the new triad, see my discussion at the link here.

The most common way AI enters the therapist-client relationship is via clients who use AI to get everyday mental health guidance, doing so outside the purview of the therapist. The person logs into ChatGPT or a similar major LLM and tells the AI how their day is going. This is followed by a brief chat about any mental aspects the person is worried about.

How does this human-AI chat then get intertwined into the therapist-client relationship?

Easy-peasy, the client comes to see their therapist at a regularly scheduled session and tells the therapist what the AI has been advising over the course of the last week or since the last visit. The person might tell their therapist that the AI recommended taking up meditation. Does the therapist agree with that AI-based recommendation? The client wants to see whether the human therapist is okay with the AI-generated advice or perhaps disagrees with it.

Therapists With Heads In The Sand

Some therapists reject any discussion about AI while the therapeutic session is underway. They insist that AI has no role or significance when it comes to human-to-human therapy. Overall, these doubting therapists eschew AI and often instruct their clients to avoid using AI for any kind of mental health guidance whatsoever.

I believe that those so-inclined therapists are eventually going to find themselves faced with a tough choice. Here’s what it is. If they pretend that AI doesn’t exist or doesn’t matter, clients will likely be using AI behind the therapist’s back. This is bad for the therapist-client relationship. In theory, therapists are supposed to view their clients holistically, and thus, in this circumstance, they are forcing clients to hide material facets of their lives from the therapist.

Also, clients energetically want to discuss AI with their chosen therapist. Therapists who won’t entertain any discussion about AI are going to have clients who defect to other, more AI-savvy therapists. Furthermore, prospective clients who are seeking a human therapist will also realize that such an AI-averse therapist is out of step with the real world, in the sense that the prospective client wants AI to be part of the therapeutic process. For more details about the prudent ways that therapists should be discussing AI with their clients, see my analysis at the link here.

From The Horse’s Mouth

Clients who are apt to discuss AI-generated advice with their therapists are now starting to push this trend a step further. It has to do with wanting the therapist to hear or learn about the advice directly from the horse’s mouth, so to speak.

A client will gingerly ask their therapist whether it might be possible to have the therapist interact directly with the AI that the client has been using. For example, the client might be using GPT-5 regularly and getting AI-generated mental health advice. The client has been relaying the AI advice to the therapist, either by emailing the therapist or by recounting it during face-to-face sessions.

An issue that often arises is that since the therapist is being informed about the AI-generated advice on a second-hand basis, namely via the client telling the therapist, this leaves the therapist in a bit of a lurch. The client might be mistakenly interpreting the AI advice. The client might be intentionally distorting the advice. All in all, the therapist is only guessing whether the client is suitably conveying what the AI actually said.

Therapists And The AI Absence Spiral

Another issue with getting the AI advice on a second-hand basis is that the therapist has no ready means to question or follow up on the alleged AI-generated advice.

Suppose the AI has told the client to undertake meditation. The client tells the therapist about this recommendation during a session. The therapist then begins to explore the recommendation with the client and asks the client various pointed questions. Why did the AI recommend meditation? What is the basis for the recommendation? What kind of meditation? How often? And so on.

“Don’t know,” says the client.

The client might be clueless about those distinctions. All they know is that the AI recommended meditation. They didn’t pursue the details with the AI. Thus, the therapist is not going to get very far in terms of exploring why or how the AI opted to generate the claimed recommendation.

Imagine what might happen next (here’s the kicker). After the session with the human therapist finishes, the client rushes to log into their AI and starts asking the same questions that their therapist mentioned. Those answers are then held tightly by the client until they next see the therapist. Sure enough, the entire next session is consumed with the client relaying what the AI said. But, meanwhile, the therapist has new questions and wants to understand more about what the AI is getting at.

This becomes a rinse-and-repeat cycle, making little progress in therapy and instead endlessly using up session time discussing what the AI said at each iteration.

Therapist And AI Meet With Each Other

To some degree, you might liken this circumstance to a client relying upon and relaying the advice given to them by a family member or other trusted person.

Therapists deal with that aspect all the time. It is quite common for a client to bring up that this person or that person has told them to do this or that. The therapist must then weigh the nature of the reported indications and figure out what to do about the third-party advice.

Sometimes, a therapist will end up meeting with the third party, based on the client wishing this to happen and the therapist believing that doing so will make the therapeutic process more effective. You can make the case that the AI is in the same boat. The AI has the ear of the client. The therapist might as well interact directly with the AI.

There are other reasons that this might be useful to do.

What if the client is making up the claim that the AI gave this or that advice? You see, the client might be devising the advice on their own and then portraying it as AI-generated so that the therapist will give the advice more credence. If the client is willing to have the therapist dip into the AI, this would be a potential eye-opener for the therapist about their client and the mental considerations involved.

Giving The Therapist Access To The AI

One means of allowing the therapist to access the AI would be to have the client hand over their login information. The therapist could then access the account and readily see the chats that the client has been having.

How far should a therapist proceed?

The simplest path would be to inspect the conversations that the client has undertaken with the AI. What did the AI say? What did the client say? How are they interacting with each other? What does this signify about the client? What does this signify about the AI and the advice that the AI is dispensing to the client?
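To make that inspection concrete, here is a minimal sketch of how a reviewer might skim a client’s exported chat history, assuming the client used ChatGPT’s built-in data export and chose to share the resulting conversations.json file. The field names reflect the export format as of this writing and could change; treat this as illustrative, not a vetted clinical tool.

```python
# Illustrative sketch: skim an exported ChatGPT history (conversations.json).
# Assumes the client exported their own data and chose to share the file;
# field names may differ across export versions.
import json

with open("conversations.json", encoding="utf-8") as f:
    conversations = json.load(f)

for convo in conversations:
    print(f"\n=== {convo.get('title', 'Untitled')} ===")
    # Note: mapping order is not guaranteed chronological; sort messages
    # by create_time if a faithful timeline matters.
    for node in convo.get("mapping", {}).values():
        message = node.get("message")
        if not message:
            continue
        role = message.get("author", {}).get("role", "unknown")
        parts = message.get("content", {}).get("parts", [])
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            print(f"[{role}] {text[:200]}")  # first 200 characters per turn
```

One advantage of the export route is that the client decides what to hand over, rather than surrendering their login outright.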

A more complicated path would be for the therapist to actively engage the AI in dialogue. The therapist might ask the AI questions regarding the client. How long has the AI been providing advice to the person? What is the basis for the recommendations, such as the suggestion that the client undertake meditation?

The Thorny Brush Of Entanglement

There is a bit of a problem with this interaction between the AI and the therapist. Actually, it is a rather thorny problem.

The therapist is now going to be on record as using the AI. Does the usage imply that the therapist is endorsing the use of the AI for mental health purposes? Is the therapist now responsible for the client becoming further reliant on the AI? Etc.

Many additional twists ensue. The therapist might inspect chats that have nothing to do with any mental health particulars. Is that an intrusion on the client’s privacy? Well, it depends, you might retort. If the client has already granted permission to access the AI, everything is fair game. Really? What if the therapist learns about an upcoming company merger by looking at the chats? Is that still fair game?

Consider too the privacy issues.

There are codes of conduct and legal aspects associated with therapist-client confidentiality (see my coverage at the link here). If the therapist is entering client-related aspects into the AI, the therapist is potentially breaching the revered therapist-client confidentiality. How so? Keep in mind that the AI makers indicate in their licensing agreements that the use of their AI is not bound by HIPAA or other privacy enactments. The AI makers usually reserve the right to inspect whatever a user enters into the AI. On top of this, they normally state in the licensing that they can use any entered prompts to further data-train the AI.

You can plainly discern that a lot of weighty questions arise.

Other Nuances To Consider

Besides a client handing their login to the therapist, there are other ways to arrange the AI access. For example, some LLMs are now allowing a second user to have allied access to an AI account. The primary purpose involves a parent or guardian serving as an overseer of an AI account used by a minor. This type of mechanism is likely to be further expanded and could provide an alternative to the outright sharing of a single account.
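No major LLM provider exposes exactly this capability for therapists today, so the following is purely a hypothetical sketch of what scoped, revocable allied access could look like: the client designates which conversations a therapist may read, and can revoke the grant at any time.

```python
# Hypothetical sketch only: a data structure for scoped, revocable
# "allied access" -- the client grants a therapist read-only visibility
# into specific conversations. No vendor offers this exact API today.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional, Set

@dataclass
class AlliedGrant:
    grantee_id: str                  # e.g., the therapist's account ID
    conversation_ids: Set[str]       # only these chats are visible
    read_only: bool = True           # the therapist cannot chat as the client
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        """Client-initiated revocation; takes effect immediately."""
        self.revoked_at = datetime.now(timezone.utc)

    def can_view(self, conversation_id: str) -> bool:
        return self.revoked_at is None and conversation_id in self.conversation_ids

# Usage: the client shares two mental-health chats, then later revokes access.
grant = AlliedGrant(grantee_id="therapist-123", conversation_ids={"chat-7", "chat-9"})
assert grant.can_view("chat-7") and not grant.can_view("chat-2")
grant.revoke()
assert not grant.can_view("chat-7")
```

The design point is that access is narrow and reversible, unlike handing over a password, which grants everything and is awkward to claw back.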

Any therapist contemplating accessing the AI of their client should confer with the legal counsel associated with their therapy practice. It will be important to ensure that the client explicitly acknowledges the allowed usage and provides appropriate waivers. There is also the issue that AI makers typically stipulate that an account is to be used by one user only; ergo, the therapist might be violating the AI maker’s terms of service.

Additional legal and policy intricacies are involved.

Therapists often create their own notes about the discussions they have had with their clients. How might this be done if the therapist is carrying on chats with the AI? You might contend that the therapist can merely log in to see those chats, but if the account is under the auspices of the client, the client could deny further access. It’s a dilemma.

A few wild variations of what can happen are worth mulling over. Imagine that a therapist becomes preoccupied with using the AI that a client has granted access to. The therapist could lose sight of the reason they are using the AI. The focus should be on client-specific matters. Venturing beyond that is worrisome. Plus, again, the AI will retain a digital record of what the therapist did while using it.

Bottom-Line And Heads-Up

There are clearly risks associated with accessing the AI of a client. Let’s also consider some of the benefits.

A therapist could discern the degree of reliance or dependency that the client has on the AI being utilized. The therapist might gain a stronger understanding of the client’s mindset by inspecting the AI dialogues. Handily, the therapist would potentially catch any disturbing signs that the client has fallen into a mental trap of delusional thinking, as sparked by the AI (see my examples at the link here).

On balance, weighing the ins and outs, I would tend to discourage therapists from going this route. There is a better path that can be pursued. Allow me to elaborate.

The prudent path is for the therapist to set up an AI and have their clients make use of that AI. In my view, that’s the proper means of implementing the therapist-AI-client triad. The AI is brought to the table by the therapist. The client uses an AI that is essentially under the guidance and control of the therapist. Turning this around and using an AI that is “provided” by the client is a much worse option. The downsides exceed the upsides.
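As a rough illustration of what “brought to the table by the therapist” could mean in practice, here is a minimal sketch using the OpenAI Python SDK, assuming a hypothetical practice-owned API account; the model name and system-prompt wording are placeholders, not an endorsement of any vendor or of any particular clinical phrasing.

```python
# Minimal sketch of a therapist-provisioned chat service. The practice,
# not the client, owns the API key, sets the guardrail instructions, and
# retains the transcript. Model name and prompt wording are placeholders.
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a supportive wellness assistant used within a licensed "
    "therapy practice. Do not diagnose or prescribe. Encourage the user "
    "to raise significant concerns with their therapist."
)

transcript = [{"role": "system", "content": SYSTEM_PROMPT}]

def client_turn(user_message: str) -> str:
    """Handle one client message; the practice keeps the full transcript."""
    transcript.append({"role": "user", "content": user_message})
    response = llm.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model would do
        messages=transcript,
    )
    reply = response.choices[0].message.content
    transcript.append({"role": "assistant", "content": reply})
    return reply

print(client_turn("I've been feeling anxious before work lately."))
```

Because the practice holds the transcript, the record-keeping dilemma noted earlier largely disappears, and the confidentiality analysis shifts to the practice’s own agreement with the AI vendor.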

A final thought on this topic for now.

Ralph Waldo Emerson famously made this insightful remark: “Unless you try to do something beyond what you have already mastered, you will never grow.” This is true of therapists. Those who are avoiding AI are impeding their future. AI is here to stay. That being said, make sure to embrace AI in the right manner and place, else you could be putting yourself into untoward territory and undesirable jeopardy.

First, do no harm, including to your cherished therapeutic efforts.

Source: https://www.forbes.com/sites/lanceeliot/2025/11/14/asking-your-therapist-to-confer-with-the-ai-chatbot-thats-giving-you-off-the-cuff-mental-health-advice/