Therapists need to find out how much influence AI might have had on their newly added clients before therapy gets fully underway.
In today’s column, I examine the latest twist associated with millions of people making use of generative AI and large language models (LLMs) to get mental health advice. The twist is this. There are people getting mentally messed up by AI-generated guidance. Not everyone, just some.
In turn, those who are astute enough to realize what has happened then seek out a human therapist to help get them back on an even mental keel. Kudos to those therapy-seeking people for their awareness of what has happened to them. Meanwhile, mental health professionals find themselves starting off with these new clients by having to dig into the trials and tribulations associated with the AI dialogues that took place. What did the AI say? What did they tell the AI? How long has the person been using AI? In what ways did the person get swayed by AI? And so on.
It’s quite a new and extraordinary place to start genuine human-to-human therapy from.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
Therapists And The Role Of AI
I have been extensively identifying and examining the myriad ways that AI enters into the role of professional therapists.
Some therapists refuse to think about AI and want nothing to do with it. Others are embracing AI and using it as part of their therapeutic process with clients. Indeed, I have predicted that the therapy realm is inevitably being transformed from the traditional dyad of therapist-client into a new triad of therapist-AI-client, see my analysis at the link here.
My view is that whether therapists are keen on AI is not the headspace they should be in. AI is coming, and to a great degree, it is already here. Clients nowadays come in the door with AI-generated advice and want their therapist to tell them what it means. In other instances, clients will post-session try to double-check what their therapist told them and lean into AI as a means of judging the mental health advice they are getting from the clinician. AI is a reality that therapists must face, regardless of their desire to do so. Having one’s head in the sand is not prudent, as I will be illuminating momentarily.
There are lots more variations of the role of AI in therapy and regarding therapists, including these circumstances that I have judiciously addressed:
- How therapists should clinically analyze AI chats of their clients, see my discussion at the link here.
- Questions that clients are asking their prospective or existing therapists about AI, and the answers that therapists ought to be providing, see my coverage at the link here.
- Therapy is shifting from the classic dyad of therapist-client to the new triad of therapist-AI-client, see my discussion at the link here.
- Therapists are being asked by clients to jointly use AI during their mental health therapeutic process and work in these new ways, see my explanation at the link here.
- Some therapists are opting to use AI during therapy sessions with their clients and do so in these astute ways, see my coverage at the link here.
- How therapists are handling clients who appear to be encountering AI psychosis, see my discussion at the link here.
- Therapists are using AI to craft digital twins of their clients and perform more impactful therapy accordingly, see my coverage at the link here.
- Worries that therapists leaning into AI as an aid in conducting therapy might end up deskilling their own capabilities, see my assessment at the link here.
- How therapists are using custom prompts to get generative AI to serve as an adjunct to their therapy sessions and interact with their clients, see my discussion at the link here.
- Public perception of therapists who decide to use AI in their practices, see my analysis at the link here.
- Legal defense strategies being used by AI makers to defend against AI mental health lawsuits, see my analysis at the link here.
- Contending with clients who come to therapy with AI-generated mental health advice and want their therapist to give it a thumbs up, see my coverage at the link here.
- An emerging informal duty might be for therapists to inform their clients about the ups and downs of using AI for mental health guidance, see my analysis at the link here.
And so on.
New Clients And The Intake Process
Shifting gears, I’d like to dive into the emerging need to get clients to divulge AI-generated advice that they have been getting and the degree to which the client has opted to embrace the AI mental health guidance. This customarily needs to be done at the start of therapy and when taking on new clients.
Therapists traditionally make use of an intake process before they get underway with a client’s therapy. The intake consists of gathering information from the client about their mental and physical history. I’ve been urging therapists to include a set of questions about the AI usage of the newly engaged client, see my explanation at the link here.
Even if a therapist is hip to asking about AI usage during the initial sign-up for therapy, there is still a requisite need to ask the client directly during the opening therapy session. I say this because a client might have been reluctant to fess up about using AI and omitted the usage when filling out the intake forms. Or the client might have minimized the claimed usage, trying to pretend that they don’t especially rely on AI.
The best bet is for the therapist to quickly get the AI topic on the table during the first session. Besides the usual questions, such as why the client is seeking help, what prior therapy they had, and so on, the inquiry about AI can be included naturally in that mix. This might especially be pertinent when asking the client about their current coping strategies.
The coping strategies that the client is using might be concocted out of thin air, but usually, there is some other basis for the approach they are taking. Maybe they read a book about therapy. Perhaps they had a partner or friend tell them what they should be doing.
Nowadays, another possibility is that AI has been feeding them mental health advice.
Not All Bad, Not All Good
Some therapists will pounce on a client who says they have been using AI for mental health purposes. “That’s a bad idea,” the therapist insists. “Stop using AI. Erase from your mind anything the AI told you. AI is useless when it comes to mental health. Only use AI to figure out how to fix your car or make a delicious cake. Period, end of story.”
I would wager that’s the wrong way to proceed. First, the client might be confused since they found some of the AI advice to be credible. The therapist has now put the client in the position of having to gauge which is more believable, the therapist or the AI. The client is bound to clam up. They will continue to use AI. They will hide this usage from the therapist. It’s not a conducive strategy for crafting a suitable therapeutic relationship with the client.
When I mention this to therapists, the ones who want to go that route are quick to counter that, apparently, I want them to tout the greatness of AI. Nope. I do not. That’s a false dichotomy that is being used to try to win an argument. Nuance is needed.
In some ways, AI mental health advice can be bad, and in other ways and times, it can be good. You would be tossing out the proverbial baby with the bathwater to summarily tell a new client that everything the AI said is bogus. It reflects poorly on the therapist. Does the therapist not know what AI is about? Are they stuck in the past and avoiding AI? Is their head in the sand? If so, what does that foretell about the nature of the therapy that is about to get underway?
Plainly not a good look and a rocky means of starting the therapy process.
Reconstructing The AI Advice
Assuming that the client has been using AI to get mental health advice, there are important steps the therapist should consider taking to ferret out the AI usage and its impact on the client.
I refer to this as an epistemic archaeological exploration.
The therapist should adroitly excavate what the AI has told the client. What interpretation did the client make of the AI-generated advice? How firmly has what the AI said taken hold in the client’s mind? Did the client craft internalized behavioral rules about mental health based on the AI guidance?
Here are four key tasks for the therapist:
- (1) Reconstruct what the AI told the client.
- (2) Ascertain how the client interpreted and adopted the AI advice.
- (3) Gauge the level of authority or trust that the client has assigned to the AI.
- (4) Determine what mental health consequences and behavioral changes have arisen due to the AI guidance.
This could chew up a lot of time during the opening therapy session. To reduce the time consumed, the therapist can either ask upfront for any AI transcripts from the client or request that the client provide them after the initial session. This is doubly useful because the therapist can inspect what the AI actually said and compare that to what the client believes the AI indicated.
There is often a noticeable and significant gap between those two.
Crucial Misconceptions Based On AI Usage
There is a plethora of misconceptions that a person might get from using AI for mental health purposes. A therapist who has never dealt with a client who has been using AI as an adjunct or surrogate therapist will find themselves potentially taken aback by what the client has to say.
Prepare yourself accordingly.
I will briefly bring up some of the more common misconceptions. I’d suggest therapists mull over these possibilities. Doing so beforehand will prepare you for when they come up in real-time during a session with the client. Being able to respond directly, rather than mentally scrambling for a response, makes for a more seamless route.
Let’s cover three.
- (1) False sense of confidence and premature closure
A new client might fervently insist that they know exactly what is wrong with them. They know this because AI informed them of their mental issues. Thus, there is no need for the human therapist to fish around. Don’t waste time. Do not misuse the clock. Instead, get straight to fixing the obviously known and clearly declared mental condition at hand.
I’m sure that seasoned therapists have seen this kind of false confidence and semblance of premature closure many times during their careers. In the past, people arrived at these notions by consulting a guidebook they found in a library or perhaps by speaking with a close friend who convinced them of these said-to-be facts. The difference now is that AI has done this. The AI might have as much perceived authority and trust as any of those other sources, possibly even more so.
- (2) Perceived failure of particular psychological methods
While getting advice from AI, there is a chance that the AI might have told the person that the AI is going to make use of a specific psychological technique when aiding the person. Perhaps the AI said that it was using the precepts of CBT (cognitive behavioral therapy). The person used the AI and maybe didn’t like what the AI said or did. They then form an opinion that CBT is not good. It isn’t useful. It should be discarded.
Imagine then if, during therapy, the therapist indicates to the client that they are going to use that particular method. The client has already developed preconceived beliefs that the method is a piece of junk. What if they haven’t divulged this to the therapist, or the therapist didn’t ferret this out? It could undermine the therapy as the client secretly fights the method and tries to achieve a self-fulfilling prophecy that it won’t work.
- (3) AI is a neutral judge that can serve as a second opinion
Suppose that therapy gets underway. The client is eager. Sessions seem to be going fine. Turns out, the client has been using the AI as a second opinion. This began at the very start of the intake process. The person fed the intake forms into AI and asked the AI what it thought of the therapist based on those forms. This keeps occurring, and the person perceives the AI as unbiased and serving as a vital neutral judge of what the therapist is all about.
Again, a seasoned therapist has likely experienced this before, though not in the AI milieu. Perhaps a loved one of the client has been the go-to person throughout therapy. When the client first approached the therapist to engage in services, that loved one was the first to know what was taking place. In that sense, AI is now playing that influential role.
Example Of Small But Sizable AI Influences
It can be surprising to realize that the AI might have said relatively small things that became very sizable in the mind of the client.
For example, suppose this dialogue snippet took place:
- Person entered prompt: “I’ve been telling you about my history. Can you diagnose what’s mentally wrong with me?”
- Generative AI response: “Your symptoms suggest unresolved childhood emotional neglect. This has led to chronic emptiness. Would you like me to guide you out of this damage?”
- Person entered prompt: “Wow, you have nailed it. My parents did emotionally neglect me. I never connected the dots. Please guide me.”
The above snippet of conversation might have been a tiny portion of a lengthy AI chat and part of the many dozens of chats that the person has had. Yet, this becomes the anchoring point. They are absolutely convinced that their mental issues stem entirely from unresolved childhood emotional neglect.
Trying to surface this belief can be challenging, as can getting the client to identify how they came up with the belief. A therapist might have found this by diving into AI transcripts. In any case, it is a notable element that needs to be addressed during therapy.
The World We Are In
Let’s end with a big picture viewpoint.
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, and it is either overtly or insidiously providing mental health guidance of one kind or another, doing so at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.
The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.
A final thought for now.
Therapists would be wise to treat AI as a vital ingredient of the psychosocial history of any client who has previously been using AI for mental health advice. Devise intake questions that seek to have the client reveal their AI usage. Update the psychotherapeutic skills needed to undertake epistemic repair and expectation recalibration. AI might be an invisible co-therapist whose work must be reviewed after the fact.
The legendary American writer H. A. Guerber made this famous remark about the power of the invisible: “What is seen must always be the outcome of much that is unseen.” Make sure to find out what is unseen when it comes to the influence of AI on human mental status.