Therapists Becoming Deskilled By Relying On AI To Do The Bulk Of Mental Health Therapy For Clients

In today’s column, I examine whether therapists who utilize generative AI and large language models (LLMs) as a therapeutic tool to assist their clients and patients are inadvertently undercutting their own mental health advisory prowess. There is a hunch floating around that perhaps therapists are becoming deskilled due to reliance on AI. This is presumably happening in a subtle and non-obvious fashion.

Therapists valiantly seeking to leverage modern AI as a powerful adjunct to their service offerings are supposedly, and unknowingly, falling into the trap of losing their grip on core therapeutic skills, leading to a precipitous decay in their mental health guidance acumen.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Therapists And AI Usage

Many therapists and mental health professionals are opting to integrate AI into their practices and overtly use the AI as a therapeutic adjunct for their clients and patients (see my coverage at the link here).

Even those who don’t go down the route of incorporating AI are bound to encounter clients and patients who are doing so. Those clients and patients will often walk in the door with preconceived beliefs about how their therapy should go or is going, spurred and prodded by what AI has told them.

In this sense, one way or another, therapists and mental health professionals are going to be impacted by the nonstop advances in AI. Right now, there are around 700 million weekly active users of ChatGPT, and hundreds of millions more for competing AIs such as Claude, Gemini, Llama, etc. Importantly, the most prominent use of generative AI is for mental health advisement, see my coverage of the AI usage rankings review at the link here. Estimates suggest we might soon have billions of people worldwide tapping into AI for therapy insights and guidance, see my assessment at the link here.

People want their therapist to be up to date. AI is here. Everyone can see this. Many would rather choose a therapist who acknowledges the emergence of AI and guides clients in how to use it sensibly and suitably. The prudent perspective is that therapists desiring a fruitful future are going to realize that they are better off embracing AI rather than fighting it.

For more ins and outs, see my analysis at the link here.

Questions Of Human Deskilling

Let’s shift gears and consider the recent angst that the use of AI is allegedly leading to a deskilling of doctors in various specialized medical domains. I will then tie this facet to the realm of therapy and mental health professionals.

A newly released research study has caused quite a stir by asserting that there is deskilling taking place in the medical profession due to AI. The study entitled “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicenter, observational study” by Krzysztof Budzyń, et al, The Lancet: Gastroenterology and Hepatology, August 12, 2025, made these salient points (excerpts):

  • “It is not known if continuous exposure to artificial intelligence (AI) changes endoscopists’ behavior when conducting colonoscopy.”
  • “We conducted a retrospective, observational study at four endoscopy centers in Poland taking part in the ACCEPT (Artificial Intelligence in Colonoscopy for Cancer Prevention) trial. These centers introduced AI tools for polyp detection at the end of 2021, after which colonoscopies had been randomly assigned to be conducted with or without AI assistance according to the date of examination.”
  • “We evaluated the quality of colonoscopy by comparing two different phases: 3 months before and 3 months after AI implementation.”
  • “The primary outcome was the change in adenoma detection rate (ADR) of standard, non-AI-assisted colonoscopy before and after AI exposure. Continuous exposure to AI might reduce the ADR of standard non-AI-assisted colonoscopy, suggesting a negative effect on endoscopist behavior.”

Allow me to briefly noodle on the nature and results of that study.

Whereas prior research had indicated that AI boosted endoscopists’ detection rates, this latest study came to a nearly opposite conclusion. The methodological approach was essentially this: clinicians gain continuous access to AI assistance, and then perform the same task without it. If their detection rate drops when the AI is absent, the question arises as to why that would happen.

One claim is that the workers became deskilled.

They had relied on the AI to do their hard work for them. This led to a decay in their prowess. Thus, when the AI was taken away, they could no longer perform at the levels they had attained before the AI usage.

Please know that there is controversy associated with the intriguing analysis, and various alternative viewpoints and interpretations are being bandied around.

Analogies Abound

A cogent argument in favor of the conclusion that deskilling has occurred is that the situation would be seemingly analogous to the common use of GPS mapping systems. People previously used paper maps and had to do the mental exertion of figuring out which roads to drive to get to their desired destination. GPS comes along, and people can just tell the AI to devise a route for them.

No longer is there a need to strenuously think about choosing roadways and arduously laying out a planned route. The AI does that for you. Easy-peasy. The argued outcome is that people have deskilled themselves into no longer being proficient at mapping out driving routes. They have generally lost that skill.

That certainly seems like a compelling argument.

Not everyone is necessarily convinced of the logic involved and whether this all translates into other realms. For example, you might note that GPS navigation is primarily a visually oriented task, as is the endoscopic work in the above-noted research study. Perhaps the deskilling stems from the erosion of visual skills and does not extend to tasks that are not visually based.

It could be that an overgeneralization is taking place, one that extends beyond the scope and nature of the study involved.

I’m not going to try and sort that out here and will, for the moment, go along with the idea that perhaps deskilling of specialized skills can arise by becoming reliant on AI. We shall take that assertion at face value for purposes of exploring the rather heady topic.

Therapist Deskilling At Issue

Envision that a therapist has assigned the use of AI to their clients and patients, doing so as an adjunct to the mental health treatment underway. The clients and patients can access the AI during the in-between times of whenever the human-led sessions occur. If someone suddenly needs a shoulder to lean on, they log into the AI. It might be midnight and otherwise well beyond the conventional working hours of visiting their human therapist.

At some point, a client or patient comes to one of the sessions with the human therapist and says that the AI gave this or that guidance.

The therapist might readily acquiesce to whatever the AI had to say. The therapist is allowing themselves to become passive in the act of therapy. It is as though they are now a supervisor over the AI use for therapy, simply giving a nod to what the AI has stated. The therapist’s own diagnostic thinking is no longer being meaningfully applied.

Assume that this happens on a regular basis, time after time. The therapist inches toward the less mentally taxing role of supervising treatment. They don’t engage their true therapeutic acumen. Step by step, this disuse causes a decay in their skill base. Without consciously realizing what has been taking place, they find that their core competency has ominously degraded.

Boom, drop the mic.

Not A Foregone Conclusion

Does this insidious form of therapist acumen loss have to occur?

Nope.

Those who summarily proclaim that therapists are doomed to lose their guidance prowess due to AI are making a shaky and generally false accusation.

First, to clarify, can it happen? I would say yes, this seems a real possibility. If a therapist takes on more and more clients because they believe that leveraging AI allows them to do so, the therapist could readily come to rely increasingly on the AI. This might be due to sheer volume and a lack of the human bandwidth needed to suitably serve an enlarged base of clients and patients.

There is also the silent usurping that a therapist might not discern is taking place. In essence, sessions devolve into mainly discussing what the AI told the client. Therapy is taking a backseat. The client has AI on their mind. The therapist is somewhat undercutting their role in driving the therapy sessions. That’s another real possibility.

The good news is that an astute therapist doesn’t have to fall into this trap.

Staying Out Of The Quicksand

Therapists who opt to leverage AI will need to be mindful of several key aspects.

They must prod themselves to remain fully engaged throughout the treatment, especially during human-to-human sessions with their clients and patients. Do not allow the AI usage to overshadow the therapy. Don’t let the client consume the precious face-to-face time with undue focus on AI. Keep focused on the therapy at hand.

A handy way to catch yourself before your therapy prowess degrades is to always carefully review your session notes. Most therapists already do a review of their notes. But they aren’t likely to be looking for signs that they have done less therapy or become distracted by the AI sidelines.

The other crucial aspect is to keep the upkeep of your therapy skills a top priority. In that sense, even if you at times become more passive during some of your sessions, you can still boost your skills outside of the sessions. For each inch of potential skill degradation, aim to maintain and advance your skills by an inch or more.

In short, make sure these three pieces of advice are at top of mind:

  • (1) Keen awareness and action. Therapists need to be aware that using AI as a therapeutic adjunct could allow a degradation of their mental health skills, and they ought to make overt and explicit efforts to prevent that decay from happening.
  • (2) Review session notes and self-reflect. Therapists should use their session notes as a means of detecting whether they are silently falling into the trap, and self-reflect on how to contend with the issue.
  • (3) Explicit upkeep of skills. It is always a good practice to maintain and upkeep your therapeutic skills, which perhaps becomes even more vital in light of the advent of AI for mental health guidance.

The Future Is Near

I’ve repeatedly emphasized that we are inexorably moving from the classic therapy-client dyad to becoming a new triad of therapist-AI-client, see my detailed discussion at the link here. The inclusion of AI is not a free lunch.

Therapists need to judiciously understand how to balance AI in the emerging triad. AI has big benefits in this context. AI also has many gotchas and downsides.

When I mentioned that therapists need to ensure that their mental health prowess is maintained and even enhanced, there is a bit of irony that this can be done via the proper use of AI. Therapists can use AI as a means of upping their therapeutic skills. For example, having an LLM take on a persona as a simulated patient can be a notably useful and risk-free way of exercising the mental muscles of therapy, see my coverage at the link here.

The Deskilling Falsehoods

A final thought for now.

Do not let the ill-informed convince you that we must ban the use of AI by therapists due to claimed deskilling. The use of AI in nearly all contexts is a dual-use conundrum. It can be of immense advantage. It can be poorly managed and lead to undesired results.

Leverage the upsides, mitigate the downsides.

As Plato famously stated: “The first and the best victory is to conquer self.” This is still monumentally valid, including and perhaps especially as AI becomes ubiquitous in all facets of our lives, professionally and personally.

Source: https://www.forbes.com/sites/lanceeliot/2025/09/20/therapists-becoming-deskilled-by-relying-on-ai-to-do-the-bulk-of-mental-health-therapy-for-clients/