Public Perception Of Mental Health Therapists Who Make Use Of AI In Their Practice

In today’s column, I examine the likely public perception of mental health therapists who opt to make use of AI in their practices. The deal is this. The public at large is increasingly using generative AI and large language models (LLMs) to engage in therapeutic dialogue and garner mental health advice. Meanwhile, many therapists are incorporating generative AI and LLMs into their service offerings.

Will people come to expect that therapists are astutely blending AI into their delivery of mental health analyses, or might people instead be worried and repelled by such use of AI, since it could seemingly undercut the empathetic efforts of the human therapist?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas accompany these endeavors too. I frequently speak up about these pressing matters, including during an appearance last year on an episode of CBS’s 60 Minutes (see the link here).

If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Public Perception Of AI Use By Doctors

A recent research study explored the public perception of physicians who make use of AI in their medical practices. Though the study did not focus on the mental health domain, we can leverage its insights and recast the approach into the mental health space. Ideally, a similar study would be performed that concentrates on therapists and mental health professionals.

I’ll say more about this in a moment.

Let’s see what the physician-oriented study had to say. In a research paper published in the Journal of the American Medical Association (JAMA) entitled “Public Perception of Physicians Who Use Artificial Intelligence” by Moritz Reis, Florian Reis, and Wilfried Kunde, JAMA Network Open, July 17, 2025, these key points were made (excerpts):

  • “Little is known about the public perception of physicians who use AI.”
  • “This online study explored how statements on different types of AI use (diagnostic, therapeutic, and administrative) influence the public’s perception of respective physicians.”
  • Participants were shown fictitious advertisements for family doctors that might be encountered on social media or billboards.
  • “We varied between groups whether the advertisement made no statement on AI use (control condition) or mentioned that the respective physician utilizes AI for administrative, diagnostic, or therapeutic purposes.”
  • “Participants rated the presented physician regarding perceived competence, trustworthiness, and empathy, as well as their willingness to make an appointment with the physician on a 5-point scale.”

I’ll address those major points one by one.
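To make the between-groups design concrete, here is a minimal illustrative sketch in Python of how the ratings from such an experiment could be tabulated. The condition and dimension names simply mirror the bullet points above; the response values are made-up placeholders and are not the study’s actual data or materials.

```python
# Hypothetical sketch of the study's between-groups design (placeholder
# data, not the actual JAMA results): each participant sees exactly one
# advertisement condition and rates the physician on four 5-point scales.
from statistics import mean

CONDITIONS = ["control", "administrative", "diagnostic", "therapeutic"]
DIMENSIONS = ["competence", "trustworthiness", "empathy", "willingness_to_book"]

# Made-up example responses: (condition shown, {dimension: rating from 1 to 5})
responses = [
    ("control",        {"competence": 4, "trustworthiness": 4, "empathy": 4, "willingness_to_book": 4}),
    ("administrative", {"competence": 4, "trustworthiness": 3, "empathy": 4, "willingness_to_book": 3}),
    ("diagnostic",     {"competence": 3, "trustworthiness": 3, "empathy": 2, "willingness_to_book": 2}),
    ("therapeutic",    {"competence": 3, "trustworthiness": 2, "empathy": 2, "willingness_to_book": 2}),
]

# Average each rating dimension within each condition so that the three
# AI-use conditions can be compared against the no-AI-statement control.
for condition in CONDITIONS:
    ratings = [r for shown, r in responses if shown == condition]
    if ratings:
        averages = {d: round(mean(r[d] for r in ratings), 2) for d in DIMENSIONS}
        print(condition, averages)
```

The essential design point is that each participant encounters only one version of the advertisement, so differences in average ratings across the groups can be attributed to the AI statement itself.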

Therapists And AI Usage

First, I would assert that the observation that little is known about the public perception of AI-using physicians applies equally to therapists and the mental health profession.

This is partly because AI usage is only now starting to grow among therapists. I’ve been emphasizing that we are gradually going to see a shift from the conventional dyad of therapist-patient to a triad of therapist-AI-patient (see my discussion at the link here). But we aren’t there yet.

AI usage by therapists is still in its infancy.

We could definitely use robust studies of those public perceptions. They would provide important guidance to therapists. Should therapists be adopting AI rapidly or slowly? How should they communicate to the public about their AI usage? Are certain types of clients or patients more apt to find AI usage desirable, while others would be repulsed by their therapist relying on AI?

And so on.

Types Of AI Usage By Therapists

Note that the research study identified three types of AI usage, namely diagnostic, therapeutic, and administrative. I’m glad that the researchers wisely chose to distinguish how AI usage arises.

This is important for several reasons.

In the case of mental health professionals, we could reasonably assume that if they use AI to aid their administrative tasks, most prospective clients or patients would probably not especially care, instead assuming or hoping that the AI streamlines those kinds of mundane chores.

People would expect that AI usage will reduce costs and free up time for the therapist to devote to performing therapy. Only if clients or patients encountered hiccups on the administrative side, such as booking sessions, handling billing, and other such tasks, might they raise a red flag about the therapist using AI in this portion of their practice.

The primary qualms would likely center on using AI for diagnostic purposes and for therapeutic purposes.

Why would people be concerned?

Suppose the therapist entirely hands over the mental health analyses and therapy activities to AI. A client would rightfully be indignant that they went to the trouble to sign up and pay for personalized human services, only to find that AI is doing all the heavy lifting. The client might as well cut out the middleman, namely, stop seeing the human therapist and use the AI directly on their own.

Therapists would need to articulate a convincing argument that AI is merely an adjunct. It is a tool. The tool will improve the mental health offerings of the therapist. The therapist can work hand-in-hand with AI, having AI available to clients on a 24/7 basis. In that sense, the therapist is still in charge and actively performing therapy. They have chosen to go beyond the normal constraints by adding AI into their mix, sensibly and cautiously, boosting their therapy via AI augmentation.

Public Perception At This Time

The thing is, few members of the public likely grasp that kind of prudent justification or solid basis for why a therapist would include AI in the therapeutic elements of their prized work.

The JAMA research study on physicians highlights the same lack of public awareness of how AI usage is potentially employed. Participants generally rated the physicians described as using AI to be less trustworthy, less competent, and less empathetic. The logic is undoubtedly that an AI-using doctor is presumed to be ceding their medical duties to AI, becoming overly reliant on it, and preoccupied with the tech more so than with interacting empathetically with their patients.

The researchers emphasize that physicians opting to use AI in their medical practices should consider being abundantly transparent about why and how they use AI.

Again, if the AI is solely for administrative purposes, make that a clear-cut distinction. Even that can somewhat backfire if prospective patients worry that a strict AI won’t give them the kind of reasonable, human-like break that an accommodating human administrator might.

I would say that the upcoming generations are more prone to wanting AI usage for administrative tasks. Why so? As digital natives, they are comfortable using technology. They also cannot imagine having to get on a phone call with a human administrator to make arrangements for their medical services. They would far prefer an online system made sophisticated via the cogent use of AI.

Leading Edge For Now

What makes the occupation of therapists a bit different from the presumed perception of conventional medical services is that people are already widely using AI for mental health purposes. There are 400 million weekly active users of ChatGPT, the widely and wildly popular generative AI and LLM from OpenAI. I’ve detailed that a segment of those users are employing the AI for various mental health questions and guidance (see the link here). The same applies to users of Anthropic Claude, Meta Llama, Google Gemini, and other LLMs.

To that extent, a portion of the general public already realizes that AI can be useful in a mental health context.

It isn’t a shocker that a therapist might decide to incorporate AI into their practice. That being said, since this is a relatively new means of conducting such services, the public will initially be hesitant and perhaps skeptical. Sensibly so.

Bona fide questions they are bound to be contemplating include:

  • Has the therapist dovetailed AI usage seamlessly into their services, or just plopped it into their practice without mindful thought on the matter?
  • Will the client still get the requisite personalized attention of the therapist, or might the AI overshadow the expected time and focus of the therapist?
  • Will the AI usage provide sufficient data protection and information privacy, or might the AI collect all manner of personal commentary and allow it to be unduly and improperly exposed publicly?
  • Is the AI usage merely for show and an attempt to increase fees, or does it offer value-added capabilities that will enhance the therapy being conducted?

Therapists who proceed to adopt AI usage will bear the responsibility to explain the why and how to prospective and existing clients. This will be somewhat of an uphill battle initially.

The New Norm Is Coming

Gradually, but certainly not at a snail’s pace, the acceptance of AI as a tool incorporated into the delivery of mental health services will become widespread. At first, it will be the new norm.

Eventually, it will be considered an obvious and expected part of all therapy practices. It will become the ho-hum norm. Just about all therapists who are still in business will use AI in one fashion or another. Not using AI at all is going to be a fringe practice.

This sooner-rather-than-later juncture, when the public perception is that therapists are customarily using AI, will shift gears for the mental health profession. The shift will entail differentiating how your use of AI is better than the use of AI by competing therapists. No longer will energy be spent on justifying AI usage at the get-go.

Instead, the emphasis will be on how your AI usage exceeds the AI leveraging by other less savvy and slower-to-adopt therapists.

Getting Started For The Future

A final thought for now.

My recommendation is that any therapist worth their salt, including newbies entering the profession, ought to already be getting up to speed on AI usage. Don’t wait. History is being made. You can guide your future.

As per the famous words of Dwight D. Eisenhower: “Neither a wise person nor a brave person lies down on the tracks of history to wait for the train of the future to run over them.” Take that sage advice to heart when it comes to the advent and adoption of AI in mental health. You’ll be glad you did.

Source: https://www.forbes.com/sites/lanceeliot/2025/10/14/public-perception-of-mental-health-therapists-who-make-use-of-ai-in-their-practice/