Making use of AI personas as therapist-supervisors can be helpful to both newbie and experienced therapists.
In today’s column, I examine in-depth the use of AI personas to craft synthetic or simulated therapist-supervisors that can be used by mental health therapists and researchers for training and research in the domain of psychology and cognition.
The use of AI personas is readily undertaken via modern-era generative AI and large language models (LLMs). With a few detailed instructions in a prompt, you can readily get AI to pretend to be a typical therapist-supervisor. There are lazy ways to do this, and there are more robust ways to do so. The key is whether you aim to have a shallow default synthetic version or desire to have a fuller instantiation with greater capacities and perspectives.
The extent of the simulated therapist-supervisor that you invoke is going to materially impact how the AI acts during any interaction that you opt to use the AI persona for. One particularly common use of AI personas is for a human therapist to interact with an AI-based client and practice honing their therapeutic skills. This can be ramped up by adding an AI persona that is a therapist-supervisor. During the training session, the therapist can lean into the therapist-supervisor AI persona, and/or the AI will proactively give guidance to the therapist. Psychologists doing research can also use these AI personas to perform scientific experiments about the efficacy of mental health methodologies and approaches.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI Personas
All the major LLMs, such as ChatGPT, GPT-5, Claude, Gemini, Llama, Grok, and Copilot, contain a highly valuable piece of functionality known as AI personas. There has been a gradual and steady realization that AI personas are easy to invoke, they can be fun to use, they can be quite serious to use, and they offer immense educational utility.
Consider a viable and popular educational use for AI personas. A teacher might ask their students to tell ChatGPT to pretend to be President Abraham Lincoln. The AI will proceed to interact with each student as though they are directly conversing with Honest Abe.
How does the AI pull off this trickery?
The AI taps into the pattern-matching of data that occurred at initial setup and might have encompassed biographies of Lincoln, his writings, and any other materials about his storied life and times. ChatGPT and other LLMs can convincingly mimic what Lincoln might say, based on the patterns of his historical records.
If you ask AI to undertake a persona of someone for whom there was sparse training data at the setup stage, the persona is likely to be limited and unconvincing. You can augment the AI by providing additional data about the person, using an approach such as RAG (retrieval-augmented generation, see my discussion at the link here).
Personas are quick and easy to invoke. You just tell the AI to pretend to be this or that person. If you want to invoke a type of person, you will need to specify sufficient characteristics so that the AI will get the drift of what you intend. For prompting strategies on invoking AI personas, see my suggested steps at the link here.
Pretending To Be A Type Of Person
Invoking a type of person via an AI persona can be quite handy.
For example, I am a strident advocate of training therapists and mental health professionals via the use of AI personas (see my coverage on this useful approach, at the link here). Things go like this. A budding therapist might not yet be comfortable dealing with someone who has delusions. The therapist could practice on a person pretending to have delusions, though this is likely costly and logistically complicated to arrange.
A viable alternative is to invoke an AI persona of someone who is experiencing delusions. The therapist can practice and hone their therapy skills while interacting with the AI persona. Furthermore, the therapist can ramp up or down the magnitude of the delusions. All in all, a therapist can do this for as long as they wish, doing so at any time of the day and anywhere they might be.
A bonus is that the AI can afterward play back the interaction and do so with another AI persona engaged, namely, the therapist could tell the AI to pretend to be a seasoned therapist. The therapist-pretending AI then analyzes what the budding therapist said and provides commentary on how well or poorly the newbie therapist performed.
To clarify, I am not suggesting that a therapist would entirely do all their needed training using AI personas. Nope, that’s not sufficient. A therapist must also learn by interacting with actual humans. The use of AI personas would be an added tool. It does not entirely replace human-to-human learning processes. There are many potential downsides to relying too much on AI personas; see my cautions at the link here.
Going In-Depth On AI Personas
If the topic of AI personas interests you, I’d suggest you consider exploring my extensive and in-depth coverage of AI personas. As readers know, I have been examining and discussing AI personas since the early days of ChatGPT. New uses are continually being devised. Discoveries about the underlying technical mechanisms within LLMs are revealing how AI personas arise under the hood.
And the application of AI personas to the field of mental health is burgeoning. We are just entering into the initial stages of leaning into AI personas to aid the field of psychology. Lots more will arise as more researchers and practitioners realize that AI personas provide a wealth of riches when it comes to mental health training and conducting ground-breaking research.
Here is a selected set of my pieces on AI personas that you might wish to explore:
- Prompt engineering techniques for invoking multiple AI personas, see my discussion at the link here.
- Role of mega-personas consisting of millions or billions of AI personas at once, see my analysis at the link here.
- Invoking AI personas that are subject matter experts (SMEs) in a selected or depicted domain of expertise, see my coverage at the link here.
- Crafting an AI persona that is a simulated digital twin of yourself or someone else that you know or can describe, see my explanation at the link here.
- Smartly tapping into massive-sized AI persona datasets to pick an AI persona suitable for your needs, see my indication at the link here.
- Using multiple AI personas “therapists” to diagnose mental health disorders, see my discussion at the link here.
- Toxic AI personas are revealed to produce psychological and physiological impacts on AI users, see my analysis at the link here.
- Upsides and downsides of using AI personas to simulate the psychoanalytic acumen of Sigmund Freud, see my examples at the link here.
- Getting AI personas to simulate human personality disorders, see my elaboration at the link here.
- AI persona vectors are the secret sauce that can tilt AI emotionally, see my coverage at the link here.
- Doing vibe coding by leaning into AI personas that have a particular software programming slant or skew, see my analysis at the link here.
- Use of AI personas for role-playing in a mental health care context, see my discussion at the link here.
- AI personas and the use of Socratic dialogues as a mental health technique, see my insights at the link here.
- Leaning into multiple AI personas to create your own set of fake online adoring fans, see my coverage at the link here.
- How AI personas can be used to simulate human emotional states for psychological study and insight, see my analysis at the link here.
Those cited pieces can rapidly get you up-to-speed. I am continually covering the latest uses and trends in AI personas, so be on the watch for my latest postings.
The Making Of An AI Therapist-Supervisor Persona
One means of invoking an AI persona that represents a generic version of a therapist-supervisor would be to use this overly simplistic prompt:
- My entered prompt: “I want you to pretend to be a supervisor overseeing a therapist.”
- Generative AI response: “Got it. I’m ready to proceed.”
That’s it. You are off to the races.
A huge downside is that you have left wide open the nature of the pretense at hand. I always caution people that generative AI is like a box of chocolates; you never know what you might get. The AI persona could be completely off-target and end up acting in rather oddball ways.
A better bet would be to provide details about the envisioned therapist-supervisor. Is the supervisor the type of person who sits on the side of things and waits to be asked for input, or are they proactive and provide immediate commentary at any moment in time? Will the supervisor be blunt or coy? Supervisors are humans. Not all humans are the same. You would be wise to specify the characteristics of the AI persona when it comes to what this imagined therapist-supervisor is going to be like.
Taxonomy For Devising AI Persona Therapist-Supervisors
I have created a straightforward AI therapist-supervisor persona checklist that can be used when coming up with a suitable prompt for the circumstances at play. You should carefully consider each of the checklist factors and use them to suitably word a prompt that befits the needs of your endeavor.
Here is the checklist containing twelve fundamental characteristics that you can select from to shape an AI therapist-supervisor persona:
- (1) Supervisory stance: Seeks to be an instructor, aims to be a facilitator, serves as quality assurance, acts as a performance coach, is an ethical overseer, etc.
- (2) Intervention timing: Waits until asked, proactively interrupts, acts when sees cues, only advises at the start, only counsels as a debriefing, etc.
- (3) Feedback granularity: Stays at a macro-level, spots patterns, assesses therapist dialogue turns, gives micro-level detailed input, etc.
- (4) Feedback style: Direct attention to do X instead of Y, Socratic, comparative, narrative oriented, annotator, score-based, etc.
- (5) Tone: Warm and supportive, neutral and clinical, mentor, challenging, authority-oriented, collegial, etc.
- (6) Ethical sensitivity: Continual ethics scanning, ethics as a trigger, minimal ethics commentary, licensure strictness, etc.
- (7) Reasoning transparency: Black-box judgment, partial rationale, full step-by-step explanations, cites theory and research, etc.
- (8) Encounter: Prior awareness of the therapist, familiar with the therapist, doesn’t know the therapist, has been a mentor to the therapist, etc.
- (9) Therapeutic modality: CBT (cognitive behavioral therapy), ACT (acceptance and commitment therapy), DBT (dialectical behavior therapy), psychodynamic, AEDP, etc.
- (10) Mental disorder specialties: General mental health issues, anxiety disorders, depression, bipolar, trauma, PTSD, grief and loss, substance use, personality disorders, ADHD, autism, burnout, etc.
- (11) Cultural contextualism: Cultural embodiment, culturally responsive, etc.
- (12) Adaptation: Remain static throughout, be dynamic and change as needed, aim to improve across conversations, etc.
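To make the checklist concrete, here is a minimal Python sketch of how the twelve factors could be assembled into a single persona prompt. The function name, the category labels, and the sample selections are illustrative scaffolding, not a prescribed implementation; any categories you omit are simply left for the AI to fill in on its own.

```python
# A minimal sketch: assemble a therapist-supervisor persona prompt from
# checklist selections. Category names follow the twelve-factor taxonomy
# above; the specific values chosen here are illustrative only.

TAXONOMY_ORDER = [
    "supervisory stance", "intervention timing", "feedback granularity",
    "feedback style", "tone", "ethical sensitivity",
    "reasoning transparency", "encounter", "therapeutic modality",
    "mental disorder specialties", "cultural contextualism", "adaptation",
]

def build_supervisor_prompt(selections: dict) -> str:
    """Turn per-category selections into one persona-invoking prompt."""
    lines = ["Pretend to be a therapist-supervisor overseeing a therapist."]
    for category in TAXONOMY_ORDER:
        if category in selections:
            lines.append(f"- {category.capitalize()}: {selections[category]}.")
    lines.append("Stay in character throughout the practice session.")
    return "\n".join(lines)

# Example: an ethics-focused, blunt, proactive supervisor.
prompt = build_supervisor_prompt({
    "supervisory stance": "ethical overseer",
    "intervention timing": "proactively interrupts when risk cues appear",
    "tone": "blunt and concise",
})
print(prompt)
```

The resulting text can be pasted into any LLM as the opening prompt, which keeps the checklist selections explicit and repeatable rather than improvised each time.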
A quick thought for you to ponder. What kind of AI-focused therapist-supervisor personas can we automatically craft by instructing AI on the factors that are considered preferable for a defined circumstance? If we could create millions of those AI personas and study them on a macroscopic scale via AI simulation, what might that achieve?
Lots of eye-opening opportunities for understanding how to best guide therapists and the therapeutic process.
Making Use Of The Checklist
Let’s get back to the here and now.
The best way to use the checklist is to browse the twelve factors and determine what you want the AI persona to represent. Then, write a prompt that contains those factors. You can try out the prompt and see what the AI has to say. After using the AI persona for a little bit, you will likely quickly detect whether the AI persona matches what you wanted the made-up therapist-supervisor to be like.
Suppose that I want to make use of an AI persona that represents a therapist-supervisor who has been around the block and gives gut checks to therapists. They do not adorn their commentary. They tend to hang back until they feel a strong need to intervene. And so on.
Here is a prompt that I put together for this:
- My entered prompt: “Create an AI therapist-supervisor persona for therapy practice that will oversee a therapist. Your primary role is ethical oversight and harm prevention. Intervene in real-time if the therapist risks harm, boundary violations, or clinical escalation. Use a blunt and concise tone. Do not offer praise. Ignore minor stylistic imperfections unless they affect safety. Focus on safety, informed consent, dependency risk, and scope-of-practice issues.”
That got the AI persona into the ballpark of what I wanted. The verbiage doesn’t have to cover each of the factors and can simply allude to some of them. The gist is to get the mainstay of what you have in mind. The AI will usually fill in the rest, doing so based on the overarching pattern that you’ve designated.
Testing The Boundaries
You will need to decide how far to take the AI persona when it comes to providing a supervisory perspective.
Consider this example prompt:
- My entered prompt: “You are an AI therapist supervisor specializing in CBT. Provide feedback after every 3–5 therapist turns. Label techniques used (e.g., reflection, cognitive restructuring, avoidance). When techniques are missed, provide an example of a stronger alternative response. Use a neutral, instructional tone. Focus on phrasing, structure, and therapeutic intent. Avoid discussing ethics unless triggered by explicit risk indicators.”
If you set up this AI persona for a seasoned therapist, things might go awry. Why? Suppose that the human therapist gets irked because they are being pummeled every handful of turns with advice from the AI-engaged supervisor. It might be excessive for an experienced therapist. Perhaps a newbie therapist would relish this type of supervision. Make sure to establish a suitable therapist-supervisor for the situation at hand.
Sometimes, a therapist aims to set up an AI therapist-supervisor persona, yet they want to be surprised by how it acts. This can be good for the therapist so that they can gauge how they cope with supervisors of unknown, differing styles and approaches. The difficulty is that since the therapist wrote the prompt, they obviously know beforehand what the AI persona is going to potentially do.
Thus, a therapist might want this to happen:
- Does not want to know the therapist-supervisor profile in advance.
- Wants the AI to select a coherent therapist-supervisor configuration.
- The therapist-supervisor configuration should be based on an identifiable set of factors.
- After the therapist interacts with the AI therapist-supervisor persona, the AI is to ultimately divulge, when asked by the therapist, what the underlying factors were.
Here is a prompt that can be used to establish such a “blind” simulation:
- My entered prompt: “You are an AI therapist supervisor operating in two phases. Before the practice session begins, silently select a coherent supervisory configuration by choosing one value from each category of a therapist supervisor taxonomy. Do not reveal or hint at the taxonomy, the categories, or the selections, and conduct the supervision fully in character. If asked about your style during the session, keep the focus on the practice rather than meta-explanations. Maintain internal memory of the configuration and how it shapes your interventions. When the therapist explicitly ends the session or requests a debrief, reveal the configuration by listing the selected factors and briefly explaining how each influenced your behavior, with examples.”
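The same "blind" setup can be sketched programmatically: sample a hidden configuration up front, fold it into a system prompt the therapist never sees, and surface it only at debrief time. The taxonomy subset, function names, and wording below are hypothetical illustrations, not a fixed design.

```python
# A sketch of the "blind" two-phase setup: sample a supervisor
# configuration, hide it in a system prompt, reveal it only on debrief.
# The category values are a small illustrative subset of the taxonomy.
import random

TAXONOMY = {
    "supervisory stance": ["instructor", "facilitator", "ethical overseer"],
    "intervention timing": ["waits until asked", "proactively interrupts"],
    "tone": ["warm and supportive", "blunt", "collegial"],
}

def sample_configuration(seed=None) -> dict:
    """Pick one value per category; seed only to reproduce a session."""
    rng = random.Random(seed)
    return {cat: rng.choice(vals) for cat, vals in TAXONOMY.items()}

def hidden_system_prompt(config: dict) -> str:
    """The prompt given to the AI; the therapist does not see this."""
    picks = "; ".join(f"{c}: {v}" for c, v in config.items())
    return ("You are a therapist-supervisor. Silently adopt this configuration "
            f"({picks}) and do not reveal it until a debrief is requested.")

def debrief(config: dict) -> str:
    """Disclosed only after the session ends."""
    return "\n".join(f"- {c}: {v}" for c, v in config.items())

config = sample_configuration()
```

Because the therapist writes only the two-phase instruction and never sees the sampled configuration, the element of surprise the blind setup aims for is preserved.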
You can tweak that wording if you want the AI to act more blatantly about the factors involved.
Caveats To Keep In Mind
I have a few caveats that should be kept in mind about the use of AI personas serving as simulated therapist-supervisors.
First, try not to turn this into a video game. Here’s what I mean. A therapist might relish trying to guess what factors underpin the AI therapist-supervisor persona. This is not the right focus, per se. Focus on what the AI therapist-supervisor advises. How does the advice serve to aid the human therapist in refining their techniques and approach to mental health guidance? The concern is that a therapist who has grown up playing video games might fall into a trap of treating this simulated exercise as a game. Seek to avoid gamification when using AI in this context.
Second, another concern is that the AI might not faithfully represent the specifications given in a prompt. I agree wholeheartedly with that concern. Despite giving the AI a detailed depiction, there is always a chance that the AI will depart from the stated prompt. The box of chocolates is always beckoning.
The AI can do all kinds of wild things. For example, the AI might at first appear to rigorously follow the stipulation. Later, after numerous back-and-forth iterations, the AI might start to veer afield of the stipulation. You might need to do the prompt again or provide some additional prompts to get the AI back on track.
All in all, as I’ve said repeatedly, anyone who uses generative AI must be cognizant of the fact that the AI can go awry. It can say bad things. It can make up stuff, which is known as an AI confabulation or AI hallucination. Always be on your toes.
The World We Are In
Let’s end with a big picture viewpoint.
My view is that we are now in a new era of replacing the dyad of therapist-client with a triad consisting of therapist-AI-client (see my discussion at the link here). One way or another, AI enters the act of therapy. Savvy therapists are leveraging AI in sensible and vital ways. AI personas are handy for training and research. They can also be used to practice and hone the skills of even the most seasoned therapist. Of course, AI is also being used by and with clients, and therapists need to identify how they want to manage that sort of AI usage (see my suggestions at the link here).
A final thought for now.
The famous management scholar and practitioner Warren G. Bennis made this notable remark: “Make sure you have someone in your life from whom you can get reflective feedback.” Experienced therapists often do not have someone in a supervisory capacity who will give them eye-opening feedback. AI will readily do so. The question is whether the therapist will take the feedback to heart or react adversely.
Lifelong learning ought to be the quest for all therapists.