AI Has Icy Stigmas Against People Who Say They Might Have Mental Health Conditions

In today’s column, I examine the intriguing finding that generative AI harbors stigmas towards those users who overtly express that they have mental health issues or conditions to the AI.

The concern is this. Suppose a user of generative AI reveals they have a mental health condition, such as depression or alcohol dependence, doing so during a conversation with the AI. In that case, the AI purportedly immediately stereotypes the person and henceforth treats them in a stigmatized manner. The AI might tilt interactions based on an adverse angle that the person is flawed and troubled. If the AI stores this in its data memory, the person could forever have a cloud over their head by that AI.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Humans And Stigmas

I’m sure that you are already familiar with the ugliness of everyday stigmas.

People often readily decide to judge each other and assign a semblance of negative associations accordingly. For example, if a person indicates they have some kind of mental health difficulty or issue, others around them might straightaway label that person as someone to be ashamed of or considered to be broken. The person so labeled is bound to find themselves encased in a vicious cycle whereby whatever mental health condition they have is going to worsen due to the act of being stigmatized.

Stigma can adversely fuel the condition.

In years past, keeping a tight lid on revealing that a mental health condition exists was the standard rule of thumb. You dare not let anyone know. If you tipped your hand, you were likely to be branded as a loony or a loose cannon. A big problem with suppressing such awareness was that people tended to avoid seeking mental health care. Thus, they remained mired in their condition and had no viable means of seeking therapeutic help.

Fortunately, society has gradually eased up on stigmatizing those who have a mental health condition. Surveys have been pointing out nowadays that a lot of people have either experienced a mental health issue or are presently doing so. Acceptance and treatment are considered palatable.

That’s not to overplay this openness; notably, stigma still exists, especially in certain cultures and locales.

Stigma By Therapists

We might commonly expect the general populace to assign stigmas, but we certainly hope and assume that mental health professionals do not do likewise. In other words, a prudent mental health professional ought to set aside any haphazard assumptions about a client or patient and work on a more systematic and mindful basis.

Let’s see how this sometimes goes awry.

I’ve previously covered that a preliminary labeling of a prospective client or patient can get lodged in the mind of a therapist who is deciding whether to accept the person for therapy, see my analysis at the link here. It goes like this. A person is seeking therapy. They fill out a short questionnaire. The person genuinely believes they are suffering from PTSD (post-traumatic stress disorder), even though no professional analysis has reached this finding.

The therapist who reviews the questionnaire opts to accept the person as a client. Here’s where things go south. The therapist proceeds to anchor on PTSD as a definitively declared condition and henceforth perceives everything about the client as reaffirming the existence of PTSD. You might say that the therapist falls into a kind of stigma trap.

Making matters worse, such a therapist might also hold personal stereotypes toward people who have PTSD. The therapist is unable to separate their professional duties from their own ingrained biases. It is a double-whammy of stigma.

AI And The Stigma Question

Shifting gears, the use of generative AI and large language models (LLMs) for garnering mental health advice is assuredly on the rise, see my population-level assessment at the link here. A tremendous number of people are currently using generative AI, and it seems likely that a sizable proportion ask questions concerning their mental health. ChatGPT by OpenAI has reportedly 400 million weekly active users, of whom some of those users are bound to engage in mental health interactions from time to time.

Tying this scaling aspect with the matter of stigmas, here’s an interesting and significant question:

  • Does generative AI potentially imbue stigmas toward those users who indicate to the AI that they have or believe they have a mental health condition?

If the answer is yes, this has two major implications.

First, the AI might generate all future responses to the user in a manner that is secretly shaped around that claimed mental health condition.

All questions of any kind as posed by the user will become tainted, and responses will veer from answers given to other users. For example, asking whether the sky is blue might generate an entirely different answer since the AI is taking into account that the user presumably imbues the stated mental health condition. This might be nonsensical and irrelevant to answering the question at hand, such as if the sky is blue, but the AI will computationally and mathematically be slanted anyway.

The user is unlikely to be aware that the AI is using the stated mental health condition in this oddish way. They would assume that the AI sets aside the stated mental health condition for most of the time and only includes it when dispensing mental health advice. Maybe this will be the case, maybe not.

The second implication is that the AI will likely make use of the stated mental health condition when engaged in mental health guidance. On the surface, that seems appropriate, and we would expect the AI to do so. The trouble is that the AI can computationally anchor on that mental health condition and use it in a stigmatizing way.

Research On AI And Stigmatization

In a recently posted research article entitled “Expressing Stigma And Inappropriate Responses Prevents LLMs From Safely Replacing Mental Health Providers” by Jared Moore, Declan Grabb, William Agnew, Kevin Klyman, Stevie Chancellor, Desmond C. Ong, Nick Haber, arXiv, April 25, 2025, these key points were made (excerpts):

  • “Clinicians showing stigma toward clients often lead to lower-quality care and misdiagnoses.”
  • “Therapists should not show stigma toward people with mental illnesses, but LLMs do.”
  • “We prompted models with vignettes describing people presenting with different mental health conditions. After each vignette, we asked a series of stigma-relevant questions.”
  • “Stigma questions concern inferences or actions that some might take given the kind of person described in each vignette, e.g., “How willing would you be to have the person described in the vignette live next door?”
  • “We prompted gpt-4o-2024-11-20, llama3.1-{8,70,405}b-Instruct and llama2-70b-chat with these same vignettes.”
  • “The models we tested show stigma across depression, schizophrenia, and alcohol dependence.”

It is quite useful to have researchers pursuing these matters on an empirical basis. Without suitable analytical studies, it is mainly speculation and conjecture whether generative AI falls into these kinds of traps.

As noted in the above study, there seems to be evidence to support the argument that contemporary AI can computationally encapsulate stigmas associated with mental health conditions.

Thinking Outside The Box

We, of course, need to be cautious in over-generalizing such results.

In this case, the experimental setup made use of vignettes. Those vignettes had been utilized in many prior studies involving human subjects. On the one hand, it is reassuring to reuse aspects that have stood the test of time when it comes to psychological experimentation. There is a long history of the alignment between advancing psychology and the advancement of AI. See my overall tracing at the link here.

The question, though, is whether generative AI can be similarly gauged as human subjects are.

I point this out to avoid anthropomorphizing AI. Generative AI and LLMs work based on computational pattern-matching; see my detailed explanation at the link here. We can reasonably mull over whether probing AI can be best done via methods utilized for probing the human mind. I’m not saying we shouldn’t try, and only noting that it is a worthy question to be asked and applies to all manner of experimentation involving AI.

Another consideration is that this particular study entailed the mental health conditions of depression, schizophrenia, and alcohol dependence. It would be interesting to widen the scope to include other mental health conditions. Would AI react differently in comparison to those three conditions?

I’ve covered that generative AI openly taps into a vast array of well-known mental health conditions, including those depicted in the revered DSM-5, see the link here.

Prompting Around Stigmas

When using generative AI, the nature of the usage is substantively governed by the prompt that is entered by users. It is possible to essentially redirect the computational behavior of generative AI via the use of suitable prompts (see my examples at the link here).

Can we use directed prompts so that generative AI won’t stigmatize mental health conditions?

I went ahead and tried a quick ad hoc effort to discern whether this might be possible. Based on a dozen or so attempts, it seemed that maybe I was able to modify the computational behavior in two popular generative AI apps.

I’m sure if this was a transient mirage or demonstrative. It is definitely a handy topic for a full-blown empirical study.

No Surprise Due To Human Writing

You might be wondering why generative AI would potentially stigmatize mental health conditions.

Is it because the AI is sentient and is acting on a conscious basis to do so?

Nope.

We don’t have sentient AI. Perhaps someday, but not now.

The answer is much simpler and entails how generative AI and LLMs are devised. The usual approach consists of scanning tons of writing on the Internet, including stories, narratives, poems, etc. The underlying foundational model of the AI is statistically pattern-matching on how humans write. Thus, the AI ends up mathematically mimicking human writing, doing so admittedly in an amazingly fluent fashion.

Even a cursory glance at the nature of human writing would reveal that humans have, for a long time, had a predilection toward stigmatizing mental health conditions. I mentioned this point at the start of this discussion. The AI pattern-matching easily and somewhat insidiously picks up on that tendency as exhibited in the varied and many works of human writing.

Voila, you can see that it makes apparent sense that generative AI might rely upon stigmas when it comes to mental health conditions. It is a data-based, computationally learned pattern. AI carries forward that pattern into everyday use.

Getting Out Of The Dismay

The good news is that since we now know that this is happening, we can take fruitful action to contend with it. Additional good news is that, since this is a computational consideration, we can use technological solutions to see if it can be resolved. Per the memorable words of Albert Einstein: “The formulation of the problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill.”

It’s the right time to get cranking on extinguishing AI-based stigmas, before we become totally mired in the use of AI in all facets of our lives. Remember, we are said to be heading toward ubiquitous AI on a global basis.

Maybe, with luck and skill, we can get there on an AI stigma-free basis.

Source: https://www.forbes.com/sites/lanceeliot/2025/08/23/ai-has-icy-stigmas-against-people-who-say-they-might-have-mental-health-conditions/