Watch Out For Media Rage-Baiting About The Topic Of AI For Mental Health

In today’s column, I examine the ongoing efforts of the media to “rage bait” about the topic of AI for mental health.

What is rage bait?

First, Oxford University Press has anointed “rage bait” as the 2025 Word of the Year. It is an increasingly popular slang term. In case you aren’t familiar with this powerful catchphrase, it refers to online content that is intentionally devised to elicit anger and rage. This is a concerted effort not only to get people to click on an article or posting (referred to as clickbait), but to go further and provoke them into emotional wrath. The aim is to hijack your emotions and goad you into responding.

It turns out that rage-baiting has also been used to stir up raw emotions concerning the advent of AI that provides mental health insights. This encompasses both conventional generative AI and large language models (LLMs), as well as specialized LLMs that are purpose-built for mental health guidance.

I’d like to take a moment and call out the media rage-baiting and set the record straight.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes (see the link here).

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice.

Banner headlines in August of this year accompanied a lawsuit filed against OpenAI for its lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI undertaking untoward acts, such as insidiously helping users co-create delusions that can lead to self-harm.

For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Rage Baiting Is All Around Us

Let’s focus on rage-baiting associated with the use of AI for mental health.

Here are a few illustrative indications of what I consider to be rage bait in this realm:

  • “AI Shrinks Are Hurting Our Kids”
  • “The Therapy Bot That Told Someone to Do Evil Acts”
  • “Big Tech Wants to Replace Your Therapist With a Machine”
  • “Inside the AI Industry That’s Selling Fake Empathy”

The insidious nature of these headlines is that they contain a kernel of truth.

That’s the best way to compose rage bait. If the bait is completely and obviously false, the odds are that people will be wise enough to pass it by. Banner headlines need to be just close enough to some kind of truth to truly hook a person.

Furthermore, rage bait must stir emotional responses. The example of “AI Shrinks Are Hurting Our Kids” is cleverly worded to invoke your natural instinct to protect children. Anything that is harmful to kids is going to get your dander up. Each illustrative instance does the same thing.

All in all, rage bait must contain two crucial elements to be effective. It must have the right kind of bait, and it must be rage-inducing. A rage bait that doesn’t actually get rage going is merely on par with clickbait; the step upward is the rage. Meanwhile, a headline that sparks rage but doesn’t have the right bait won’t get people to look at the posting.

Rage bait is a double whammy.

The Kernel Of Truth

I’d like to dive into the headline that said, “AI Shrinks Are Hurting Our Kids.”

Is the headline true or false? Well, that’s not quite so easy an answer because it is both true and false at the same time. Allow me to elaborate.

There is no doubt that AI for mental health can be harmful to non-adults. I recently analyzed a survey that showcased some of the harms associated with minors who have been relying on AI for their mental health guidance (see the link here). Thus, yes, there is a possibility that AI could hurt kids.

In that same survey, the researchers pointed out that 93% of the teens reported that they found the AI advice about mental health to be helpful. Yes, you got that right. The non-adults who were using AI for this purpose expressed satisfaction to an amazingly high degree.

How could that be? It makes abundant sense since they can use the AI without having to make a big deal out of it to their parents, they can use the AI anywhere and at any time, and, by and large, for simple matters, the generic LLMs such as ChatGPT, GPT-5, Claude, Llama, and Grok will give reasonably sensible answers about mental health.

I want to emphasize that the survey covers both the upsides and the downsides of non-adults using AI for mental health. If you only read the downsides and pretend that the upsides do not exist, the headline about harming kids is perhaps on target. It is a half-truth.

You could just as easily have worded the headline this way: “AI Shrinks Are Saving Our Kids.” It is again a half-truth. The more balanced wording would be “AI Shrinks Are Helping And At Times Hurting Our Kids.” But that isn’t much of a rage bait.

The way to turn it into rage bait requires unbalancing the line and leaning into the hurtful properties of AI.

The Four Tricks To Devise Rage Bait

Now that we’ve seen how a rage-bait instance can be composed, let’s turn our attention to the four key rage-bait framings that the media uses in the realm of AI for mental health. I hope that this arms you for dealing with the tsunami of rage bait that keeps piling up in the media.

Here are the four rage-baiting techniques:

  • (1) The harm shockers
  • (2) The replacement nightmare
  • (3) The villainous malevolence
  • (4) The AI ignorance factor

Those four techniques work especially well in the AI and mental health context.

Here’s why.

When people read or hear about AI, they often immediately have a visceral reaction that is conditioned on years of sci-fi stories and TV/films. AI invokes imagery of the future. Maybe AI is going to enslave humanity. Perhaps AI is going to completely exterminate humans. There has been plenty of chatter about the existential risk of AI and the so-called probability of doom, phrased as p(doom). For my coverage on the AI existential risk conundrum, see the link here.

In addition to an instant reaction to AI per se, any form of expression about mental health is going to garner equally keen interest. We are all worried about mental health. Society seems to be getting worse when it comes to mental well-being. Mental health is both a societal topic and a personal one. There is a lot of emotional tonnage associated with mental health as a topic.

Bam, combine AI and mental health and you get yourself a potentially eye-catching, eyebrow-raising mixture. Rage bait on the topic is almost as easy as falling off a log. People are primed and ready.

The Harm Shockers

A harm shocker is a headline or story that highlights a worst-case scenario.

Suppose that out of a thousand teens who are surveyed, one says that the AI told them they were mentally messed up. Is that worrisome? Sure. Does that one instance tell the whole story? Nope.

A rage bait approach doesn’t worry about the 999 that seemed not to have any issues. The aim will be to tout that the AI told a teen they are mentally messed up. This becomes the hook. You then, deep in your heart, want to find out why the AI did this. You want to find out what happened to the teen. It is an emotional roller coaster. You also tend to assume that if the AI did it once, the AI is probably doing so millions of times.

The gamble is that the title alone of the story would maximize your outrage and stimulate your disgust. That’s the beauty, as it were, of a harm shocker.

The Replacement Nightmare

A replacement nightmare tries to explicitly indicate or at least imply that AI is taking over, and humans are being set aside.

You might remember the headline that I earlier noted: “Big Tech Wants to Replace Your Therapist With a Machine,” and maybe you astutely observed that the implication is the wholesale replacement of human therapists. Is it true that AI makers are driven by a heady desire to get rid of human therapists? Though maybe some AI developers have that in mind, I would wager that the overarching interest is to provide mental health support at scale. The hope is that AI can democratize mental health care.

I’m not saying that this wouldn’t potentially undermine the hiring of human therapists. On the other hand, for the foreseeable future, I’ve been predicting that AI is going to bolster the need for human therapists.

How so?

The idea is straightforward. More people will be tapping into mental health guidance via their use of ubiquitous AI. Those people will find that AI isn’t going to fully meet their needs. They will, ergo, seek out a human therapist. AI becomes a feeder system. People get a taste of mental health guidance and are more open to getting even more of it. This is going to be a boon for human therapists, as I lay out at the link here.

The Villainous Malevolence

Villainous malevolence is a push-button way to get people to react to what seems to be faceless, cold-hearted companies that want to crush people and treat them like dirt.

The previous example of naming Big Tech as wanting to replace human therapists with AI is a perfect showcase of this tactic. What do you think when you hear or see the phrase Big Tech?

A lot of people perceive tech firms as seeking profit at any cost. Those AI makers don’t care about your mental health. They care about making a buck from your mental health. You are a means to an end. By getting you to use the AI, they can harvest your entered data and monetize it. On and on this goes.

Once again, this taps into your instinctive mores and gets you riled up.

The AI Ignorance Factor

The AI ignorance factor has to do with a pervasive lack of understanding about the capabilities of modern-era AI.

You wouldn’t be at fault for not being cognizant of what today’s AI can and cannot do. The media is awash with wild claims. AI can walk on water. AI leaps tall buildings with a single bound. Trying to discern tall tales versus real-world capabilities is a tough row to hoe.

People are already on the edge of their seats about what AI is going to do next. A rage bait attempts to dig into your psyche and frighten you into believing that AI has finally crossed the line. You knew that one day this would happen, and the headline or story opts to get a fire going inside you that AI is now beyond control.

A handy rage-baiting catch-all.

Do Not Be Misled By The Mass Media

The crux of handling rage bait is not to let it hook you, nor let it fry you.

Questions that ought to come to mind include:

  • Is the headline an apparent rage-baiting framing?
  • Does the source have a journalistic track record or is it fly-by-night?
  • Is there more than one reliable source saying the same thing?
  • Can I keep my rage from being triggered?
  • What kinds of sneaky words are being employed?
  • Etc.

A final thought for now.

Franklin D. Roosevelt famously made this remark: “The only thing we have to fear is fear itself.”

I’d like to add that another fear is that people will be mindlessly riled up by rage-baiting. They might then take action that is based on falsehoods and trickery. AI for mental health has tremendous upsides for society across the board. We definitely need to address the downsides and do what we can to mitigate or curtail them.

Avoid getting trapped by rage bait and keep your mind balanced when it comes to the emerging realm of AI for mental health.
