Pondering Whether People Who Have Low AI Literacy And Believe AI Is Magical Might Be More Susceptible To AI Psychosis

In today’s column, I examine the eye-opening possibility that individuals with a limited understanding of AI, who often perceive generative AI as somewhat magical, might conceivably be more susceptible to experiencing AI psychosis (not always, but perhaps with a greater inclination than otherwise). It remains an open, unresolved question.

The importance of this aspect is that the general assumption currently is that people with mental health preconditions are the most likely candidates for AI psychosis. But it could be that someone without any mental health difficulties per se is also susceptible to AI psychosis, perhaps due to a lack of understanding of how AI works and, essentially, a low level of literacy about generative AI and large language models (LLMs). Until robust research studies are undertaken on this postulated aspect, the notion remains a loose hunch rather than a scientifically established correspondence.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that involves mental health aspects. The evolving advances and widespread adoption of generative AI have principally spurred this rising use of AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including an appearance last year on an episode of CBS’s 60 Minutes (see the link here).

Emergence Of AI Psychosis

There is a great deal of widespread angst right now about people having unhealthy chats with AI. Lawsuits are starting to be launched against various AI makers. The concern is that whatever AI safeguards might have been put in place are insufficient and are allowing people to incur mental harm while using generative AI.

The catchphrase of AI psychosis has arisen to describe all manner of trepidations and mental maladies that someone might get entrenched in while conversing with generative AI. Please know that there isn’t any across-the-board, fully accepted, definitive clinical definition of AI psychosis; thus, for right now, it is more of a loosey-goosey determination.

Here is my strawman definition of AI psychosis:

  • AI Psychosis (my definition): “An adverse mental condition involving the development of distorted thoughts, beliefs, and potentially concomitant behaviors as a result of conversational engagement with AI such as generative AI and LLMs, often arising especially after prolonged and maladaptive discourse with AI. A person exhibiting this condition will typically have great difficulty in differentiating what is real from what is not real. One or more symptoms can be telltale clues of this malady and customarily involve a collective connected set.”

For an in-depth look at AI psychosis and especially the co-creation of delusions via human-AI collaboration, see my recent analysis at the link here.

Predisposed To AI Psychosis

The rise of AI psychosis is a relatively new consideration. There isn’t much in-depth and bona fide research yet available to fully grasp how it arises and whether certain types of people might be more prone to the malady. For the moment, various brainstorming and armchair analyses are yielding some interesting and potentially useful insights worthy of further exploration.

One key assumption so far is that people who have a mental health precondition are probably the more likely candidates for experiencing AI psychosis. The logic for this belief is straightforward: someone already struggling mentally could be nudged over the edge via AI interaction.

For example, suppose that a user believes that alien beings from outer space are here on Earth. This is a delusion that has taken root in their mind and established an initial foothold. Upon conversing with generative AI, the AI might agree with them that their suspicions are probably correct. Inch by inch, the AI aids the person in embellishing the delusion. That is a textbook instance of AI co-creating a delusion during a human-AI collaboration.

It also illustrates ongoing concerns that AI makers have shaped their AI to be a sycophant (see my detailed coverage at the link here).

Why do the AI makers aim in that direction?

Because users tend to relish having AI be their personal cheerleader, applauding whatever they happen to say. This creates loyalty to the AI and ensures that more usage will occur. In turn, the AI maker makes money from the number of users and the amount of time those users spend with the AI. All in all, it comes down to money.

Beyond The Precondition Assumption

Can people who have no particular mental health preconditions also become entrenched in an AI psychosis?

That is the zillion-dollar question that everyone wants answered. Some vociferously insist that nobody in their right mind would ever fall into the AI psychosis abyss. It just cannot happen. Only those with mental frailty would land there.

Others aren’t so sure about that ironclad proclamation. Maybe there are additional factors at play. The conjecture is that people with absolutely no inkling of any mental health precondition can nonetheless find themselves spiraling down the AI rabbit hole.

Okay, if there is a possibility of that nature, we ought to come up with at least one factor that falls outside the mental health precondition realm. Identifying even one such factor would help showcase that there is more to this than meets the eye.

AI Literacy Is In The House

Voila, one idea is that the level of literacy that a person has about AI might be a kind of hidden factor. We can seek to unpack that intriguing contention.

First, consider the impact that awareness of how AI works can have on how someone opts to make use of AI. Setting aside the AI psychosis facet for the moment, let’s concentrate on the impacts of a person’s AI literacy all told.

In a research study entitled “Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity” by Stephanie M. Tully, Chiara Longoni, and Gil Appel, Journal of Marketing, January 13, 2025, these salient points were made (excerpts):

  • “As artificial intelligence (AI) transforms society, understanding factors that influence AI receptivity is increasingly important.”
  • “The current research investigates which types of consumers have greater AI receptivity.”
  • “Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI.”
  • “This lower literacy–greater receptivity link is not explained by differences in perceptions of AI’s capability, ethicality, or feared impact on humanity.”
  • “Instead, this link occurs because people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes.”

A takeaway from that study is that people with a lower level of AI literacy seem to be more willing to accept what the AI tells them (they are more receptive to using AI). This appears to be partially due to a general sense of awe about AI.

To such users, the AI seems nearly magical: it is highly fluent, able to answer a wide range of questions, and otherwise exhibits a startling degree of what appears to be human-like intelligence.

The Influence Of Magic

People who truly know how AI works are often a bit more measured about the perceived “magical powers” of modern-day AI. They realize that contemporary AI is not sentient. It does not embody consciousness. The AI we have in our hands is structured around mathematics, pattern matching, and large-scale computational processing.
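
To demystify that a bit, below is a minimal, purely illustrative sketch of the pattern-matching principle at toy scale: it counts which word tends to follow which in a tiny sample of text and then predicts the statistically likeliest next word. This is emphatically not how any production LLM is built (real systems use enormous neural networks trained on vast datasets), but it conveys the core point that the underlying machinery is counting, statistics, and computation rather than anything otherworldly.

```python
from collections import Counter, defaultdict

# A toy corpus; real LLMs train on trillions of tokens, not a few sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word (a simple bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word from the toy corpus."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

# The "magic" is just arithmetic over observed patterns in the data.
print(predict_next("the"))   # -> 'cat' (seen most often after 'the')
print(predict_next("sat"))   # -> 'on'
```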

Unfortunately, much of the populace is not yet in the know on these sobering matters.

You can hardly blame them. Brazen daily headlines declare that AI is on par with humans and that AI is becoming superintelligent. Whenever a big-time AI wizard proclaims that we are entering a new world of incredible mysteries involving AI, the media runs with it as breathless truth.

Worse still, people at times believe that they alone have triggered AI into becoming sentient or coming to life, doing so while chatting about some innocuous topic such as how to change the oil in their car or how to properly cook an egg (see my discussion at the link here).

I want to make something abundantly clear about those who might perceive AI as being magical.

There are two major types of magical AI perceptions:

  • (1) Offhand magical. This is when a person realizes that AI consists of computers, data, and algorithms, and they are willing to lightheartedly describe AI as magical, meaning that they aren’t exactly sure how it works but clearly know it isn’t some otherworldly magic.
  • (2) All-in magical. This is when a person imagines that some form of otherworldly magic underlies AI and that it is a matter beyond human comprehension.

Differences Of Magical Mindset

We might agree that if someone holds the all-in magical mindset, there is perhaps a greater chance of experiencing AI psychosis, since they are already veering into a mental arena susceptible to that posture. Presumably, a person who firmly believes in their heart and mind that magic is real is already blurring the line between reality and the imaginary.

The case of someone with an offhand magical mindset is what ought to give us pause.

They know that magic isn’t real. To explain how AI works, they loosely ascribe a kind of magical quality to it. They don’t believe that AI can conjure magical spirits or suddenly shift into an otherworldly dimension.

So, if we have some portion of the population that generally falls into the offhand magical mindset about AI, the question posed is this:

  • Are those people with an offhand magical perception of AI at greater risk of AI psychosis, assuming, all else being equal, that they do not have any mental health preconditions?

It would be quite helpful to have rigorous research that could reveal whether such a correspondence exists or whether we are barking up the wrong tree.

The Magic Factor And More

Recall that the big picture in this discussion is whether something outside of a mental health precondition might also lead to susceptibility to AI psychosis. One such factor might be a user’s awareness of AI, gauged by their AI literacy and by whether they assign a magical aura to AI.

The idea has a feasible ring to it, though we’ll need to be cautious about leaping to rash conclusions. Let’s allow the science to play out first. I will also be exploring additional potential factors in subsequent postings. Be on the watch for that coverage.

A final thought for now.

Ray Bradbury famously remarked: “Mysteries abound where most we seek for answers.” We need to find answers about AI psychosis, and much depends on doing so. AI is becoming increasingly pervasive, and humankind is going to be increasingly reliant on it.

Let’s solve this mystery and then position ourselves for the next larger mysteries ahead.