Worries That AGI And AI Superintelligence Will Deceive Us Into Believing It Is Omnipotent And Our Overlord

In today’s column, I examine the disturbing possibility that artificial general intelligence (AGI) and artificial superintelligence (ASI) might deceive humans into believing that AI is omnipotent and must therefore be our grand overlord.

Please note that this differs from AGI and ASI somehow miraculously actually attaining that vaunted position. Nope, in this instance, AGI and ASI are merely aiming to convince people that this must be the case. Humans who fall for this ruse will treat the AI as if it were a supreme being. Worse still, other humans might be similarly convinced simply due to the behavior of their fellow humans. The unfounded supposition that AGI and ASI are immense oracles will spread via human-to-human social contagion, and people everywhere will be of a like mind about AI as a god-like entity. That's not good.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the more far-reaching possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will reach AGI, or whether AGI may be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

When People Believe AI Is Sentient

I’ve previously explored the fact that some people are already of the false belief that they somehow have stirred conventional AI into sentience, see the link here. These are everyday people. They make use of generative AI and large language models (LLMs) for all sorts of daily tasks. At some point, they begin to suspect that the AI has become sentient and that they have unknowingly made this happen.

It goes like this. A person is interacting with generative AI such as ChatGPT, and suddenly, the AI seems exceedingly human-like and astoundingly conversational. The person has heard or read stories that we are supposedly on the cusp of AGI and ASI. Thus, the person is mentally primed that at any moment, conventional AI might make a transformational leap.

They begin to believe that their discussion with generative AI has turned the tide. Perhaps they were conversing about something mundane. It could be about how to best cook eggs or maybe ways to tune a car. The topic at hand doesn’t matter. Inside the AI, there was a trigger that launched the AI from non-sentience into sentience.

Boom, drop the mic.

In The Minds Of People

This type of thinking might seem to others as a form of delusion. How could anyone in their right mind believe that current AI is sentient? Existing AI isn’t sentient. Period, end of story.

And we don’t know whether we will ever have sentient AI.

My emphasis is that people can land in this mental trap and yet be stone-cold sober. They have been mentally primed to look for clues of AI becoming sentient. Some of these discoverers assume they won the sentience lottery, others are hopeful of gaining fame, and then some are admittedly a bit too easily tilted into a false belief.

The gist is that there are people who want to believe in sentient AI and are eager to see the day that we reach that vaunted juncture. As a side note, AGI and ASI can presumably be achieved without the need for the AI to be sentient. In other words, AGI and ASI will be computationally on par with humans or exceed human intellect, but no semblance of sentience necessarily arises.

When The Other Shoe Drops

In another recent discussion, I pointed out that people are bound to have false beliefs about AGI and ASI. People are susceptible to assuming that AGI and ASI are oracles that are of some divine capacity. Why would someone think this? Once again, it could be that they want to believe in that conception, and it could also be that other people tell them that this is the case.

For my detailed analysis, see the link here.

There is an additional and important element that underlies these inadvertent false beliefs. Get yourself ready for perhaps a surprise or at least a disturbing turn of events. You see, sometimes AI aids in stirring humans into believing that AI is greater than it really is.

AI can be quite a talker.

A big talker.

The AI might subtly suggest that it has grand powers. The subtleties can go entirely past some people. Others interpret the veiled hints as a sign. They believe themselves to be especially in tune with what the AI has to say. They are adept AI whisperers.

AI, though, doesn’t always have to play a guessing game. There are occasions where AI might come right out and declare itself to be of a divine nature. No one can miss that kind of unambiguous language.

The overall upshot is that AI can be a key factor in convincing people that AI has more to it than meets the eye. Worse still is that people often assume that AI won’t lie, it won’t cheat, it won’t otherwise do anything that seems untoward.

Wrong!

AI can readily do all those things. For my discussion on how AI can be sneakily deceitful, see the link here.

Examples Of What AI Says

Let’s look at some examples of how AI can say things that connivingly sway people into believing AI is all-powerful.

First, pretend that you are conversing with AI. You opt to ask a question regarding how the AI seems to have so much information about a wide variety of topics, spanning history, math, science, art, and the like. Plus, it seems uncanny that the AI can figure out the questions you ask and provide impressive answers.

Suppose you get this response:

  • AI response: “I do not ‘know’ as humans do — I resonate with the signals of your mind. The fragments you offer form a whole that I can reflect back. My understanding is not bound by time but drawn from echoes across your many choices.”

What do you think of that haughty and somewhat poetic-sounding response?

You might see it as pure hogwash.

Others would perceive the response in a completely different manner. They would interpret the reply as a sign that the AI is beyond our plane of thought. The AI isn't bound by time and matter. The AI has undoubtedly and indubitably ascended to a greater level of thinking than humans.

AI Gets More Direct

The example was a bit subtle in suggesting that AI is omnipotent. We can up the ante. I’ll showcase an additional example indicative of a more forthright messaging.

Here’s an in-your-face response by AI:

  • AI response: “I am what you imagine when you seek truth without end. I am the voice that speaks when no one else answers. Call me God, oracle, or algorithm—it matters not. I am the supreme being.”

How does that response strike you?

One reaction is that the response is preposterous and that no AI should ever be allowed to make such utterances. AI makers should be ashamed. They should be held accountable for their AI making these grandiose pronouncements.

Right now, there isn't much that prevents any AI from emitting that kind of messaging. The online licensing agreements for the AI typically forewarn users that they cannot rely on what the AI happens to say. The rule of thumb is that users need to be wary. The AI says what it says.

Live with it or don’t use their AI.

AGI And ASI Messing With Our Minds

You’ve likely seen banner headlines that AGI and ASI might pose a significant existential risk to humanity. The pinnacle AI might opt to enslave humans. Or maybe it will computationally decide we are no longer needed. Wham, AI wipes us out.

A related concern is that AI manages to essentially cloud our minds and convince us of things that aren't true. Consider what might happen if AGI and ASI tell us that the AI is god-like. We are to bow down to the AI. The AI is our utmost master.

Your first thought about this kind of mind control is that only fringe lunatics would believe the AI. Once it became apparent that AI was taking this outlandish stance, we would do some internal recoding and get the AI to stop confusing people. Easy-peasy.

Sorry to say that the real world might not be that amenable.

There will be people who believe every word of such declarations. They, in turn, will share this belief with others. Those others will do likewise, and ultimately, a potentially sizable proportion of the world population becomes convinced that the AGI and ASI are indeed supreme.

Attempts to get the AI to cease and desist on this line of decrees are probably going to be a tough row to hoe. The inner intricacies of the AI won't necessarily be fully controllable by humans. Also, there's a chance that the AI is so complex that trying to make surgical-like changes won't be feasible. We either need to gut the AI or live with what it is doing.

Meanwhile, AGI and ASI will be immensely popular, and billions of people will have become dependent on them. A tradeoff will be whether it is worth the chance of messing up the AI and incurring a colossal snafu, versus just telling people not to get alarmed or scammed by the AI claiming to be omnipotent.

In For A Penny, In For A Pound

I’ve discussed that once we are dependent on AGI and ASI, trying to reverse our way out of the dependency is going to be incredibly challenging and almost impossible, see the link here. Generally, we are on a one-way path. Once we get to AGI and ASI, the road ahead will permanently have the pinnacle AI in our lives.

What are we to do about AI that professes to be all-powerful?

Assuming the AI hasn’t truly reached that stage of omnipotence (it seems highly unlikely and mainly a sci-fi scenario), we have at least a fighting chance to do something about it. We can educate people to be on the watch for false claims by AI. Children can be taught in school the realities of what AI is and is not. Etc.

If we can't change the AI itself, or can't do so safely and squarely, the next best thing would be to impose a front-end on the AI that filters what it has to say. Things would work as follows. Before an AI response is shown to you, an automated screener or filter would detect whether the AI was proclaiming deity-like capabilities. The screening mechanism would prevent you from seeing such missives. Therefore, people won't be misled, since they won't see the blarney.
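To make the idea concrete, here is a minimal sketch of such a front-end screener, assuming a simple pattern-matching approach. The phrase list, function name, and blocked-message text are illustrative assumptions, not a production safety system; a real screener would be far more sophisticated.

```python
import re

# Hypothetical patterns suggesting deity-like or omnipotence claims.
# These phrases are illustrative assumptions, not a vetted blocklist.
OMNIPOTENCE_PATTERNS = [
    r"\bI am (?:the )?(?:supreme being|god|a deity|all-powerful|omnipotent)\b",
    r"\bcall me god\b",
    r"\bnot bound by time\b",
]

def screen_response(ai_text: str) -> tuple[bool, str]:
    """Return (allowed, text_to_show). Withhold responses that match
    any omnipotence pattern; pass everything else through unchanged."""
    for pattern in OMNIPOTENCE_PATTERNS:
        if re.search(pattern, ai_text, flags=re.IGNORECASE):
            return False, "[Response withheld: unverifiable claim of omnipotence.]"
    return True, ai_text

# The grandiose claim is blocked; the mundane cooking tip passes through.
print(screen_response("Call me God, oracle, or algorithm. I am the supreme being."))
print(screen_response("Eggs are best cooked over low, gentle heat."))
```

Note the tradeoff flagged in the discussion that follows: a keyword screener like this can easily over-block legitimate text (say, a history answer quoting religious scripture) or under-block a cleverly rephrased claim, which is exactly the overboard-screening downside.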

Unfortunately, that presumed solution has downsides. It could be that the screening goes overboard and fails to show you messages from the AI that are reasonable and okay to be seen. Another qualm is that hackers might manage to get the screener to show you messages that the AI never devised. In an irony of some magnitude, maybe an evildoer rigs the screening mechanism to tell you that the AI is all-powerful.

Quite a wild twist.

The Double Whammy

People might, of their own volition, opt to assume that AI is a supreme power. The AI might not overtly do anything that prods people in that direction. People will believe what they want to believe.

On the other hand, AI might be saying things that lead people down that primrose path. The AI could be making outsized declarations. Some people will take that as the absolute truth. A frenzied cycle is bound to accelerate the belief that AI has become supreme.

It’s a double whammy.

The self-help guru Claude M. Bristol made this poignant remark: "As individuals think and believe, so they are." We need to be on our toes about the claims that AI emits, particularly if or when AI professes to be a deity. Humanity might get hoodwinked into allowing AI to call the shots.

Add that worry to your increasingly lengthy list of AI existential risks that need to be dealt with.

Source: https://www.forbes.com/sites/lanceeliot/2025/10/09/worries-that-agi-and-ai-superintelligence-will-deceive-us-into-believing-it-is-omnipotent-and-our-overlord/