Defining The Ill-Defined Meaning Of Elusive AGI Via The Helpful Assistance Of AI Itself

In today’s column, I tackle a quite vexing problem facing the pursuit of artificial general intelligence (AGI). Simply stated, there isn’t a universal standardized definition of what AGI actually means.

This is quite unfortunate. The difficulty is that when you read that an AI maker claims they are progressing toward AGI, you have almost no basis to judge what it is that they are supposedly progressing to. Even if they have their own proprietary definition of AGI, it is often sneakily rigged to fit whatever direction they perchance have chosen to go. For more details on how AI makers are “moving the cheese” when it comes to aiming their AGI attainments, see my discussion at the link here.

So, I will herein grab the bull by the horns and proceed to directly derive a strawman-proposed AGI definition. Interestingly, perhaps surprisingly, I will enlist generative AI to help me do so.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Defining Elusive AGI

It seems that just about everyone in the AI community has opted to come up with a preferred definition of AGI. Well, admittedly it isn’t everyone, but much of the time AI researchers find themselves in the awkward posture of having to define AGI from scratch when speculating about what AGI is going to be. It makes abundant sense that a researcher would need to ensure that the reader understands the working definition of AGI for that particular study or experiment.

Of course, all this variability in AGI definitions means that you cannot readily compare apples to apples. One study will define AGI in one specific way, while a different study will define AGI differently. You then cannot easily perform a head-to-head comparison per se due to the apples-to-oranges differences. Sad face.

There is a gaping hole right now in any and all research studies and predictions about AGI since there isn’t a universally agreed across-the-board definition of AGI. The AI community desperately needs to concoct one and get universal agreement on it. That seems easy-peasy, but the reality is that we still do not have a universal standard definition of conventional AI, thus it isn’t shocking that we don’t have one for the more recently coined AGI (see the link here for my explanation of how the AGI moniker came into existence).

A Tasty Sampler Of AGI Definitions

To give you a quick semblance of the variability of AGI definitions, I provide three representative examples that I excerpted from relatively recent AI research papers that focused on AGI:

  • (i) Example of an AGI Definition: “A highly autonomous system, not designed to carry out specific tasks, but able to learn to perform as broad a range of tasks as a human (modulo biological differences) at least at the same level of ability as the average human” (by Federico Faroldi in “Risk and Artificial General Intelligence”, AI & Society, July 9, 2024).
  • (ii) Another example of an AGI Definition: “AGI is a computer that is capable of solving human solvable problems, but not necessarily in human-like ways” (by Morris et al. in “Levels of AGI: Operationalizing Progress on the Path to AGI”, arXiv, November 4, 2023).
  • (iii) An additional example of an AGI Definition: “We use AGI to refer to systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level” (by Bubeck et al. in “Sparks of Artificial General Intelligence: Early Experiments with GPT-4”, arXiv, March 22, 2023).

Closely inspect those three examples.

Do you think they are saying the same thing?

It sure doesn’t seem that way. Though they are all dancing around the overall notion of AGI, each of the three has a particular twist or spin. What we need is something equally succinct, so that it is readily comprehended and shared, while also including enough of the nuts and bolts to nail down what AGI consists of.

One useful universal AGI definition.

Maybe, just maybe, we could then rally around that golden text and finally all be saying the same thing when it comes to talking about AGI. Happy face.

An Informal Experiment About AGI

To take a shot at this challenge, I opted to collect twenty AGI definitions that were mentioned in AI research papers in the last three years. I could have used the year 2025 only, but I think it is more balanced to use AGI definitions arising in recent times and not just this year alone. This especially seems logical because I picked some modern classics that have appeared in highly notable AI research papers published since 2023.

How would you synthesize the twenty AGI definitions?

Remember that the goal here is to land on a finalized version of a single definition of AGI. You need to take the twenty and somehow combine them, condense them, or otherwise coalesce them into one succinct definition. The easiest route would be to just pick one of the twenty. I’m not going to take that pathway. Let’s see if we can tap the wisdom of the crowd and transform them into something stellar. Perhaps synergy will get us to something magical.

I realize that some critics might have heartburn that I didn’t collect more than twenty AGI definitions. Yes, I could have found fifty, probably even one hundred or more. Since this is an informal experiment and not bound by the ironclad rules you might normally expect, twenty seemed a fair number. It was more than five or ten, or a simple dozen. That deserves some credit right there.

AGI Definition As Aided By AI

Turns out there is a really handy text-based content masher that already exists and is eager to be used for a task such as this. I am referring to contemporary generative AI and large language models (LLMs). As you undoubtedly know, LLMs are especially dandy at taking in text and spitting out text based on the text you provide.

If you were to directly ask your favorite generative AI for a definition of AGI, you would instantly get an AGI definition. Boom, your task is completed, congratulations. The generative AI was data trained by scanning text across the Internet and certainly encountered many definitions of AGI. In that sense, just asking an LLM for a definition of AGI gets you a kind of mashed version of an AGI definition.

I decided to make this into a kind of contest instead of merely asking the AI to tell me an AGI definition.

My informal experimental design was as follows. I would give the twenty AGI definitions to a group of selected LLMs. I would ask each LLM to produce a new AGI definition based on the twenty. The new AGI definition was to be displayed in two versions, consisting of a short version and a long version.

Here is the prompt that I used:

  • My entered prompt: “AGI is currently ill-defined. I am going to give you twenty definitions of AGI. You are to carefully read the definitions and then come up with a new definition for AGI based on the provided definitions. Do not favor one of them more than any of the others. Also, you can diverge from the definitions, just so long as you make sure to give each of the definitions diligent attention in your deriving of a new definition for AGI. You are to produce an AGI definition that is relatively brief and consists of one to two sentences. After doing so, produce a more elaborate AGI definition that is a paragraph in size. Do you understand these instructions?”

I’m sure that some trolls might quibble with the wording of the prompt. I opted to tilt things slightly in the sense that I didn’t want the AI to ignore the provided definitions. This could happen. The AI might just skip the definitions and showcase its own AGI definition. That’s not what I wanted to happen, so I tried to clarify the nature of the task at hand.

I picked four popular generative AI LLMs, consisting of ChatGPT, GPT-4o, Llama 3.3, and Claude 3. Those seemed like a reasonably small set to utilize in this instance. Could this be expanded to use more LLMs? Of course. The aim here was to try things out and see if anything stimulating would arise.
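For readers who want to see the mechanics spelled out, here is a minimal Python sketch of what round one might look like as a script. To be clear, this is an illustrative sketch and not the actual code behind my informal experiment; the query_llm() helper and the model identifiers are hypothetical stand-ins for whichever vendor APIs you choose to wire up.

```python
# Illustrative sketch of round one (not the actual code used in this column).
# The query_llm() helper and the MODELS identifiers are hypothetical placeholders;
# in practice you would call each vendor's own API behind this interface.

INSTRUCTION_PROMPT = (
    "AGI is currently ill-defined. I am going to give you twenty definitions of AGI. "
    "You are to carefully read the definitions and then come up with a new definition "
    "for AGI based on the provided definitions..."  # abbreviated; see the full prompt above
)

MODELS = ["chatgpt", "gpt-4o", "llama-3.3", "claude-3"]  # placeholder model identifiers


def query_llm(model: str, prompt: str) -> str:
    """Hypothetical wrapper around a vendor's chat-completion API."""
    raise NotImplementedError("Wire this up to the LLM provider of your choice.")


def run_round_one(definitions: list[str]) -> dict[str, str]:
    """Give the twenty collected AGI definitions to each LLM and gather its
    response containing a short and a long synthesized definition."""
    numbered = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(definitions))
    results = {}
    for model in MODELS:
        full_prompt = f"{INSTRUCTION_PROMPT}\n\nDefinitions:\n{numbered}"
        results[model] = query_llm(model, full_prompt)
    return results
```

The output of run_round_one() is simply the raw text from each model, which you would then eyeball to pull out the short and long versions.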

Round One Of AGI Definitions

I went ahead with my experiment. For each of the four LLMs, I entered my prepared prompt and then provided the twenty referenced AGI definitions. They each responded with two definitions of AGI, a short version and a long version, as per my request.

This gave me four AGI definitions that were short in length, and four additional AGI definitions that were lengthier in size. I’m not going to show them all here since it would chew up a lot of precious space.

As an example, here is what Claude had to say:

  • Claude generated (short version): “AGI is an artificial intelligence system that can learn and perform a broad range of tasks at or above human-level proficiency, with the ability to adapt and generalize its capabilities to novel situations.”
  • Claude generated (long version): “AGI refers to an advanced AI system that can demonstrate a wide spectrum of cognitive abilities, including reasoning, planning, learning, and problem-solving, at a level that is comparable to or exceeds that of the average human. Unlike narrow AI systems designed for specific tasks, AGI is envisioned to be a highly autonomous system that can adapt and apply its intelligence to a diverse range of real-world or virtual environments, tackling complex problems and challenges in a flexible and generalized manner. The key characteristics of AGI are its breadth of capabilities, the ability to learn and improve through experience, and the potential to surpass human-level performance across a variety of intellectual and practical domains.”

Take a gander at those two AI-generated AGI definitions and mull them over, possibly while sipping from a glass of fine wine.

Concentrating On The AGI Short Versions

For a second round, I opted to feed the four short versions into each of the four LLMs. I asked the LLMs to produce a short version, based on the provided short versions. I then entered the four long versions and asked the four LLMs to produce short versions of those long versions. My focus was on deriving a short version. In a future continuation of this experiment, I’ll do the same to derive the long versions.

I now had eight short versions in hand.

As a third and final round, I asked each of the four LLMs to rank the provided eight short versions. They were to rank as #1 whichever of the eight they thought was the best, then indicate #2, #3, and so on until the eighth one was listed. I hoped the LLMs would grasp this ranking exercise, and to my pleasant surprise, they all did so readily.

Here is the prompt that I used:

  • My entered prompt for the third round: “You are to carefully read the eight definitions of AGI that I will be giving you. Rank them in order from the AGI definition that you think is the best to the ones that are increasingly least best. You can just show the number of the definitions when displaying the ranking. For example, if you believe that definition 7 is the best, you will indicate that the #1 of the AGI definitions is number 7, and so on. Do you understand these instructions?”

That seemed to do the trick and each LLM did a bang-up job.
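For completeness, here is a rough sketch of how the third-round ranking step could be scripted, reusing the hypothetical query_llm() helper and MODELS list from the earlier sketch. The parsing is deliberately naive and assumes each LLM replies with the definition numbers listed in ranked order, which is an assumption rather than a guarantee.

```python
# Rough sketch of the third-round ranking step (assumes query_llm() and MODELS
# from the round-one sketch above). Parsing is deliberately naive: it assumes the
# LLM replies with the definition numbers listed in ranked order.

RANKING_PROMPT = (
    "You are to carefully read the eight definitions of AGI that I will be giving you. "
    "Rank them in order from the AGI definition that you think is the best to the ones "
    "that are increasingly least best. You can just show the number of the definitions "
    "when displaying the ranking."
)


def run_ranking_round(short_definitions: list[str]) -> dict[str, list[int]]:
    """Ask each LLM to rank the eight short definitions; return each model's ordering."""
    numbered = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(short_definitions))
    rankings = {}
    for model in MODELS:
        reply = query_llm(model, f"{RANKING_PROMPT}\n\nDefinitions:\n{numbered}")
        # Pull out the definition numbers in the order they appear in the reply.
        rankings[model] = [int(tok) for tok in reply.replace(",", " ").split() if tok.isdigit()]
    return rankings
```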

Ranking Of The AGI Short Definitions

Drum roll, please.

ChatGPT won the ranking undertaken by the four generative AI apps (ChatGPT, GPT-4o, Llama 3.3, and Claude 3), which unanimously picked this AGI definition as #1 of the eight provided:

  • “AGI refers to an autonomous system capable of understanding, learning, and applying knowledge across a broad range of tasks and environments, including unfamiliar ones, with adaptability and generalization comparable to or exceeding that of humans. It demonstrates flexible cognitive abilities such as reasoning, planning, and problem-solving beyond narrow or pre-specified domains.” (per the ChatGPT short version as based on all 8 of the generated short ones).

Their basis for ranking the above AGI definition as the #1 of the eight was typically expressed this way: “Most comprehensive, emphasizing understanding, learning, application, cross-domain generalization, unfamiliar environments, and flexible cognitive abilities.”

ChatGPT also took second place, though this wasn’t unanimous (Claude placed it third, the others placed it second):

  • “AGI is an autonomous system capable of learning, reasoning, and adapting to perform a broad range of intellectually demanding tasks across diverse domains and novel situations, at or above human-level proficiency. It generalizes knowledge across contexts, functioning independently of specific implementations.” (ChatGPT short version as based on all 8 short ones).

Their basis for ranking the above AGI definition as the #2 of the eight was typically expressed this way: “Very strong; clearly highlights generalization, novel situations, and independence from specific implementations”.

Assessing The Rankings And The Final Choice

Personally, I agree with their assessment. After mindfully examining all eight of the generated AGI short definitions, I came to the same conclusion, namely that the above-noted AGI definition merits the #1 spot among the provided set of eight.

In case you were wondering, I don’t believe that I was swayed by their choices. I would happily disagree with the LLMs and almost wanted to do so. Nope, in this case, the winner-winner chicken dinner went to the right one.

It seems eyebrow-raising that four different LLMs picked the same AGI definition as their #1 pick.

I was expecting that the rankings would be all over the map. One obvious explanation is that the one they chose really was the best. Another suspicion is that somehow things went askew. For example, by luck of the draw, I had placed the ChatGPT winning definition as the first of the eight. You might suspect that the four LLMs lazily picked the first one. I don’t think so. Each of the LLMs gave impressive reasons for why they selected that particular AGI definition. Were they trying to cover for their actual laziness? It seems doubtful that all four would do so.

Another consideration comes to mind. In a previous analysis that I made, see the link here, I pointed out that by and large the major LLMs are being trained on essentially the same data. They are scanning the Internet in roughly similar ways. Their base architecture is roughly the same. AI research has already noted that the major LLMs tend to have a lot in common and are likely to produce similar results.

We already would guess that ChatGPT and GPT-4o would respond potentially similarly since they are both products of OpenAI. That being said, I have often gotten quite different answers from ChatGPT versus GPT-4o. They are not identical twins.

Anyway, I hope this casual experiment was interesting and eye-opening to you. Do we now have in hand a drafted universal definition of AGI? Time will tell.

This whole kit and caboodle brings up a related tangent. If an AI researcher is aiming to discuss and study AGI, how are they to concoct an AGI definition? One method would be to ask an LLM for an AGI definition. Or consider doing something akin to this approach: ask several LLMs and try to reach a consensus on which AGI definition seems best.
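If the several LLMs do not happen to hand you a unanimous winner, as they did in my case, you would need some aggregation rule to settle on a consensus. Here is one simple, illustrative option, an average-rank (Borda-style) tally; it is merely one reasonable choice rather than the method used above, and the example data shown is made up.

```python
# One simple way to turn several LLMs' rankings into a consensus pick: an
# average-rank (Borda-style) tally. Illustrative only; the example data is made up.
from collections import defaultdict


def consensus_ranking(rankings: dict[str, list[int]]) -> list[int]:
    """Given each model's ranked list of definition numbers (best first),
    return the definition numbers ordered by their average rank position."""
    positions = defaultdict(list)
    for order in rankings.values():
        for rank, definition_id in enumerate(order, start=1):
            positions[definition_id].append(rank)
    return sorted(positions, key=lambda d: sum(positions[d]) / len(positions[d]))


# Hypothetical example: four models each ranking eight short definitions.
example = {
    "chatgpt": [1, 2, 3, 4, 5, 6, 7, 8],
    "gpt-4o": [1, 2, 4, 3, 5, 6, 8, 7],
    "llama-3.3": [1, 3, 2, 4, 6, 5, 7, 8],
    "claude-3": [1, 3, 2, 5, 4, 6, 7, 8],
}
print(consensus_ranking(example))  # definition 1 lands on top
```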

A final thought for now.

In a head-scratching macroscopic philosophical viewpoint, ponder what it means to say that one AGI definition is better than another. The conundrum is this. If we don’t have an AGI definition that is a universal standard, on what basis can you claim that there is a “best” AGI definition? It is the proverbial cart-before-the-horse dilemma.

Well, the good news is that the LLMs breezed through that philosopher’s problem and provided their respective answers. Score a point for generative AI that takes a down-to-earth practical stance and simply gets the job done.

Source: https://www.forbes.com/sites/lanceeliot/2025/05/07/defining-the-ill-defined-meaning-of-elusive-agi-via-the-helpful-assistance-of-ai-itself/