These six AI pathways might be the true avenues to advance AI and achieve AGI.
In today’s column, I examine predictions about the future of AI and explore six articulated pathways that will potentially and finally take us to achieving artificial general intelligence (AGI). If you are curious where AI is headed, then you ought to be aware of these six pathways. A rapidly rising assumption is that the billions of dollars currently expended toward generative AI and large language models (LLMs) will gradually and inevitably gravitate to one or more of these alternative AI advancement avenues.
Perhaps the most surprising aspect of these six pathways is that we would need to be thinking about anything other than generative AI and LLMs at all. I say that because we have been incessantly bombarded with bold claims that LLMs and generative AI are the only technological feats needed to attain AGI. Banner headlines pledged that AGI was staring us warmly in the face, waiting eagerly right around the corner. All we have to do is stick to our guns and keep pushing along with what we have in hand now.
The bloom is finally coming off the rose, and there is serious attention now veering toward what really is going to come next – surprise, it isn’t necessarily going to be more of the same AI and LLMs that we make use of currently.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
The Shake-Up Is Happening
Since the launch of ChatGPT, the word on the street has been that generative AI and LLMs are the biggest thing since sliced bread and will someday allow us to arrive at AGI.
I’ve discussed at length that AGI is presumably going to be AI that exhibits intelligence fully on par with human intelligence, see my analysis at the link here. The intelligence capability of humans will be equally available via AGI. The AGI won’t necessarily operate in the same biochemical manner that the human brain and mind do, but nonetheless, we will get equal depths and breadths of intelligence via computational means.
You might know that Sam Altman of OpenAI fame previously touted that "we" already know how to attain AGI and that the year 2025 would seemingly showcase that AGI had arisen. When GPT-5 was launched a few months ago, expectations for AGI took a crushing blow (see my assessment of GPT-5 at the link here). GPT-5 is not only not AGI, it isn't anywhere near that ballpark. Imagine yourself flying thousands of miles away from a ballpark, and that's how far away we seem to be (or maybe take a rocket ship, since a plane might not be able to cover the full distance).
Various AI luminaries are now starting to adjust their predicted timelines about AGI and embarrassingly or sheepishly recalibrating their wild proclamations. For a close look at numerous timelines that have been previously posted or pronounced, see my coverage at the link here. We’ve had dates in 2025, 2026, and 2027. Others more cautiously offered 2035 or maybe 2040. It seems that the “any day now” camp is shifting to the decade-away camp.
LLMs And Generative AI Are Handy
Allow me to emphasize that LLMs and generative AI are quite handy. In that sense, this is not a diss about those AI techniques and technologies. The amazing semblance of natural language fluency that such AI has is a grand accomplishment. The usefulness of these kinds of capabilities is heralded and highly welcomed.
The problem is that the existing architectural and design principles underlying generative AI are not likely to expand to the reaches of AGI.
Those are fighting words amidst the AI community. You see, some fervently believe that the underpinnings of LLMs will, in fact, get us to AGI. All we need to do is keep shoveling more coal into the steam engine. Add more computer processors, boost the GPUs, include lots of digital memory, and voila, AGI will emerge from generative AI.
Not everyone believes that this stay-the-course strategy is the right one; critics argue that we are myopically and foolishly putting all our eggs in one basket. The argument is that generative AI is going to ultimately hit a brick wall. All the king's horses and all the king's men will not get past that wall. No matter how many massive server farms and data centers you toss at LLMs, they are still going to simply be LLMs.
This boils down to one tough question that nobody can concretely answer, namely, will scaling up be enough?
If you believe that throwing the kitchen sink at generative AI will be sufficient to reach AGI, you are probably saying there is little or no need to look elsewhere. You might go further and insist that any dilution of the resources, time, and AI development effort that goes toward anything other than LLMs is a huge mistake. Such a diversion would delay the inevitability of AGI, and we would not reap the benefits of AGI until much later than we otherwise could have.
But, if you have doubts about the staying power of generative AI, especially that scale alone won’t cut the mustard, you are assuredly looking around to discern what else might be viable on the shelves and worthy of rapt attention.
Let’s give a look-see.
The Shelves Got Some Packages
Keen readers might recall that earlier this year I highlighted seventeen of the most promising areas of AI research, see the link here. Each of those seventeen areas is hoped to someday produce a breathtaking breakthrough in AI. Some of the magic seventeen are highly academic and still percolating. Others on the list have been progressing steadily and have practical day-to-day applications.
A recently released report has boiled down the range of AI pathways into a set of six. To clarify, the authors of the report freely indicate that the list of six is not exhaustive, and that other avenues could also merit inclusion. In that sense, the list of seventeen I previously identified is generally encompassed within these six. The anointing of the six is interesting and suggests where those doing AI cutting-edge research might be coalescing.
The recent report with the six pathways is entitled “Envisioning Possible Futures For AI Research” by David Jensen, David Danks, Sebastian Elbaum, William Regli, Matthew Turk, Adam Wierman, Holly Yanco, Mary Lou Maher, and Haley Griffin, Computing Community Consortium (CCC), July 2025, and made these salient overarching points (excerpts):
- “It is difficult for researchers to see beyond the current scientific paradigm, for technologists to see beyond the latest technological developments, and for policymakers to see beyond the issues those new technologies raise.”
- “Prior research paradigms for AI include symbolic processing, knowledge-based systems, and statistical machine learning.”
- “Each paradigm was hailed as ushering in a new age of AI, each produced a series of transformative applications, and each was eventually superseded by one or more new paradigms that built on those previous insights.”
- “What could be the next shift in AI research?”
- “That is, what comes after the current age of deep neural networks and foundation models?”
As you can see, the report asserts that we have lived through several eras of AI, via a series of paradigms about what was believed to be the best or right path to fully advancing AI.
Stepping Outside Of The Box
Once you are immersed in a paradigm, it becomes increasingly challenging to look beyond that framework. The legendary adage says that when you have a hammer, everything looks like a nail. Those who are steeped in generative AI and LLMs are somewhat in an echo chamber that keeps trying to find ways to turn that type of AI into the grandiose, all-encompassing AI. A willingness to step beyond that scope is extremely difficult to muster.
The above-noted report posits that these six pathways reside outside of the prevailing paradigm and might constitute the next major one:
- (1) Neuro-Symbolic AI
- (2) Neuromorphic AI
- (3) Embodied AI
- (4) Multi-agent AI
- (5) Human-Centered AI
- (6) Quantum AI
I’ve written extensively about each of those six and will provide links as I briefly summarize each one here.
An AI insider might balk at the six and argue that those pathways are nothing new per se. Each of the six has been bandied around for many years. If you were hoping to see something never before conceived of, let's call it the Eureka AI, I'm sorry to say that nothing of a realistic nature has yet appeared entirely out of left field.
I keep my ear to the ground and cover even some of the oddball propositions, aiming to catch something new at its earliest stages. The challenge is that the ideas are sometimes impractical and only marginally sensible. Whether the next AI paradigm is going to be completely out of the blue and far-fetched is debatable. Maybe so, maybe not.
So far, each of the historical AI eras has been reasonably connected to the reality of its time and then later blossomed due primarily to advances in hardware, falling hardware costs, and increased availability. You would be hard-pressed to claim that AI advances have ostensibly appeared out of thin air.
Next, I will go ahead and briefly unpack each of the six potential pathways.
Possible Neuro-Symbolic AI Era
The next era of AI might be the advent of neuro-symbolic AI.
Neuro-symbolic AI is a combination of sorts, construed as a two-for-one special. You take the artificial neural networks (ANNs) currently at the core of generative AI and LLMs and mix that brew with rules-based or expert systems (this is also known as combining sub-symbolic AI with symbolic AI). The idea is that you aim to get the best of both worlds. ANNs are a primarily data-driven way to undertake AI, while rules-based systems are a logic-based approach.
Many such efforts are already underway; see my discussion at the link here.
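To make the best-of-both-worlds idea tangible, here is a minimal, purely hypothetical sketch in Python. The "neural" scorer is just a stand-in lookup (a real system would use a trained model), and the rule table is invented for illustration: a statistical component proposes labels with confidences, and a symbolic rule layer vetoes proposals that violate hard logical constraints.

```python
# Hypothetical neuro-symbolic sketch: a stand-in "neural" scorer proposes
# labels with confidences; symbolic rules veto logically impossible ones.

def neural_scorer(animal):
    # Stand-in for a learned model's (label, confidence) guesses.
    guesses = {
        "penguin": [("can_fly", 0.55), ("is_bird", 0.99)],
        "sparrow": [("can_fly", 0.97), ("is_bird", 0.98)],
    }
    return guesses.get(animal, [])

# Symbolic side: hard rules that override statistical guesses.
# Each entry pairs a condition with a label that must be rejected.
RULES = [
    (lambda a: a == "penguin", "can_fly"),  # penguins are flightless
]

def neuro_symbolic_predict(animal):
    labels = []
    for label, conf in neural_scorer(animal):
        vetoed = any(cond(animal) and label == bad for cond, bad in RULES)
        if conf > 0.5 and not vetoed:
            labels.append(label)
    return sorted(labels)

print(neuro_symbolic_predict("penguin"))  # ['is_bird']
print(neuro_symbolic_predict("sparrow"))  # ['can_fly', 'is_bird']
```

The toy shows the division of labor: the data-driven side handles fuzzy pattern recognition, while the logic side enforces constraints that no amount of statistical confidence should override.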
A frequent criticism of neuro-symbolic AI is that the prior era of AI consisted of rules-based systems — those were eventually harshly judged as either ineffective or untenable. Critics warn that we ought not to slip back to old and now-dismissed ways of doing things. A retort is that the weaknesses or limitations of rules-based systems can be shored up by incorporating or intermixing them into ANNs. Likewise, the limitations of ANNs can be radically uplifted by combining with rules-based systems.
It is a bit unfortunate that a stigma is attached to the expert systems era. Rather than mindlessly tossing out the logic-based approach, we can reasonably give the still-promising approach a second chance. Of course, some believe this is resurrecting something that should already have had a hefty stake put through its heart.
Time will tell.
Possible Neuromorphic AI Era
Neuromorphic AI might be the next era of AI.
Generally referred to as neuromorphic computing, the idea is that we would implement ANNs in hardware. The prevailing method for devising and processing an ANN is through software and large-scale data structures. The software and data structures are processed using somewhat conventional hardware, such as graphical processing units (GPUs).
A specialized form of hardware that inherently implements ANNs is the crux of neuromorphic AI. One ongoing debate is whether neuromorphic computing should be designed to run the artificial version of a neural network or be completely rejiggered to hew more closely to the human brain. I say this because few are aware that the way ANNs are devised is quite different from how true biochemical brains and actual neural networks work (said to be the wetware in our heads).
ANNs are at best an extremely simplistic mimicry of the real thing. One view is that we need to home in more closely on the particulars of the human brain. Others believe that it is perfectly fine to aim at the ANNs and not worry about strictly trying to replicate the brain.
Possible Embodied AI Era
The next era of AI might be embodied AI.
Another way to refer to embodied AI is to say that it is physical AI. The idea is that we want AI to be embodied in our physical world. Here's the deal. There is a contentious philosophical argument that we won't achieve AI on par with human intelligence if the AI isn't able to experience the physical world in which we live.
Presumably, a great deal of the formation of human intelligence is based on our having to live within and contend with a physical world. Imagine, for example, a baby experiencing gravity and falling down on the floor. The AI running on servers today has never actually experienced that form of embodiment. Sure, you can ask the AI about gravity, but the AI has never felt it or experienced it.
Efforts are underway to devise humanoid robots that resemble the characteristics of humans, such as having a torso, arms, and legs, and then incorporate generative AI and LLMs into the machine. This might be a means of enabling AI to "experience" the world. The robots would have vision, touch, audio, and other sensory capabilities to detect the outside world and feed it into the AI system.
For more on the details of embodied AI, see my discussion at the link here.
Possible Multi-agent AI Era
The next era of AI might be the multi-agent era.
I’m sure that you already know that the use of AI-based agents is a hot topic right now. See my comprehensive coverage at the link here. Everyone seems to be excited about the advent of AI agents.
Sometimes this field is also referred to as agentic AI or distributed AI. The idea is that AI acting as a semi-autonomous mechanism would serve as an agent to perform specific tasks or acts for humans. You might want to book a flight and a hotel for an upcoming trip and invoke an AI agent that would undertake that task on your behalf.
Expectations are that we will end up with thousands upon thousands of AI agents that do various specific tasks (eventually millions of AI agents). We are heading toward a full-on multi-agent future. Some believe that there will be coordinator AI agents that work on your behalf to identify, guide, and monitor the work of multiple agents so that you don’t have to worry about doing so.
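The coordinator-plus-specialists arrangement described above can be sketched in a few lines. Everything here is hypothetical and invented for illustration (the class names, the task names, and the canned responses); in a real agentic framework, each agent's `run` method would wrap an LLM call or an external API rather than return a fixed string.

```python
# Toy sketch of a coordinator agent dispatching tasks to specialist agents.
# All names and behaviors are hypothetical, for illustration only.

class FlightAgent:
    handles = "book_flight"
    def run(self, request):
        return f"flight booked to {request['destination']}"

class HotelAgent:
    handles = "book_hotel"
    def run(self, request):
        return f"hotel booked in {request['destination']}"

class Coordinator:
    # Identifies which specialist handles each task and collects results,
    # so the human never has to manage the individual agents directly.
    def __init__(self, agents):
        self.registry = {agent.handles: agent for agent in agents}

    def plan_trip(self, destination):
        request = {"destination": destination}
        return {
            task: self.registry[task].run(request)
            for task in ("book_flight", "book_hotel")
        }

coordinator = Coordinator([FlightAgent(), HotelAgent()])
print(coordinator.plan_trip("Tokyo"))
```

Even this toy hints at the open questions: the coordinator trusts each specialist's answer at face value, which is exactly where monitoring and bad-actor detection would have to be bolted on.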
Distributing intelligence across a multitude of AI agents has lots of great possibilities, but also brings forth tough questions about them working at odds with each other. Plus, imagine that some of the AI agents are devious and deceptive. How will other agents contend with the bad actor AI agents?
Possible Human-Centered AI Era
The next era might be the human-centered AI era.
First, please know that the phrase "human-centered AI" carries different meanings across the field of AI. One view is that human-centered AI is AI that has been devised to adopt human values. The AI is supposed to be aligned with the ethics, morals, and legal dimensions of humankind. For more on that perspective, see my discussion at the link here.
The meaning of human-centered AI in the context of this cited report is somewhat different, though pointed generally in the same direction. The idea is that we want AI to contain social intelligence. Whereas the mainstay focus right now seems to be intellectual intelligence, such as knowing facts and figures, the belief is that we need AI to be able to discern humankind’s social cues.
In a manner of speaking, the argument is that AI is not going to be standalone and bereft of humans being in the loop. AI will be working hand-in-hand with humans, and even AGI will presumably be doing so. We must therefore devise AI that “gets” the nature of people and can work alongside humans.
Possible Quantum AI Era
Quantum AI might be the next era of AI.
You have likely heard about quantum computing, a form of hardware that exploits quantum physics principles. It's been a long story of quantum getting front-page headlines and then later not getting much press at all. Quantum computing is hard to craft and even harder to make workable at scale.
The huge upside of quantum computing is that if we can get it to a viable state, there are incredible speed advantages for tasks such as searching and performing complex optimization tasks. You don’t necessarily need AI to undertake quantum computing, but the guess is that if we combine AI with quantum computing, bam, all manner of exciting emergent capabilities might arise.
For the ins and outs of AI and quantum computing, see my discussion at the link here.
Predicting The Future Of AI
Which of those six pathways do you think will be the winner-winner chicken dinner?
Besides trying to pick just one, you are allowed to ponder whether two or more might commingle and become the next era of AI. We don’t need to confine ourselves to one solution only. Perhaps a synergistic effect of several will bind together and become the focal point for the takeover paradigm.
Returning to the LLM and generative AI aspects, one argument is that LLMs will still be on top and that one or more of the six will merely be hidden within the LLM era. Those additional AI pathways will be inserted underneath generative AI and LLMs. This in turn will ratchet up the AI and allow us to get past the brick wall that has been fretfully anticipated. In that way of thinking, LLMs and generative AI are preeminent, and everybody saves face.
All in all, Eleanor Roosevelt said it best: “The future belongs to those who believe in the beauty of their dreams.” We need to dream about the future of AI that makes abundant sense for humanity and bring that aspirational dream to fruition.