More Humanlike Means Less Autonomous

The AI executives are at it again, promising human-level machines in the near future. In Davos, the CEOs of Google DeepMind and Anthropic each doubled down on the near-term arrival of artificial general intelligence – the hypothetical capacity for a machine to do most anything a human can – giving it 50% odds of arriving by 2030 and expecting it to arrive this year or next, respectively.

Is AI overpromised? Will the hype cost us dearly when the widespread narrative is recognized as overzealous and disillusionment sets in? Or is human-level machine intelligence around the corner?

As hard as I’ve argued against the AI hype, I have to admit that it’s a religious debate. We’re not approaching consensus. There will always be a contingent who believes even AI’s most grandiose promises.

Yet there’s a lot to be gained by clarifying what it is that we’re arguing about. After all, business leaders and investors need to understand exactly what they’re betting on as they struggle to pursue sound strategies rather than wishful thinking.

To Hype AI Is To Promise Extraordinary Machine Autonomy

The question of whether the AI hype overpromises is a question of goodness: Will AI soon become as good as promised?

But that opens a can of worms: How do we measure goodness? The most obvious answer is intelligence. The more intelligent, the better. Pursuing intelligence has won the day in the public’s eye. After all, the notion that’s making the world salivate is called artificial intelligence.

But “intelligence” does not represent a viable yardstick. It’s subjective. How could we know when it’s been achieved – or even when there’s been progress toward it? Any test designed to measure “intelligence” only diminishes it, because it assesses just a narrow capability.

On the other hand, the most grandiose AI goal is, ironically, easier to define – albeit still unmeasurable. Artificial general intelligence simply represents the whole enchilada. An AGI system would effectively be a “virtual human.” This notion is defined in terms of what humans can do, rather than in terms of a subjective quality that humans hold, intelligence.

AGI would mean supreme autonomy – by definition. Since it could do everything humans can, we would need no human in the loop. I have argued that AGI is not a feasible goal for the foreseeable future – that we are not even making concrete headway toward it. But even if you believed that research was making viable progress, there would be practical challenges to measuring it. How do you prove a machine can run a large company for years or fully educate a child without giving it a try on such tasks?

Instead, let’s get concrete and realistic: The suitable benchmark is autonomy. Rather than asking whether a system seems to exhibit “intelligence,” or whether it is headed toward wholesale human-level capabilities, ask how autonomous it is. How much work can it automate? Or, to what degree does it fall short of autonomy, requiring that humans remain in the loop?

Autonomy is a measurable criterion that reflects AI goodness. It represents the value of a system, since automation is the goal – of any machine. Machines exist to do things that would otherwise need to be done by humans. That’s why we build them. The more autonomous, the more potentially valuable.

AI hype promises unrealistic autonomy. By viewing AI goodness as its degree of potential autonomy, we can identify an AI promise as hype when it promises infeasible autonomy. For example, the story of near-term AGI represents the epitome of AI hype, since it promises supreme autonomy. The ill-defined buzzword agentic AI is also generally guilty of promising unrealistic autonomy.

Predictive AI Is More Autonomous Than Generative AI

With enterprise applications of generative AI, such as providing strategic advice or helping write marketing creatives or computer code, you generally need a human in the loop reviewing each output – every assertion, suggestion, inference, statement, segment of computer code and draft document that it generates. GenAI positions itself to take on consequential human tasks, activities that attract scrutiny because the computer would need to perform at a very high level to operate without constant human supervision.
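To make that review burden concrete, here is a minimal sketch of the workflow, assuming a hypothetical generate_draft() stand-in for a generative model and a human_approves() gate – placeholder names for illustration, not any particular vendor’s API:

```python
# Minimal sketch of a human-in-the-loop genAI workflow (hypothetical
# placeholders, not a real API). Every generated output passes through
# a human gate before it is used.

def generate_draft(prompt: str) -> str:
    # Stand-in for a call to a generative model.
    return f"Draft text responding to: {prompt}"

def human_approves(draft: str) -> bool:
    # Stand-in for a person reading and signing off on the draft.
    print("REVIEW NEEDED:\n", draft)
    return input("Approve? (y/n) ").strip().lower() == "y"

def produce_copy(prompt: str) -> str:
    draft = generate_draft(prompt)
    while not human_approves(draft):      # a human reviews every output
        draft = generate_draft(prompt + " (revised)")
    return draft
```

The loop is the point: each output waits on a person, so throughput is bounded by human attention, not by the model.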

In contrast, by taking on tasks that are more forgiving, many predictive AI projects can capture the immense value of full autonomy across the largest-scale operational functions. Bank systems instantly decide whether to allow a credit card charge. Websites instantly decide which ad to display, and marketing systems make a million yes/no decisions as to who gets contacted. So do the analytics systems of political campaigns. E-commerce sets the price for each purchase, from flights to flashlights. Safety systems decide which bridge, manhole and restaurant to inspect. No human is in the loop for those specific decision-making steps.
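Operational decisions like these are typically made by thresholding a predicted probability, with no person reviewing individual cases. A minimal sketch, with a made-up model and threshold rather than any real bank’s system:

```python
# Minimal sketch of fully autonomous predictive decisioning (illustrative
# only; the scoring rule and threshold are invented). Each transaction is
# scored and the decision is applied instantly, with no human in the loop.

def fraud_probability(transaction: dict) -> float:
    # Stand-in for a trained predictive model's risk score.
    return 0.9 if transaction["amount"] > 5000 else 0.02

APPROVAL_THRESHOLD = 0.5  # decline when predicted fraud risk exceeds this

def decide(transaction: dict) -> str:
    return "decline" if fraud_probability(transaction) > APPROVAL_THRESHOLD else "approve"

# Millions of such decisions can run per day with no per-case review.
decisions = [decide(t) for t in [{"amount": 42.0}, {"amount": 9000.0}]]
print(decisions)  # ['approve', 'decline']
```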

This is The AI Paradox: Even though genAI seems so humanlike, precisely because it is meant to take on human tasks, it generally demands human supervision at each step and for each output. Ironically, this means genAI is less potentially autonomous than predictive AI.

Recognizing this paradox could reorient many decision makers. People get excited over genAI because it is so humanlike and advanced. GenAI’s extraordinary capabilities are unprecedented, so it does indeed present many new valuable propositions. But if value excites you more than sexiness – if initiatives that would deliver the greatest improvements in enterprise efficiency are your goal – then you should bump predictive AI projects far up your priority list, placing them at least as high as most genAI initiatives for the foreseeable future.

Source: https://www.forbes.com/sites/ericsiegel/2026/01/26/the-ai-paradox-more-humanlike-means-less-autonomous/