Computing powerhouse Nvidia says its new Rubin platform can cut the cost of running advanced AI models, a claim that challenges crypto networks built to monetize scarce GPU compute.
Officially launched Monday at CES 2026, Rubin is Nvidia’s new computing architecture that improves the efficiency of training and running AI models. It is deployed as a system of six co-designed chips — branded under the Vera Rubin name in honor of the American astronomer Vera Florence Cooper Rubin — and is now in “full production,” Nvidia CEO Jensen Huang said.
For crypto projects built on the assumption that compute stays scarce, those efficiency gains could undermine the economics behind their models.
However, past improvements in computing efficiency have tended to increase demand rather than reduce it. Cheaper and more capable compute has repeatedly unlocked new workloads and use cases, pushing overall usage higher even as costs fell.
Some investors appear to be betting that dynamic still applies, with GPU-sharing tokens such as Render (RENDER), Akash (AKT) and Golem (GLM) rising more than 20% over the past week.
Most of Rubin’s efficiency gains are concentrated inside hyperscale data centers. That leaves blockchain-based compute networks competing for short-term jobs and workloads that fall outside the AI factories.
Why Render benefits when compute gets cheaper
One modern example of efficiency expanding demand is cloud computing. Cheaper and more flexible access to compute through providers like Amazon Web Services lowered barriers for developers and companies, leading to an explosion of new workloads that ultimately consumed more compute.
That runs counter to the intuitive assumption that efficiency should reduce demand. If each task requires fewer resources, fewer servers or GPUs should be needed.
In computing, that rarely happens. As costs fall, new users enter, existing users run more workloads, and entirely new applications become viable.
Related: Why crypto’s infrastructure hasn’t caught up with its ideals
In economics, this is known as the “Jevons Paradox,” described by William Stanley Jevons in his 1865 book, “The Coal Question.” The English economist observed that improvements in the efficiency of coal use did not reduce fuel consumption but instead drove more industrial consumption.
Applied to crypto-based compute networks, the same dynamic suggests demand can shift toward short-term, flexible workloads that do not fit long-term hyperscale contracts.
In practice, that leaves networks like Render, Akash and Golem competing on flexibility. Their value lies in aggregating idle or underused GPUs and routing short-lived jobs to where capacity happens to be available, a model that benefits from rising demand but does not depend on controlling the most advanced hardware.
Render is a decentralized GPU network where users can rent GPU power for compute-intensive tasks like 3D rendering, visual effects or even AI training, while Akash runs a broader decentralized cloud marketplace for compute. Both let users access GPU capacity without committing to dedicated infrastructure or hyperscale pricing models. Golem, for its part, operates as a decentralized marketplace for unused computing resources.
Decentralized GPU networks can deliver reliable performance for batch workloads, but they struggle to provide the predictability, tight synchronization and long-duration availability that hyperscalers are built to guarantee.
GPU scarcity expected throughout 2026
GPUs remain scarce because key components needed to build them are in short supply. High-bandwidth memory (HBM), a critical part of modern AI GPUs, is expected to be in shortage through at least 2026, according to components distributor Fusion Worldwide. Because HBM is required for training and running large AI models, shortages directly cap how many high-end GPUs can be shipped.
The constraint is coming from the very top of the semiconductor supply chain. SK Hynix and Micron, two of the world’s largest HBM producers, have both said their entire output for 2026 is already sold out, while Samsung has warned of double-digit price increases as demand outpaces supply.
Related: Bitcoin miners gambled on AI last year, and it paid off
Crypto miners were once blamed for driving GPU shortages, but today it is the AI boom that is straining the supply chain. Hyperscalers and AI labs are locking up multi-year allocations of memory, packaging and wafers to secure future capacity, leaving little slack elsewhere in the market.
That persistent scarcity is part of why decentralized compute markets can continue to exist. Render, Akash and Golem operate outside the hyperscale supply chain, aggregating underutilized GPUs and offering access on flexible, short-term terms.
They don’t solve supply shortages but provide alternative access for developers and workloads that cannot secure capacity inside tightly controlled AI data centers.
Bitcoin halvings push miners to AI
The AI boom is also reshaping the crypto mining industry, even as Bitcoin (BTC) mining economics shift every four years when halvings cut block rewards.
Several miners are reassessing what their infrastructure is best suited for. Large mining sites built around access to power, cooling and physical space closely resemble the requirements of modern AI data centers. As hyperscalers lock up much of the available GPU supply, those assets are becoming increasingly valuable for AI and high-performance computing workloads.
That shift is already visible. In November, Bitfarms announced plans to convert part of its Washington State mining facility into an AI and high-performance computing site designed to support Nvidia’s Vera Rubin systems, while several rivals have pivoted to AI since the last halving.
Nvidia’s Vera Rubin does not eliminate scarcity but makes hardware more productive inside hyperscale data centers, where access to GPUs, memory and networking is already tightly controlled. The supply constraints, particularly around HBM, are expected to remain throughout 2026.
For crypto, GPU scarcity creates space for decentralized compute networks to fill gaps in the market, serving workloads that cannot secure long-term contracts or dedicated capacity inside AI factories. These networks are not substitutes for hyperscale infrastructure but function as alternatives for short-term jobs and flexible compute access during the AI boom.
Magazine: Bitget’s Gracy Chen is looking for ‘entrepreneurs, not wantrepreneurs’