The L2 compromise is broken; it’s time for a new foundation

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

The second quarter of 2025 has been a reality check for blockchain scaling. As capital keeps pouring into rollups and sidechains, the cracks in the layer-2 model are widening. The original promise of L2s was simple: scale up L1s. Instead, the costs, delays, and fragmentation in liquidity and user experience keep stacking up.

Summary

  • L2s were meant to scale Ethereum, but they’ve introduced new problems while relying on centralized sequencers that can become single points of failure.
  • At their core, L2s handle sequencing and state computation, using Optimistic or ZK Rollups to settle on L1. Each comes with trade-offs: long finality in Optimistic Rollups and heavy computational costs in ZK Rollups.
  • Future efficiency lies in separating computation from verification — using centralized supercomputers for computation and decentralized networks for parallel verification, enabling scalability without sacrificing security.
  • The “total order” model of blockchains is outdated; moving toward local, account-based ordering can unlock massive parallelism, ending the “L2 compromise” and paving the way for a scalable, future-ready web3 foundation.

New projects, such as stablecoin payment systems, are starting to question the L2 paradigm: are L2s truly secure, or are their sequencers single points of failure and censorship? Often, they end up taking the pessimistic view that fragmentation is simply inevitable in web3.

Are we building the future on a solid foundation or a house of cards? L2s must face and answer these questions. After all, if Ethereum’s (ETH) base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it would be redundant. Countless rollups and sidechains were proposed as add-ons to mitigate the fundamental constraints of the underlying L1s. It’s a form of technical debt: a complex, fragmented workaround offloaded onto web3 users and developers.

To answer these questions, we need to deconstruct the L2 into its fundamental components. Doing so reveals a path toward a more robust and efficient design.

The anatomy of L2s

Structure determines function. It’s a basic principle in biology that also holds in computer systems. To decide the proper structure and architecture of L2s, we must examine their functions carefully. 

At its core, every L2 performs two critical functions: sequencing (ordering transactions) and computing and proving the new state. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. The batch is then executed, resulting in an updated state (e.g., new token balances). That state must be settled on the L1 for security, via Optimistic or ZK Rollups.
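As a minimal sketch of these two functions, consider the following TypeScript, with hypothetical types and a deliberately naive balance-transfer model (it mirrors no particular rollup’s actual API):

```typescript
type Tx = { from: string; to: string; amount: number };
type State = Map<string, number>; // account -> balance

// Function 1: sequencing. Collect and order a batch of user transactions.
// A trivial first-come-first-served order stands in for whatever policy
// the sequencer (centralized or decentralized) actually uses.
function sequence(mempool: Tx[]): Tx[] {
  return [...mempool]; // real sequencers may reorder, batch, or filter
}

// Function 2: computing the new state from the ordered batch.
function execute(state: State, batch: Tx[]): State {
  const next = new Map(state);
  for (const tx of batch) {
    const fromBalance = next.get(tx.from) ?? 0;
    if (fromBalance < tx.amount) continue; // skip invalid transfers
    next.set(tx.from, fromBalance - tx.amount);
    next.set(tx.to, (next.get(tx.to) ?? 0) + tx.amount);
  }
  return next;
}

// The resulting state (in practice, a commitment to it, plus a fraud-proof
// window or a ZK validity proof) is then settled on the L1.
```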

Optimistic Rollups assume all state transitions are valid and rely on a challenge period (often seven days) during which anyone can submit a fraud proof. This creates a major UX trade-off: long finality times. ZK Rollups use zero-knowledge proofs to mathematically verify the correctness of every state transition before it hits the L1, enabling near-instant finality. The trade-off is that they are computationally intensive and complex to build. ZK provers themselves can be buggy, with potentially catastrophic consequences, and formally verifying them, where feasible at all, is very expensive.

Sequencing is a governance and design choice for each L2. Some prefer a centralized solution for efficiency (or maybe for that censorship power; who knows), while others prefer a decentralized solution for fairness and robustness. Ultimately, each L2 decides how to run its own sequencing.

State claim generation and verification is where we can do much, much better on efficiency. Once a batch of transactions is sequenced, computing the next state is a purely computational task. It can be done by a single supercomputer focused solely on raw speed, with none of the overhead of decentralization. That supercomputer can even be shared among L2s!

Once the new state is claimed, its verification becomes a separate, parallelizable process: a massive network of verifiers can check the claim simultaneously. This is also the philosophy behind Ethereum’s stateless clients and high-performance implementations like MegaETH.
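To make this split concrete, here is a hedged sketch reusing the hypothetical Tx, State, and execute definitions from the snippet above. The computeClaim and verifyClaim names are illustrative, not any project’s actual API: one fast machine produces the claim, and any verifier can independently check it by re-execution, in the spirit of stateless clients.

```typescript
type StateClaim = { batch: Tx[]; preState: State; claimedPostState: State };

// Computation: done once, as fast as possible, by a single powerful
// (possibly centralized, possibly shared) machine.
function computeClaim(preState: State, batch: Tx[]): StateClaim {
  return { batch, preState, claimedPostState: execute(preState, batch) };
}

// Verification: done by anyone, in parallel, with no coordination.
// Each verifier independently re-executes the batch and compares.
function verifyClaim(claim: StateClaim): boolean {
  const recomputed = execute(claim.preState, claim.batch);
  if (recomputed.size !== claim.claimedPostState.size) return false;
  for (const [account, balance] of recomputed) {
    if (claim.claimedPostState.get(account) !== balance) return false;
  }
  return true;
}
```

In a ZK design, verifyClaim would check a succinct validity proof instead of re-executing the batch, making verification dramatically cheaper than computation.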

Parallel verification is infinitely scalable

No matter how fast the L2s (and that shared supercomputer) produce claims, the verification network can always catch up by adding more verifiers: if one verifier checks a claim in t seconds, N verifiers sustain N claims every t seconds, while each individual claim still finalizes in the same t seconds. The latency here is precisely the verification time, a fixed, minimal constant. This is the theoretical optimum, achieved by using decentralization for what it does best: verifying, not computing.

After sequencing and state verification, the L2’s job is nearly complete. The final step is to publish the verified state to a decentralized network, the L1, for ultimate settlement and security.

This final step exposes the elephant in the room: blockchains are terrible settlement layers for L2s! The main computational work is done off-chain, yet L2s must pay a massive premium to finalize on an L1. They face a dual overhead. First, the L1’s limited throughput, burdened by its total, linear ordering of all transactions, creates congestion and high data-posting costs. Second, they must endure the L1’s inherent finality delay.

For ZK Rollups, that delay is measured in minutes. For Optimistic Rollups, it’s compounded by a week-long challenge period, a necessary but costly security trade-off.

Farewell to the “total order” myth in web3

Since Bitcoin (BTC), people have been trying hard to squeeze all of a blockchain’s transactions into a single total order. We are talking about blockchains, after all! Unfortunately, this “total order” paradigm is a costly myth, and it is clearly overkill for L2 settlement. How ironic that one of the world’s largest decentralized networks, the “world computer,” behaves just like a single-threaded desktop!

It’s time to move on. The future is local, account-based ordering, where only transactions interacting with the same account need to be ordered, unlocking massive parallelism and true scalability.  
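As a hedged illustration, the sketch below (reusing the hypothetical Tx type from earlier) partitions a batch into conflict groups: transactions touching a common account stay ordered relative to each other, while disjoint groups can be executed or verified concurrently.

```typescript
// Partition a batch into conflict groups: two transactions conflict if
// they touch a common account. (A union-find structure would scale
// better; this version favors readability.)
function conflictGroups(batch: Tx[]): Tx[][] {
  const groups: { accounts: Set<string>; txs: Tx[] }[] = [];
  for (const tx of batch) {
    const touched = [tx.from, tx.to];
    const hits = groups.filter(g => touched.some(a => g.accounts.has(a)));
    let target: { accounts: Set<string>; txs: Tx[] };
    if (hits.length === 0) {
      target = { accounts: new Set<string>(), txs: [] };
      groups.push(target);
    } else {
      // This tx may bridge several groups; merge them into one.
      target = hits[0];
      for (const g of hits.slice(1)) {
        g.accounts.forEach(a => target.accounts.add(a));
        target.txs.push(...g.txs); // groups were disjoint, so per-account
        groups.splice(groups.indexOf(g), 1); // order is preserved
      }
    }
    touched.forEach(a => target.accounts.add(a));
    target.txs.push(tx); // order is preserved *within* each group
  }
  return groups.map(g => g.txs);
}

// Each group can now run concurrently, e.g.:
// await Promise.all(conflictGroups(batch).map(g => executeInWorker(g)));
```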

Global ordering implies local ordering, of course, but it is also an incredibly naive and simplistic solution. After 15 years of “blockchain,” it is time we opened our eyes and handcrafted a better future. Distributed-systems research has long since moved on from the strong consistency models of the 1980s (which is what blockchains implement) to the strong eventual consistency model formalized in the 2010s, which unleashes parallelism and concurrency. It is time for the web3 industry to move on as well: to leave the past behind and follow forward-looking scientific progress.
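The canonical embodiment of strong eventual consistency is the CRDT. As an illustrative sketch (not tied to any particular L2 design), here is a grow-only counter: replicas accept updates independently, and because the merge is commutative, associative, and idempotent, they converge without ever agreeing on a total order.

```typescript
// Minimal grow-only counter (G-Counter), the textbook example of
// strong eventual consistency: no total order of updates is needed.
type GCounter = Map<string, number>; // replicaId -> local increment count

function increment(c: GCounter, replicaId: string): GCounter {
  const next = new Map(c);
  next.set(replicaId, (next.get(replicaId) ?? 0) + 1);
  return next;
}

// Merge is a pointwise max: commutative, associative, and idempotent,
// so replicas converge regardless of the order updates arrive in.
function merge(a: GCounter, b: GCounter): GCounter {
  const out = new Map(a);
  for (const [id, n] of b) out.set(id, Math.max(out.get(id) ?? 0, n));
  return out;
}

function value(c: GCounter): number {
  return [...c.values()].reduce((sum, n) => sum + n, 0);
}

// Two replicas that never agreed on an order still agree on the value:
let a: GCounter = new Map();
let b: GCounter = new Map();
a = increment(a, "A"); a = increment(a, "A");
b = increment(b, "B");
console.log(value(merge(a, b)) === value(merge(b, a))); // true (3 === 3)
```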

The age of the L2 compromise is over. It’s time to build on a foundation designed for the future, from which the next wave of web3 adoption will come.

Xiaohong Chen

Xiaohong Chen is the Chief Technology Officer at Pi Squared Inc., working on fast, parallel, and decentralized systems for payments and settlement. His interests include program correctness, theorem proving, scalable ZK solutions, and applying these techniques to all programming languages. Xiaohong obtained his BSc in Mathematics at Peking University and PhD in Computer Science at the University of Illinois Urbana-Champaign.

Source: https://crypto.news/l2-compromise-is-broken-its-time-for-a-new-foundation/