Ethereum’s transparency has long been one of its greatest strengths—but for many real-world applications, it has also become a structural limitation. From MEV-driven trading inefficiencies to data leakage in DeFi, gaming, and AI-driven workflows, the assumption that everything must be public in order to be verifiable is increasingly being challenged. TEN Protocol is built around a different premise: that computation can remain provably correct without forcing users, developers, and businesses to expose sensitive inputs, strategies, or logic to the entire market.
In this CryptoSlate Q&A, the team behind TEN Protocol explains its concept of “compute in confidence” and why they believe privacy-first execution is a missing primitive in Ethereum’s scaling roadmap. Rather than launching a separate privacy ecosystem, TEN is designed as a full EVM environment anchored to Ethereum settlement and liquidity, allowing developers to selectively choose what should remain public and what should execute confidentially.
The discussion explores how this hybrid model reshapes user experience, mitigates MEV, enables sealed-bid markets and hidden order flow, and unlocks new categories of applications—from verifiable AI agents to provably fair iGaming. It also addresses the security and governance trade-offs of using Trusted Execution Environments, and how TEN’s architecture is designed to make failures detectable, contained, and recoverable rather than silently catastrophic.
Together, the Q&A offers a detailed look at how selective confidentiality could redefine trust, composability, and usability across the Ethereum ecosystem. 
For readers who are new to TEN Protocol, how do you explain in simple terms what “compute in confidence” means and what problem TEN is actually solving that existing Ethereum L2s do not?
At its simplest, “compute in confidence” means you can use a dapp without broadcasting your intent, your strategy, or your sensitive data to everyone watching the chain.
On most Ethereum L2s today, transparency is the default. Every transaction, its parameters, the intermediate execution steps and often even the “why” behind an action are visible. That level of openness is powerful for verification, but in practice it creates very real problems. Trades get front-run or sandwiched. Wallets and dapps leak behavioural and economic data. Games and auctions struggle to stay both fair and private. And many real-world or enterprise workflows simply cannot operate if inputs and logic have to be public by design.
This is the core structural limitation TEN addresses. Ethereum was built on the assumption that data must be visible in order to be verifiable. TEN keeps verifiability intact, but removes the idea that data itself has to be exposed. With the right privacy technology, you can prove computation is correct without revealing the underlying inputs or logic.
What that means in practice is confidence. Confidence that node operators can’t front-run you. That games aren’t quietly rigged. That bids aren’t being copied in real time. That competitors aren’t spying on strategy. That dapps aren’t extracting or monetising private user inputs.
You still get Ethereum-grade security and verification. You just don’t have to put everything on display to get it.
There are other privacy-focused and TEE-oriented projects in crypto; what is concretely different about TEN’s architecture and threat model compared to things like privacy L1s, rollups with off-chain proving, or MPC-based approaches?
TEN is built as privacy-first Ethereum execution, not as a parallel ecosystem. The goal is very narrow and very intentional: run EVM-style applications with selective confidentiality, while keeping settlement, composability, and liquidity anchored to Ethereum itself.
That design choice is what really sets TEN apart in practice.
If you look at privacy L1s, they often ask developers to move into a new world. New tooling, new execution semantics, and different assumptions around composability are common. TEN takes the opposite approach. It is meant to feel like Ethereum, not replace it. Developers keep the EVM, the standards they already use, and access to existing liquidity, while gaining confidentiality only where it actually matters.
ZK-based private execution offers extremely strong privacy guarantees, but those guarantees usually come with trade-offs for general-purpose applications. Circuit complexity, performance constraints, and developer friction can make everyday app development harder than it needs to be. TEN uses TEEs instead, targeting general-purpose confidential compute with a very different performance and developer-experience profile.
MPC-based approaches avoid trusting hardware vendors, which is a real advantage, but they introduce their own challenges. Coordination overhead, latency, and operational complexity can quickly translate into a poor user experience for normal applications. TEN accepts a hardware-rooted trust assumption, and then focuses on mitigating it through governance, redundancy, and rigorous security engineering.
At the core, the differentiator is this hybrid model. Things that should be public, like finality, auditability, and settlement, stay public. Things that must be private, like inputs, order flow, strategies, and secret state, remain confidential.
You talk about TEN making crypto feel like “normal apps” for end users: private, simple, trustworthy. What does that look like from a UX perspective, and how will using a TEN-powered dapp feel different from using a typical Ethereum dapp today?
At a user level, it removes the constant feeling that everything you do is visible and potentially exploitable.
In a TEN-powered dapp, that shows up in small but meaningful ways. There’s no mempool anxiety and no watching your trades get sandwiched in real time. Intent is private by default, whether that’s bids, strategies, or execution thresholds. Users don’t have to rely on defensive workarounds like private RPCs or manual slippage hacks just to feel safe using an app.
What you’re left with is a much cleaner mental model, one that’s closer to Web2. You assume that your inputs and the application’s business logic aren’t automatically public, because in most software, they aren’t.
The shift itself is subtle, but it’s fundamental. Privacy stops being a bolt-on feature or an advanced setting only power users understand, and instead becomes a core product primitive that’s simply there by default.
Trusted Execution Environments introduce a different kind of trust assumption, namely reliance on hardware vendors and enclave security; how do you address concerns about side-channel attacks, backdoors, or vendor-level failures in your security and governance model?
That’s exactly the right kind of skepticism. TEN’s position isn’t that TEEs are magic or risk-free. It’s about being explicit about the threat model and designing the system so that a compromise is never silently catastrophic.
TEN assumes enclaves provide confidentiality and integrity within defined bounds, and then builds around that assumption rather than pretending it doesn’t exist. The goal is to make failures detectable, contained, and recoverable, not invisible.
From a security perspective, this shows up as defense-in-depth. There are strong remote attestation requirements, controlled code measurement and reproducible builds, and strict key-management practices, including sealed keys, rotation, and tightly scoped permissions. The enclave attack surface is deliberately minimized, with as little privileged code as possible running inside it.
Redundancy and fail-safe design are just as important. TEN avoids architectures where one enclave effectively rules the system. Where possible, it relies on multi-operator assumptions and structures protocols so that even a compromised enclave cannot rewrite history or forge settlement on Ethereum.
Governance and operational readiness complete the picture. Security isn’t only about cryptography; it’s also about how quickly and transparently a system can respond. That includes patching, revocations, enclave version pinning, and clear incident playbooks that can be executed without ambiguity.
The bottom line is this: TEN isn’t asking users to “trust nothing.” It’s about reducing the practical trust you need to place in operators and counterparties, and concentrating the remaining trust into a much narrower, auditable surface.
On the DeFi side, how do sealed-bid auctions, hidden order books, and MEV-resistant routing actually work on TEN in practice, and how can users or regulators gain confidence in systems where the core trading logic and order flow are intentionally encrypted?
At a high level, TEN works by changing what is public by default.
Take sealed-bid auctions. Instead of broadcasting bids in the clear, users submit them in encrypted form. The auction logic runs inside a TEE, so individual bids are never exposed during execution. Depending on how the auction is designed, bids may only be revealed at settlement, or not revealed at all, with only the final outcome published on-chain. That single change eliminates bid sniping, copy-trading, and the strategic leakage that plagues open auctions today.
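As a rough illustration of that flow, here is a minimal commit-reveal sealed-bid auction in Python. This is a generic sketch, not TEN's implementation: the class and function names are invented, and the public reveal step is shown only for clarity. On TEN the encrypted bids would be opened and compared inside the enclave, with only the outcome published.

```python
import hashlib
import os

def commit(bid: int, salt: bytes) -> str:
    """Bidders publish only a hash commitment, never the bid itself."""
    return hashlib.sha256(bid.to_bytes(32, "big") + salt).hexdigest()

class SealedBidAuction:
    def __init__(self):
        self.commitments = {}   # bidder -> commitment hash
        self.revealed = {}      # bidder -> bid amount

    def submit(self, bidder: str, commitment: str):
        self.commitments[bidder] = commitment

    def reveal(self, bidder: str, bid: int, salt: bytes):
        # A reveal is only accepted if it matches the earlier commitment,
        # so nobody can change their bid after seeing others' reveals.
        if commit(bid, salt) != self.commitments.get(bidder):
            raise ValueError("reveal does not match commitment")
        self.revealed[bidder] = bid

    def settle(self) -> tuple:
        # Only the final outcome needs to be published on-chain.
        winner = max(self.revealed, key=self.revealed.get)
        return winner, self.revealed[winner]

# Usage: two bidders, neither sees the other's bid before committing.
auction = SealedBidAuction()
salt_a, salt_b = os.urandom(16), os.urandom(16)
auction.submit("alice", commit(300, salt_a))
auction.submit("bob", commit(250, salt_b))
auction.reveal("alice", 300, salt_a)
auction.reveal("bob", 250, salt_b)
print(auction.settle())  # ('alice', 300)
```

The key property is that the commitment binds each bidder before any information leaks, which is exactly what eliminates bid sniping and copying.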
The same idea applies to hidden order books. Orders aren’t visible in a way that lets others reconstruct intent or strategy in real time. Traders are protected from being systematically copied or exploited, while the system still produces execution results that can be verified after the fact.
MEV-resistant routing follows naturally from this model. Because user intent is never broadcast to a public mempool, the classic MEV pipeline of see, copy, and sandwich simply doesn’t exist. There’s nothing to front-run in the first place.
That naturally raises the trust question. If the core logic and order flow are encrypted, how can users or regulators be confident the system is behaving correctly?
The answer is that TEN separates privacy of inputs from verifiability of outcomes. Even when inputs are private, the rules are not. Anyone can check that the matching engine followed the published algorithm, that clearing prices were computed correctly, and that no hidden preference or manipulation took place.
On top of that, there are clear audit surfaces and mechanisms for selective disclosure. Regulators or auditors can be granted access under defined conditions, while the public still sees cryptographic commitments and on-chain proofs that execution was correct.
The result is a combination that’s rare in today’s DeFi: confidentiality of order flow paired with accountability of outcomes.
Verifiable AI agents are one of your flagship use cases; can you walk through a concrete example of an AI agent running on TEN, what stays private, what is publicly verifiable on-chain, and why that is better than running the same agent entirely off-chain?
A simple way to think about this is an AI-driven treasury rebalancer for a protocol.
When that agent runs on TEN, a lot of what makes it valuable stays private by design. The model weights or prompts, which are often the core intellectual property, never have to be exposed. Proprietary signals and paid data feeds remain confidential. Internal risk limits, intermediate reasoning, and decision logic aren’t leaked to the market. Even the execution intent stays private until the moment it’s committed.
At the same time, there’s a clear set of things that are publicly verifiable on-chain. Anyone can check that the approved code actually ran, via attestation. They can verify that an authorized policy module enforced the relevant constraints, and that the resulting actions respected the defined invariants. The final state transitions and settlement still happen on Ethereum, in the open, as usual.
That combination is what makes this meaningfully better than running the same agent entirely off-chain. Off-chain agents ultimately ask users to trust logs, operators, or unverifiable claims that “the bot followed the rules.” TEN removes that blind trust. It lets agents keep their competitive edge private, while still proving to users, DAOs, and counterparties that they acted strictly within their mandate.
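The policy-module idea can be sketched in a few lines of Python. Everything here is hypothetical: the POLICY fields, thresholds, and function names are invented for illustration. The point is only that the rules are public and checkable, while the agent's model, signals, and reasoning stay private inside the enclave.

```python
import hashlib
import json

# Hypothetical invariants a DAO might publish for a treasury rebalancer.
POLICY = {
    "max_trade_fraction": 0.10,           # no single trade moves >10% of treasury
    "allowed_assets": {"ETH", "USDC", "DAI"},
}

def enforce_policy(treasury_value: float, trade: dict) -> bool:
    """Runs inside the enclave: checks a proposed trade against the public rules."""
    if trade["asset"] not in POLICY["allowed_assets"]:
        return False
    if trade["amount"] > POLICY["max_trade_fraction"] * treasury_value:
        return False
    return True

def commit_decision(trade: dict) -> str:
    """Only this commitment (plus an attestation that the approved code ran)
    would be published on-chain; the agent's reasoning is never revealed."""
    return hashlib.sha256(json.dumps(trade, sort_keys=True).encode()).hexdigest()

trade = {"asset": "USDC", "amount": 50_000.0, "side": "buy"}
if enforce_policy(treasury_value=1_000_000.0, trade=trade):
    print(commit_decision(trade)[:16])  # short public fingerprint of the action
```

Observers can re-run `enforce_policy` against the disclosed outcome to confirm the mandate was respected, without ever seeing why the agent chose that trade.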
iGaming has historically been plagued by trust issues, bots, and opaque RNG; how does TEN enable provably fair games while still keeping RNG seeds, anti-bot logic, and game strategies private, and how do you see this fitting into existing regulatory frameworks for online gaming?
iGaming has always been built around a fundamental conflict: transparency is required to prove fairness, but secrecy is essential to protect RNG systems, security controls, and anti-bot logic. Expose too much, and the system is gamed. Hide too much, and trust collapses.
TEN resolves that conflict through selective confidentiality. Sensitive components stay private, while the rules and outcomes remain provable.
On randomness, this allows “provably fair” to be literal rather than aspirational. Games can use commit-reveal and verifiable randomness schemes where randomness is committed to in advance, outcomes are independently verifiable by players, and RNG seeds remain private until it’s safe to disclose, or are only partially revealed. Players get confidence in fairness without attackers gaining a usable blueprint.
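A minimal version of that commit-reveal pattern, common in existing "provably fair" gaming systems, looks like this in Python. It is an illustrative sketch, not TEN's actual RNG design: the operator commits to a seed before play, the player adds entropy, and the reveal lets anyone verify both the commitment and the outcome.

```python
import hashlib
import os

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# 1. Operator commits to a server seed before the game starts.
server_seed = os.urandom(32)
seed_commitment = sha256(server_seed)      # published up front

# 2. Player contributes their own entropy after seeing the commitment,
#    so the operator cannot have pre-computed a favourable outcome.
client_seed = os.urandom(16)

# 3. The outcome mixes both seeds; neither side controls it alone.
def roll(server: bytes, client: bytes, sides: int = 6) -> int:
    digest = hashlib.sha256(server + client).digest()
    return int.from_bytes(digest, "big") % sides + 1

outcome = roll(server_seed, client_seed)

# 4. After the game, the server seed is revealed and anyone can verify
#    both the original commitment and the reported outcome.
assert sha256(server_seed) == seed_commitment
assert roll(server_seed, client_seed) == outcome
```

On a TEE-based chain, the same verification can happen while the seed itself stays sealed inside the enclave, which is what allows partial or delayed disclosure.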
The same principle applies to anti-bot and risk controls. Bot-detection heuristics and fraud systems run confidentially, which matters because once these mechanisms are public, sophisticated actors adapt immediately. Keeping them private preserves their effectiveness while still allowing the system to produce verifiable outcomes.
More broadly, this enables provable game integrity. Players can verify that a game followed its published rules and that outcomes weren’t manipulated, without exposing sensitive internals like security logic, thresholds, or strategy parameters.
From a regulatory perspective, this maps cleanly onto existing frameworks. Regulators typically care about auditability, fairness guarantees, and enforceable controls, not about forcing every internal mechanism into the open. TEN’s model of verifiable outcomes combined with selective disclosure aligns naturally with those requirements.
From a developer’s point of view, what does building a “selectively private” smart contract on TEN look like; how do developers mark functions for TEE execution, and how do they test and debug logic they cannot simply inspect in a public mempool?
From a developer’s point of view, the easiest way to think about TEN is that you’re building with two execution zones.
There’s a public zone, which feels like normal Ethereum development: standard EVM logic, public state, and composable contracts that behave the way you expect on any L2.
Then there’s the confidential zone, where specific functions and pieces of state execute inside TEEs, with encrypted inputs and tightly controlled disclosure.
In practice, developers explicitly decide what should run “in confidence” and what should remain public. The confidential side is where you’d put things like trade matching, RNG, strategy evaluation, or secret storage, while everything else stays in the open for composability and settlement.
The workflow shift shows up most in testing and debugging, because you can’t treat the public mempool as your always-on debug console. Instead, testing and debugging typically lean on local devnets with enclave-like execution, deterministic test vectors, and controlled debug modes during development. And rather than relying on public logs, you validate behaviour through verifiable commitments and invariants, proving that the system stayed within the rules even when the inputs are private.
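To make the commitments-and-invariants idea concrete, here is a hypothetical test sketch in Python. A toy matching function stands in for confidential logic; the test asserts invariants and a reproducible commitment over a fixed input vector, rather than inspecting private intermediate state.

```python
import hashlib

def match_orders(bids, asks):
    """Toy matching logic under test (would run inside the enclave in production)."""
    fills = []
    remaining = list(asks)
    for bid in sorted(bids, reverse=True):
        for i, ask in enumerate(remaining):
            if ask is not None and bid >= ask:
                fills.append((bid, ask))
                remaining[i] = None
                break
    return fills

def state_commitment(fills) -> str:
    """Public commitment to the (private) matching result."""
    return hashlib.sha256(repr(sorted(fills)).encode()).hexdigest()

# Deterministic test vector: fixed inputs, invariant checks instead of logs.
fills = match_orders([105, 99, 101], [100, 100])

# Invariant 1: every fill crosses the spread.
assert all(bid >= ask for bid, ask in fills)
# Invariant 2: no ask is filled more than once.
assert len(fills) == 2
# Invariant 3: the commitment is reproducible across runs.
assert state_commitment(fills) == state_commitment(match_orders([105, 99, 101], [100, 100]))
```

The habit this builds is exactly the one the production system needs: you prove properties of outcomes, not visibility into inputs.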
The key change is moving away from mempool introspection as a debugging crutch, and designing for provable correctness from the start.
You highlight composability between private and public components as a key differentiator; what new application patterns do you expect to emerge from this hybrid model, and how can existing Ethereum protocols integrate TEN without completely rewriting their stack?
TEN’s hybrid model unlocks application patterns that are either extremely difficult or simply not possible on chains that are transparent by default.
One obvious pattern is private execution with public settlement. Sensitive logic like trade matching, strategy evaluation, RNG, or risk controls can run confidentially, while the final outcomes still settle publicly on Ethereum. You get privacy where it matters, without giving up verifiability or composability.
Another area is protected price discovery and dark liquidity. Sealed bids, hidden order books, and private routing make it possible to run fairer markets, while still producing outcomes that are verifiable on-chain. The market gets integrity without turning every participant’s intent into public data.
Games and AI agents are another natural fit. Hands, strategies, prompts, or model internals can remain private, while fairness, correctness, and settlement stay provable. That combination is very hard to achieve in a fully transparent execution environment.
You also start to see selective disclosure applications emerge. Things like identity, reputation, compliance, or eligibility checks can stay private, while still enforcing public rules and producing auditable results.
What makes TEN distinct is that none of this requires abandoning Ethereum. TEN is a full EVM, so existing Ethereum smart contracts deploy on TEN out of the box and behave exactly as developers expect. The difference is that they immediately gain the option to run parts of their logic in confidence.
For many protocols, integration can be straightforward. Teams can deploy the same contracts to TEN alongside Ethereum, keep the public version unchanged, and then progressively enable confidential execution where it adds the most value.
That naturally creates two adoption paths. Some teams will take the minimal-effort route, deploying existing contracts unchanged and gaining both a public and confidential instance with almost no extra work. Others will take a progressive approach, selectively moving high-value flows like order flow, auctions, games, or agent logic into confidential execution over time.
The key point is that TEN doesn’t force developers to choose between composability and confidentiality. It lets them keep Ethereum’s ecosystem, liquidity, and tooling, while making privacy a first-class capability rather than a bolt-on.
Who operates the enclaves and infrastructure that power TEN, how do you avoid centralization around a small set of operators, and what does the roadmap look like for decentralizing the network, bootstrapping the ecosystem, and attracting the first breakout apps on TEN?
Like most new networks, TEN starts with a practical bootstrap phase. Early on, that means a smaller, more curated set of operators and infrastructure, with the focus squarely on reliability and security. The goal at this stage isn’t maximal decentralization on day one, but making sure the system works predictably and safely as developers start building real applications on it.
Avoiding long-term centralization is where the architecture and incentives really matter. The roadmap is built around permissionless operator onboarding, paired with strong attestation requirements so operators can prove they’re running the right code in the right environment. Economic incentives are designed to encourage many independent operators rather than a small cartel, and there’s an explicit emphasis on geographic and organizational diversity. On top of that, performance and security criteria are transparent, and the protocol itself is structured to prevent any single operator from dominating execution.
In terms of how the roadmap unfolds, the first phase is about bootstrapping reliability and developer tooling. Once that foundation is solid, the focus shifts to shipping flagship applications that genuinely need confidentiality, things like iGaming, protected DeFi workflows, and verifiable AI agents. From there, operator participation expands, governance decentralizes, and the security posture continues to harden as more value flows through the network and the stakes rise.
That’s what sets up the ecosystem flywheel. Builders don’t come to TEN just because it’s another EVM; they come because it offers capabilities they can’t get elsewhere.
The breakout app thesis is straightforward. The first truly successful TEN-native application will be something that either cannot exist, or cannot be competitive, on transparent-by-default chains. In that case, confidentiality isn’t a checkbox feature. It’s the product itself.