Sam Altman, CEO of OpenAI, and Senator Ted Cruz, chairman of the Senate Commerce, Science, and Transportation Committee, following a hearing in Washington, DC.
Yes, regulatory sandboxes can be a good idea. These controlled test beds for new technologies are moving to Washington, with Sen. Ted Cruz introducing a bill to establish federal AI sandboxes. Framed as exemptions from burdensome regulation, the proposal mirrors what has been done in the U.K. and Europe.
Artificial intelligence continues to race ahead of existing governance models, raising concerns about safety, security, and global competitiveness. Policymakers are scrambling to find tools that protect consumers without slowing innovation. Among these proposals is the introduction of regulatory sandboxes, controlled environments where companies can test new technologies under oversight but with temporary flexibility from certain rules.
Sen. Ted Cruz, chair of the Senate Commerce Committee, unveiled a bill to establish federal AI sandboxes. The initiative comes as dozens of countries experiment with sandboxes in finance, healthcare, and now AI. The European Union AI Act, for instance, requires member states to set up AI sandboxes, and the United Kingdom pioneered this model in financial services nearly a decade ago.
The evidence suggests this approach can work if designed with transparency, enforcement, and public safeguards in mind. Regulatory sandboxes promote innovation and foster learning. Yet they also risk regulatory capture and can distort the competitive environment by advantaging sandbox participants.
Regulatory Sandboxes: What They Are And Why They Matter
A regulatory sandbox is a structure in which innovators can test technologies under the watch of regulators without immediately facing the full weight of compliance. Borrowed from software development, the term has evolved into a legal and policy tool that allows experimentation while limiting risk.
The benefits are clear. Sandboxes reduce information gaps between regulators and firms, allow for iterative learning, and help craft adaptive policy. For startups, they provide a channel to engage directly with regulators, lowering uncertainty before rules are finalized. For governments, they offer a glimpse into emerging risks before technologies hit scale.
Still, not all sandboxes are equal. Some focus narrowly on compliance, while others encourage broader experimentation. The U.K.'s Financial Conduct Authority, for example, used its early sandbox to streamline innovation in FinTech, including digital payments and cryptocurrencies. The EU ties sandboxes directly to compliance with the AI Act: each member state is required to establish at least one nationwide AI regulatory sandbox by August 2026 or jointly participate in a sandbox with other countries, provided it offers equivalent national coverage. So far, countries such as Spain, Denmark, and the Netherlands have functional AI sandboxes or pilots. Without clear guardrails, however, sandboxes risk becoming a loophole for regulatory arbitrage.
Regulatory Sandboxes: Innovation Meets Preemption
Sen. Cruz’s bill would create a nationwide regulatory sandbox overseen by the White House Office of Science and Technology Policy. It emphasizes reducing burdensome requirements and granting flexibility to companies developing AI systems.
The Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation Act (SANDBOX Act) directs the OSTP to create a federal sandbox program for AI within a year of enactment. Companies could apply for temporary waivers from certain federal rules to test AI products, services, or development methods without full regulatory compliance. Applications must identify potential risks to health, safety, and consumers, and include mitigation plans. Waivers would last up to two years, with possible renewals, and require written agreements mandating transparency and incident reporting. Agencies would review requests, balancing innovation against risks such as economic harm or deceptive practices. Consumer disclosures are mandatory, and companies remain liable for damages. The program sunsets after 12 years, but Congress could act on recommended rule changes based on lessons from sandbox trials.
The bill is part of a broader light-touch regulatory framework proposal aligned with the strategy outlined in the administration's AI Action Plan. Beyond regulatory sandboxes as a first step, the framework includes infrastructure permitting reform, access to federal datasets, free speech protections, curbs on illicit uses of AI, measures to protect U.S. companies from foreign rules, and an effort to head off a proliferation of state AI regulation.
This opens the door to federal preemption, raising the stakes in the ongoing tug-of-war between Washington and the states over who sets the rules.
A poorly implemented sandbox could undermine consumer protections. Exemptions that focus only on cutting red tape may encourage short-term gains at the expense of long-term safety. The challenge is balancing this deregulatory impulse with safeguards for consumer safety and privacy. I have long argued that sandboxes should not be standalone escape hatches but part of a layered system that integrates auditability, compliance, and accountability across public and private institutions. In that model, sandboxes serve as proving grounds for compliance regimes, generating data that can inform broader standards.
International precedents offer guidance. The EU’s sandboxes are not simply innovation labs but structured processes requiring oversight, public reporting, and stakeholder participation. Norway’s data protection authority, for instance, has run multiple sandbox projects since 2020, addressing issues such as automated decision-making and complex data sharing. By contrast, sandboxes that lack clear objectives or enforcement mechanisms often fail to build trust, leaving regulators and citizens skeptical.
Regulatory Sandboxes As Tools For Dynamic Governance
Properly designed, a regulatory sandbox can be more than a temporary carve-out. It can function as a bridge between innovation and governance, enabling regulators to test compliance models, assess risks, and refine policy before technologies scale.
For the United States, this means federal sandboxes should:
- Embed accountability: Outcomes must be reported publicly, and participation should not shield companies from later enforcement.
- Protect consumers: Safety standards must remain non-negotiable, even in experimental settings.
- Foster collaboration: Besides industry, civil society and academia should be integral to the process, not merely observers.
- Inform regulation: Insights from sandboxes should feed into standards and eventually legislation.
Sandboxes cannot replace regulation, nor should they become vehicles for evasion. Instead, they should serve as part of a dynamic governance model, complementing traditional oversight with agile experimentation. When properly implemented, they can build trust in American-made AI.
So, are regulatory sandboxes a good idea? Yes. They represent a step forward, provided they are not reduced to a slogan for deregulation. To be effective, sandboxes must strengthen, not weaken, the governance ecosystem.
Source: https://www.forbes.com/sites/paulocarvao/2025/09/10/are-ai-regulatory-sandboxes-a-good-idea/