The ‘Sandbox’ Bill Bets on Faster AI Innovation

A new bill from U.S. Sen. Ted Cruz proposes giving artificial intelligence (AI) firms greater flexibility to test and develop new technologies by temporarily easing certain regulatory requirements.

His proposed “Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation Act,” or SANDBOX Act (S.2750), introduced last week, would let companies test AI tools under temporary federal waivers that could stretch up to 10 years, with appeals going straight to the White House’s Office of Science and Technology Policy. Supporters say the plan could keep the United States ahead in the global AI race. Critics warn it risks weakening the very safeguards designed to protect consumers.

Why Firms Are Watching

For the payments industry, the proposal lands at a time when compliance obligations have never been heavier. Every new account opening must meet the Customer Identification Program. Every anomaly in a transaction stream can trigger an anti-money laundering (AML) review. Every payment must clear fraud detection checks. These requirements are essential, but they are expensive and slow to adapt. A sandbox that removes regulatory friction could allow banks and FinTechs to test new onboarding flows or real-time fraud engines more quickly. That promise of speed explains the bill’s appeal.

Jeanette Mbungo, chief operating officer of CSG Forte, believes the potential breakthrough is less about regulatory relief itself and more about what it could unlock.

“The Sandbox Act represents a smart step forward, but the real breakthrough won’t come from regulatory relief alone; it’ll come from changing how we think about innovation and risk,” she told PYMNTS. In her view, collaboration across institutions is just as important as faster pilots, since AI models are only as strong as the data they see.

PYMNTS has noted similar themes in its coverage of AI and fraud, including real-time payments risks and APP fraud and banks’ shift toward governed, risk-first AI programs.

Edwin Loredo, partner at Core Innovation Capital, notes that sandboxes could help firms move past legacy systems. “Regulatory frameworks are still stuck in an older era,” he told PYMNTS. He added: “A sandbox could let banks and FinTechs operate outside their existing infrastructure, which is a major barrier to testing new providers. If done right, it could accelerate adoption by letting financial institutions test and implement faster, while helping inform future regulation.”

His emphasis is on controlled pilots with humans in the loop, especially for customers who already face more friction today. That balance echoes PYMNTS reporting on AI agents passing compliance tests and banks adopting risk-governed AI. The Consumer Financial Protection Bureau has already said that existing consumer protection laws apply to AI. That means even without new legislation, regulators expect compliance to extend to machine learning systems.

A Different Model Abroad

Abroad, the trend is toward tighter oversight of AI. PYMNTS has written on this shift, noting how global regulators are converging on stricter standards that U.S. firms will eventually need to navigate.

The U.K. Financial Conduct Authority will launch its Supercharged Sandbox in partnership with Nvidia, with successful applicants able to begin experimenting from October 2025. Firms gain access to compute, better data and regulatory support, but do not get waivers from consumer protection or AML rules. The FCA’s design reflects a view that innovation needs scaffolding rather than exemptions, a contrast with the U.S. proposal.

The European Union is requiring every member state to launch at least one AI regulatory sandbox by August 2026 under the AI Act, giving firms a controlled space to test high-risk systems with real data before market rollout.

In Dubai, the RegLab initiative under the Dubai Future Foundation serves as a venue where regulators and innovators pilot emerging technologies and shape new laws in parallel. Complementing this, the Dubai Financial Services Authority’s Innovation Testing License in the DIFC allows startups to trial new financial products under modified rules, with a recent expansion into tokenization sandboxes for equities and bonds. Together, these efforts show how governments are balancing guardrails with flexibility to accelerate AI adoption while keeping risks in check.

What Is at Stake

The Cruz bill underscores a core dilemma: how to accelerate innovation without sidelining safeguards. By waiving certain rules, it could hasten the rollout of AI tools for onboarding, anomaly detection and fraud analytics, giving smaller firms a chance to close the gap with incumbents.

But the risks are clear. Flawed models could expose consumers to harm, uneven obligations could tilt competition, and overlapping jurisdictions might weaken accountability.

PYMNTS has noted that data discipline and human-in-the-loop oversight are already emerging as prerequisites for banks experimenting with AI, a reminder that speed alone cannot substitute for structure.

Source: https://www.pymnts.com/