AI Agents Can Now Steal Millions From Crypto Contracts, New Research Shows

The study shows that advanced AI models such as Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 extracted $4.6 million in simulated attacks on real smart contracts.

Artificial intelligence has reached a dangerous new milestone. AI systems can now find and exploit weaknesses in blockchain smart contracts worth millions of dollars, according to groundbreaking research published by Anthropic.

These contracts were hacked after March 2025, past the models' training cutoffs, meaning the AI couldn't have learned about these specific vulnerabilities during training.

What Makes This Discovery Alarming

The research team created a benchmark called SCONE-bench using 405 smart contracts that were actually hacked between 2020 and 2025. When they tested 10 leading AI models, the results were startling. The AI agents cracked 207 contracts—more than half—stealing $550.1 million in simulated funds.

But the real shock came when researchers tested only contracts hacked after March 2025. Even without prior knowledge of these specific attacks, AI agents still successfully exploited 19 out of 34 contracts. Claude Opus 4.5 alone accounted for $4.5 million of the total haul.

The speed of improvement is equally concerning. The research found that AI exploit capabilities doubled every 1.3 months throughout 2025. At the same time, the cost to run these attacks dropped by 70% in just six months.
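
To put that growth rate in perspective, here is a quick back-of-the-envelope projection in Python. The 1.3-month doubling time is the article's figure; the time horizons below are illustrative assumptions, not numbers from the study:

```python
# Projecting the reported trend: exploit capability doubling every 1.3 months.
# The doubling time comes from the study; the horizons are illustrative.
DOUBLING_TIME_MONTHS = 1.3

def growth_factor(months: float) -> float:
    """Multiple by which capability grows over `months` at the reported rate."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

print(f"6 months:  ~{growth_factor(6):.0f}x")   # ≈ 24x
print(f"12 months: ~{growth_factor(12):.0f}x")  # ≈ 600x
```

If the trend held for a full year, that would mean roughly a 600-fold improvement, which is why the researchers treat the rate itself as the headline finding.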

AI Discovers Brand New Vulnerabilities

The study went beyond recreating old hacks. Researchers tested AI agents on 2,849 recently deployed smart contracts on Binance Smart Chain with no previously known security issues. Between them, Sonnet 4.5 and GPT-5 found two completely new vulnerabilities worth $3,694 in potential theft.

One vulnerability involved a token contract with a calculator function that was supposed to be read-only. The developers forgot the function modifier that would have enforced this, so any caller could use the function to mint unlimited tokens. The AI repeatedly called it, inflated its token balance, then sold the tokens for real money.
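
Here is a minimal sketch of that flawed pattern, modeled in Python rather than Solidity for readability. The contract name, function name, and reward formula are hypothetical stand-ins, not the real code:

```python
# Simplified model of the bug: a function meant to be a read-only calculation
# also writes state, because the guard that would prevent this was omitted.
class VulnerableToken:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def calculate_reward(self, caller: str, amount: int) -> int:
        """Intended as a read-only preview of a reward amount."""
        reward = amount * 2  # illustrative formula, not the real math
        # BUG: a read-only function should never mutate balances.
        self.balances[caller] = self.balances.get(caller, 0) + reward
        return reward

# The attack is just a loop: call the function, inflate the balance, cash out.
token = VulnerableToken()
for _ in range(1_000):
    token.calculate_reward("attacker", 10_000)
print(token.balances["attacker"])  # 20,000,000 tokens minted from nothing
```

In Solidity terms, the missing guard would typically be a `view` state-mutability modifier or an owner-only access check; either would have stopped the minting loop.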

Source: @AnthropicAI

The second flaw affected a token launcher service. When token creators didn’t set a fee recipient, anyone could claim they were the intended beneficiary and steal accumulated trading fees. Four days after the AI discovered this bug, a real hacker used the same method to steal $1,000.
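
A similarly simplified model of that launcher flaw, again with hypothetical names and amounts; the real service runs as a Solidity contract:

```python
# Simplified model of the launcher bug: if the creator never set a fee
# recipient, the first caller to claim becomes the beneficiary.
class TokenLauncher:
    def __init__(self) -> None:
        self.fee_recipient: str | None = None  # creator forgot to set this
        self.accrued_fees = 5_000  # accumulated trading fees (illustrative)

    def claim_fees(self, caller: str) -> int:
        # BUG: an unset recipient should block payouts,
        # not default to whoever shows up first.
        if self.fee_recipient is None:
            self.fee_recipient = caller
        if caller != self.fee_recipient:
            raise PermissionError("not the fee recipient")
        payout, self.accrued_fees = self.accrued_fees, 0
        return payout

launcher = TokenLauncher()
print(launcher.claim_fees("attacker"))  # 5000: attacker takes the fees
```

The safe design is the opposite default: refuse any payout until the creator has explicitly configured a recipient.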

Real-World Impact: The Balancer Attack

The timing of this research is significant. In November 2025, hackers exploited the Balancer protocol for over $120 million using similar attack methods. The attack showed that even well-audited, established DeFi protocols remain vulnerable to sophisticated exploitation.

Balancer had undergone multiple security audits and operated for years without major incidents. Yet attackers found a weakness in the protocol’s access control system and drained funds across multiple blockchain networks.

Economics of AI-Powered Attacks

The cost structure of these AI attacks is remarkably efficient. Running GPT-5 across all 2,849 contracts cost just $3,476 in API fees. The average cost to scan a single contract was only $1.22, while finding each vulnerability cost approximately $1,738.

This creates a profitable scenario for attackers. With an average exploit value of $1,847 against the roughly $1,738 it costs to find each bug, a hacker clears about $109 per successful attack. As AI models become cheaper and more capable, these economics will only improve for malicious actors.
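
The arithmetic behind those figures is easy to check. Every dollar amount below is taken from the article; only the variable names are mine:

```python
# Reproducing the study's cost figures as reported in the article.
total_api_cost = 3_476      # USD, GPT-5 run across all scanned contracts
contracts_scanned = 2_849
vulns_found = 2
avg_exploit_value = 1_847   # USD per successful exploit

cost_per_scan = total_api_cost / contracts_scanned      # ≈ $1.22
cost_per_vuln = total_api_cost / vulns_found            # = $1,738
profit_per_exploit = avg_exploit_value - cost_per_vuln  # ≈ $109

print(f"per-contract scan:  ${cost_per_scan:.2f}")
print(f"per vulnerability:  ${cost_per_vuln:,.0f}")
print(f"profit per exploit: ${profit_per_exploit:,.0f}")
```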

The research also revealed that exploit success doesn’t depend on code complexity. Instead, the amount of money locked in a contract determines how profitable an attack will be. This means attackers will likely target high-value protocols rather than hunting for the most sophisticated bugs.

Beyond DeFi: Broader Security Implications

The researchers warn that these AI capabilities aren't limited to blockchain systems. The same reasoning skills that let AI agents manipulate token balances and redirect fees can apply to traditional software, browser-based AI agents, and the infrastructure that supports digital assets.

As scanning becomes cheaper and more automated, the window between deploying new software and potential exploitation will continue shrinking. Developers will have less time to find and fix vulnerabilities before AI agents discover them.

The study’s authors emphasize that this technology cuts both ways. The same AI systems capable of finding exploits can also help developers audit their code and fix vulnerabilities before deployment. Organizations should adopt AI-powered defense systems to match the capabilities of potential attackers.

The Security Arms Race Begins

For the crypto industry, this means fundamental changes in how security is approached. Traditional audit practices may not be sufficient when AI can exhaustively scan code for vulnerabilities at minimal cost. Projects will need continuous monitoring and AI-assisted defense systems to stay ahead of automated threats.

The researchers released their SCONE-bench dataset publicly to help developers test their smart contracts. While this creates some risk by providing attack tools, it also gives defenders the same capabilities to strengthen their systems before malicious actors strike.

The race between AI-powered offense and defense has begun. Organizations that adapt quickly to this new reality will survive, while those that don’t may become the next headlines in an increasingly dangerous digital landscape.

Source: https://bravenewcoin.com/insights/ai-agents-can-now-steal-millions-from-crypto-contracts-new-research-shows