The hypothetical risk to DeFi posed by AI has quickly become a stark reality. Illustration: Hilary B; Source: Shutterstock

Crypto hackers armed with AI stand to make millions of dollars attacking old code

2026/03/26 17:16

Artificial intelligence is making crypto hacking cheaper, easier, and faster — and it’s allowing attackers to overwhelm defenders and potentially steal millions of dollars.

Bad actors are now using the large language models that power AI chatbots like ChatGPT and Claude to search thousands of lines of code a second. Their goal? Identify vulnerabilities that have slipped by developers and auditors.

And it’s old, undermaintained deployments that are most at risk, crypto security experts warn.

“AI has made legacy-contract hunting cheaper, faster, and more scalable, especially for old forks, dusty deployments, under-maintained vaults, and inherited code paths,” Gabi Urrutia, field chief information security officer at Halborn, a blockchain security firm, told DL News.

“AI does not need to invent novel vulnerability classes to create more damage; it only needs to find old ones faster and at scale.”

Accelerating AI growth, fuelled by record levels of funding for top companies including OpenAI, Anthropic, and xAI, is rapidly reshuffling the crypto industry.

What was once a hypothetical risk posed by AI less than a year ago has quickly become a stark reality. The problem now is that bad actors are further ahead in utilising the technology than those tasked with securing the $130 billion DeFi ecosystem.

While evidence suggests attackers are using AI to speed up and scale their attacks, crypto security firms and auditors are only just starting to incorporate the technology into their defences.

The situation has spooked even the most veteran DeFi developers.

“Everybody should stop using DeFi. It’s simply too dangerous right now due to the rapidly increasing power of AI coding agents,” Gerrit Hall, co-founder of smart contract security platform Firepan, who also spent five years working on DeFi exchange Curve Finance, said in a recent interview.

“Offensive capacity is improving far faster than defensive tooling.”

A numbers game

Previously, looking for bugs in code took a long time. For hackers, it was only worth it to spend time on high-value targets, where finding a bug would result in a big payout.

But now that AI can automate much of the drudgery, hackers are searching anything and everything — even if they only stand to steal a few hundred dollars worth of crypto.

“Attackers can profit at much lower value thresholds than defenders can justify for equivalent detection effort,” Urrutia said. “That is enough to change attacker economics even without perfect attribution.”

In December, researchers at Anthropic, the developer behind Claude, published a report showing that its AI agents, tested against a benchmark of 405 real smart contracts that had been exploited between 2020 and 2025, were able to exploit 63% of them. That would have hypothetically allowed the agents to steal a combined $4.6 million.

When Anthropic tasked its AI models with hacking 2,849 recently deployed contracts with no known vulnerabilities, the agents uncovered two novel vulnerabilities and produced exploits worth $3,694, while spending only $3,476.
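The economics here are simple but worth spelling out: the experiment was marginally profitable end to end. A minimal sketch of that arithmetic, using the figures reported above (the variable names are illustrative, not from Anthropic's code):

```python
# Attacker economics from Anthropic's experiment on 2,849 contracts:
# value extracted by the discovered exploits vs. money spent running the agents.
exploit_value = 3694   # USD produced by the exploits
agent_cost = 3476      # USD spent operating the AI agents

profit = exploit_value - agent_cost
print(f"Net profit: ${profit}")  # → Net profit: $218
```

A $218 margin is tiny, but the point of the proof of concept is the sign, not the size: if the loop is profitable at all, it can be repeated and scaled.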

“This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible,” the Anthropic researchers said.

Identifying AI-assisted exploits

To be sure, it’s difficult for security researchers to identify specific instances where hackers have relied on AI to exploit DeFi protocols.

Yet multiple security experts who spoke to DL News said that the evidence they’ve seen has convinced them it’s happening at scale.

“The patterns we are observing are consistent with AI-driven automation,” Stephen Ajayi, dapp audit technical lead at Hacken, a blockchain security firm, told DL News.

“We observe repeated, identical exploit attempts across multiple contracts simultaneously, which is consistent with scripted or agent-driven reconnaissance.”

According to Ajayi, attackers are now probing thousands of smart contracts in minutes, a volume that also suggests automation rather than manual effort.

The recent $26 million hack of DeFi protocol Truebit, while not confirmed to have been carried out with the help of AI, is a likely candidate, Urrutia said.

“Public analysis described the issue as a pricing-logic flaw in an older contract compiled with Solidity 0.6.10,” he said. “That is exactly the kind of target profile that becomes more attractive when AI can cheaply triage old codebases and surface exploitable edge cases.”
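The "target profile" Urrutia describes — old deployments on outdated compilers — is exactly the kind of filter that is trivial to automate before pointing an AI agent at anything. As a hedged sketch of that triage step (the data structure and addresses are hypothetical; real pipelines would pull compiler metadata from verified-source registries):

```python
def is_legacy_target(contract: dict) -> bool:
    """Flag deployments compiled with pre-0.8 Solidity, which lacked
    default checked arithmetic and predates years of tooling fixes."""
    major, minor, _patch = (int(p) for p in contract["compiler"].split("."))
    return (major, minor) < (0, 8)

# Hypothetical deployment records for illustration.
deployments = [
    {"address": "0xOldVault", "compiler": "0.6.10"},   # Truebit-era profile
    {"address": "0xNewVault", "compiler": "0.8.21"},
]

targets = [c["address"] for c in deployments if is_legacy_target(c)]
print(targets)  # → ['0xOldVault']
```

Cheap pre-filters like this are why the long tail of "dusty deployments" becomes attractive: the expensive AI analysis only runs on contracts already flagged as likely soft targets.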

He’s not the only one to say so.

At the time, several other crypto security researchers speculated that the Truebit exploit, which targeted code deployed over five years prior, could have been identified with the help of AI.

Fighting back

The good news, experts say, is that there is plenty DeFi developers can do to fight back against AI-armed hackers. But it will require a fundamental change to what is considered secure.

“‘Audited once’ is no longer a serious security model,” Urrutia said. “If attackers can continuously re-scan the long tail of old contracts, then dormant risk becomes active risk again.”

Instead of DeFi protocols getting audited before their code goes live, they will need to invest in constant screening using the latest AI models to ensure potential exploits are discovered by the protocol’s developers first.

“If attackers are using AI to find vulnerabilities, defenders must do the same,” Ajayi said. “Automated adversarial testing — using AI agents to continuously probe production systems — will become standard practice, much like automated penetration testing is today.”

There has already been some success in using AI to catch code bugs. Last month, Octane Security, a self-described AI-native security firm, used its AI tool to find a high-severity bug in Nethermind, a client software that runs the Ethereum blockchain.

Even so, security researchers must still find ways to attribute whether AI played a role in exploits. “Without better audit trails and standardised logging for agent actions, defenders will remain behind,” Ajayi said.

For the builders themselves, the next few years could be the most challenging yet for the decentralised economy.

“If we manage to create something that doesn’t get exploited in the next decade, that’s good enough to create a nice tidy empire,” Firepan’s Hall said.

“The sad reality [is] most DeFi protocols won’t last so long.”

Tim Craig is DL News’ Edinburgh-based DeFi Correspondent. Reach out with tips at [email protected].
