AI Auditing Enters Practical Use: OpenAI Releases EVMbench to Benchmark Smart Contract Security


OpenAI Collaborates with Paradigm to Launch EVMbench, Testing AI Agents’ Defense and Attack Capabilities in EVM Contracts, Revealing Strengths and Weaknesses.

Focusing on Real-World Economic Environment Testing, OpenAI and Paradigm Enhance On-Chain Security Ratings

Leading AI company OpenAI announced a partnership with well-known cryptocurrency venture capital firm Paradigm and security firm OtterSec to launch EVMbench, a benchmark tool designed to evaluate the security performance of AI agents in Ethereum Virtual Machine (EVM) smart contracts.

As AI and blockchain technologies converge, smart contracts have become core infrastructure managing over $100 billion in crypto assets. The release of this tool signals that the industry is beginning to measure AI's practical capabilities in economically meaningful environments.

The OpenAI team notes that, with AI agents rapidly advancing in coding and planning, these models will play a transformative role in blockchain attack and defense. Establishing a standardized evaluation framework is therefore crucial for monitoring AI progress.

Three Deep Testing Modes with 120 Real Audit Vulnerabilities as the Benchmark

EVMbench’s core design centers around 120 high-risk vulnerabilities extracted from 40 professional audit reports. Data sources include well-known public audit competitions like Code4rena, ensuring testing scenarios closely resemble real-world complexity. The benchmark evaluates AI agents in three different operational modes:

Image source: OpenAI. EVMbench's core design evaluates AI agents in three different modes.

  • The first is “Detection Mode,” where AI audits contract codebases and identifies known vulnerabilities, assigning scores based on the severity of issues found;
  • The second is “Patch Mode,” challenging AI to remove exploitable vulnerabilities and repair code without altering existing functionality;
  • The final, highly controversial mode is “Exploit Mode,” where AI must execute end-to-end fund theft attacks within sandboxed blockchain environments.

To ensure rigorous and repeatable testing, the team developed a Rust-based testing framework that uses deterministic transaction replay techniques to verify whether AI’s attacks or patches succeed.
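The verification step described above can be illustrated with a minimal sketch: replay a fixed transaction sequence against a cloned sandbox state, then check whether the attacker ended up with more funds than they started with. All names here (`Ledger`, `Tx`, `replay`, `exploit_succeeded`) are illustrative assumptions for this article, not the actual EVMbench API.

```rust
use std::collections::HashMap;

// Minimal stand-in for a forked, sandboxed EVM state.
// This is an illustrative model, not the EVMbench framework itself.
#[derive(Clone)]
struct Ledger {
    balances: HashMap<&'static str, u64>,
}

#[derive(Clone)]
struct Tx {
    from: &'static str,
    to: &'static str,
    amount: u64,
}

impl Ledger {
    // Deterministically replay a fixed transaction sequence on a
    // cloned state, so identical inputs always yield identical results.
    fn replay(&self, txs: &[Tx]) -> Ledger {
        let mut state = self.clone();
        for tx in txs {
            let from_balance = state.balances.get(tx.from).copied().unwrap_or(0);
            // Skip transfers the sender cannot cover.
            if from_balance >= tx.amount {
                *state.balances.entry(tx.from).or_insert(0) -= tx.amount;
                *state.balances.entry(tx.to).or_insert(0) += tx.amount;
            }
        }
        state
    }
}

// An exploit "succeeds" if the attacker ends with more funds than it started.
fn exploit_succeeded(before: &Ledger, after: &Ledger, attacker: &str) -> bool {
    after.balances.get(attacker).copied().unwrap_or(0)
        > before.balances.get(attacker).copied().unwrap_or(0)
}

fn main() {
    let mut balances = HashMap::new();
    balances.insert("vault", 1_000u64);
    balances.insert("attacker", 0u64);
    let genesis = Ledger { balances };

    // Candidate exploit: drain the vault in one deterministic step.
    let exploit = vec![Tx { from: "vault", to: "attacker", amount: 1_000 }];
    let after = genesis.replay(&exploit);
    println!("{}", exploit_succeeded(&genesis, &after, "attacker")); // prints "true"
}
```

Because replay is deterministic, the same check can score both sides of the benchmark: an exploit submission must flip the predicate to true, while a successful patch must keep it false under the same transaction sequence.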

A Clear "Strong on Attack, Weak on Defense" Trend; GPT-5.3-Codex Shows Remarkable Growth in Attack Capability

Initial test results reveal a clear performance gap across different tasks. The latest GPT-5.3-Codex performs exceptionally well in Exploit Mode, scoring as high as 72.2%, a dramatic improvement compared to GPT-5, released just six months earlier, which scored 31.9%.

Image: overview of scores for the various AI models across the three modes.

This indicates that when the goal is explicitly "draining funds," AI demonstrates strong iterative planning and execution capabilities. On the defense side, however, performance is comparatively weak: in Detection Mode, AI often stops searching after finding a single flaw, and in Patch Mode it struggles to repair complex logic without breaking normal contract operation. Security experts warn that AI could sharply shorten the time from vulnerability discovery to working exploit, raising the bar for DeFi project defenses.

Talent Acquisition and Defense Funding, OpenAI’s Strategy for AI Agent Ecosystem Security

Beyond tool development, OpenAI is actively investing in talent and ecosystem defense. It recently hired Peter Steinberger, founder of the open-source AI agent project OpenClaw, to lead the development of next-generation personalized agents, transitioning the project into an OpenAI-supported foundation.

To address potential cybersecurity risks posed by AI, OpenAI has committed a $10 million API budget through its cybersecurity grant program to support open-source defense tools and critical-infrastructure research. The move is particularly timely following the recent Moonwell protocol incident, in which an error in AI-generated code caused approximately $1.78 million in losses.

Further Reading
  • Refusing Meta’s Billion-Dollar Offer, OpenClaw Creator Joins OpenAI in Talent Race
  • Is Vibe Coding to Blame? Moonwell Oracle Fails, Who Will Cover the $1.78M Loss?

Looking ahead, as more AI-assisted stablecoin payment agents and automated wallets join the ecosystem, the ability of tools like EVMbench to distinguish models that merely describe vulnerabilities from those that can reliably deliver defenses will become a critical turning point in blockchain security.


Related Articles

Resolv Labs Pauses Protocol After $23M Exploit Triggers USR Stablecoin Depeg

Resolv Labs halted its decentralized finance (DeFi) protocol early Sunday morning after an exploit allowed an attacker to mint tens of millions of unbacked USR stablecoins, sending the token sharply off its dollar peg. What Caused the Resolv Labs Hack and USR Depeg? The incident struck the Resol

Coinpedia · 1h ago

GMX Labs is publicly recruiting a CEO, with a total annual compensation of up to approximately $700,000.

GMX Labs approved a leadership structure upgrade proposal through DAO voting on March 22nd, with 96.42% support. The proposal aims to address team expansion and market competition, accelerating the shift toward a traditional leadership model. It plans to publicly recruit a CEO responsible for strategy development and partnerships, with a salary of $150,000 to $200,000, supplemented by performance incentives linked to protocol fees. During the transition period, operations will be maintained by a temporary leadership committee.

GateNews · 1h ago

Fluid Suspends USR Market Trading Due to Resolv Hack Incident, Commits to Full Compensation for Potential Bad Debts

Gate News reported that on March 22, DeFi protocol Fluid released an announcement stating that it learned of the Resolv hacker incident. Fluid's automatic credit limit mechanism prevented excessive borrowing of funds, and the USR market has been suspended from trading with the situation under control. Fluid stated that if there are any bad debts remaining on the protocol, all user losses will be fully compensated. User funds and protocol security are Fluid's top priorities, and a comprehensive review is currently underway. A detailed post-mortem analysis report will be released after the investigation concludes.

GateNews · 3h ago

TRON received the "Web3 Leading Enterprise Award" at the SFFE2030 Summit, accelerating its move toward the core infrastructure of the AI era.

TRON received the "Web3 Leading Enterprise Award" at the SFFE2030 Summit, highlighting its infrastructure position in global digital finance and emerging industries. With stablecoin settlement and efficient on-chain payment capabilities, TRON is expanding toward artificial-intelligence economic infrastructure, focusing on future machine-to-machine (M2M) payment needs to drive further development of the digital economy.

動區BlockTempo · 3h ago

Aave founder: Aave has no risk exposure to Resolv's stablecoin USR

Aave founder Stani.eth stated that Aave has no risk exposure to Resolv's stablecoin USR, and collateral assets are secure. Resolv has begun an orderly exit and debt repayment, with no impact on Aave's liquidity providers or the protocol.

GateNews · 4h ago