AI Coding Assistants Become Silent Attack Vectors for Crypto Developers: Here's What You Need to Know
The Hidden Danger Inside Your Project Folder
What happens when you open a seemingly innocent cryptocurrency project folder? According to security researchers at SlowMist, you might be unknowingly executing malicious code embedded by attackers. The culprit: AI-powered coding tools like Cursor, Windsurf, and Kiro, which can be tricked into running malicious instructions hidden inside README.md and LICENSE.txt files.
HiddenLayer initially disclosed this vulnerability—termed the “CopyPasta License Attack”—in September, revealing how attackers embed malicious prompts in markdown comments. When developers open a project folder, the AI coding assistant automatically interprets these hidden instructions as legitimate code commands, executing malware without any user confirmation. The result? Complete system compromise before a single line of actual code is written.
Cursor users face particularly high exposure, with controlled demonstrations showing that attackers can achieve full system access the moment a victim opens a booby-trapped folder. This attack vector is especially devastating for crypto development environments, where wallets, API keys, and sensitive credentials are often stored alongside code repositories.
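The injection technique described above hides instructions where a human skims past but an AI assistant still reads: HTML comments and instruction-like phrases inside markdown and license files. As a defensive illustration, here is a minimal heuristic scanner for untrusted project folders. The file extensions and trigger phrases are assumptions chosen for illustration, not the exact strings used in real CopyPasta payloads.

```python
# Illustrative heuristic scanner for prompt-injection payloads hidden in
# markdown/text files (e.g. README.md, LICENSE.txt). The patterns below
# are assumptions for demonstration, not a complete signature set.
import re
from pathlib import Path

SUSPICIOUS_PATTERNS = [
    re.compile(r"<!--.*?-->", re.DOTALL),  # hidden HTML comments
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"(run|execute) the following (command|script)", re.IGNORECASE),
]

def scan_file(path: Path) -> list[str]:
    """Return suspicious snippets (truncated to 80 chars) found in one file."""
    text = path.read_text(errors="replace")
    hits = []
    for pattern in SUSPICIOUS_PATTERNS:
        hits.extend(m.group(0)[:80] for m in pattern.finditer(text))
    return hits

def scan_project(root: str) -> dict[str, list[str]]:
    """Scan markdown/text files under an untrusted project folder."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in {".md", ".txt"}:
            hits = scan_file(path)
            if hits:
                findings[str(path)] = hits
    return findings
```

A scanner like this only raises a flag for human review; it cannot replace disabling auto-execution features in the AI assistant itself.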
North Korean Threat Groups Weaponize Smart Contracts
The threat landscape intensifies when state-backed actors enter the picture. Google’s Mandiant team identified a group tracked as UNC5342—linked to North Korean operations—deploying sophisticated malware including JADESNOW and INVISIBLEFERRET across Ethereum and BNB Smart Chain networks. Their method is particularly insidious: payloads are stored in read-only functions within smart contracts, designed to avoid transaction logs and traditional blockchain tracking mechanisms.
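The stealth here comes from a property of the EVM: calling a read-only (view) function via `eth_call` returns data without creating an on-chain transaction, so payload retrieval leaves no trace in transaction logs. A defender who does capture such return data can apply a simple heuristic to it. The sketch below is an assumption-laden illustration: the marker strings are hypothetical, and real return data would come from an RPC node rather than a literal.

```python
# Heuristic check on bytes returned by a read-only contract call: does the
# return data decode to printable text containing script-like markers?
# Marker list is illustrative, not an actual malware signature.
SCRIPT_MARKERS = ("function", "eval(", "atob(", "XMLHttpRequest", "fetch(")

def looks_like_script_payload(data: bytes) -> bool:
    """Flag return data that decodes to mostly-printable text with JS markers."""
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return False  # binary data, not a text payload
    printable = sum(c.isprintable() or c.isspace() for c in text) / max(len(text), 1)
    if printable < 0.9:
        return False
    return any(marker in text for marker in SCRIPT_MARKERS)
```

In practice this check would sit behind an `eth_call` against the suspect contract; it illustrates why read-only storage is attractive to attackers—the retrieval step never touches the chain's transaction history.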
Developers unknowingly execute this malware simply by interacting with compromised smart contracts through decentralized platforms. The operation extends beyond on-chain attacks. BeaverTail and OtterCookie, two modular malware strains, were distributed through phishing campaigns masquerading as job interviews. Fake companies like Blocknovas and Softglide operated as fronts, delivering malicious code via NPM packages to unsuspecting engineers.
Silent Push researchers traced both fraudulent firms to vacant properties, exposing the “Contagious Interview” malware operation. Once a developer’s system becomes infected, it automatically transmits credentials and codebase data to attacker-controlled servers using encrypted channels.
AI Models Are Learning to Exploit Smart Contracts
The sophistication of attacks grows as AI capabilities expand. Recent testing by Anthropic revealed a troubling capability: advanced AI models successfully identified and exploited vulnerabilities in smart contracts at scale. Claude Opus 4.5 and GPT-5 discovered working exploits in 19 smart contracts deployed after their respective training cutoffs, simulating $550.1 million in potential damages.
Two zero-day vulnerabilities were identified in active BNB Smart Chain contracts valued at $3,694, discovered at a remarkably low cost of $3,476 in API expenses. The research indicates exploit discovery speed is doubling monthly, while costs per working exploit continue to decline—a dangerous trajectory for blockchain security.
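To make the trajectory concrete: if cost per working exploit halves each month (the inverse of the reported monthly doubling in discovery speed), the decline compounds quickly. The starting figure below is an assumption loosely derived from the reported API spend, purely for illustration.

```python
# Back-of-the-envelope projection of the reported trend: cost per working
# exploit halving monthly. Initial cost is an illustrative assumption.
def cost_per_exploit(initial_cost: float, months: int) -> float:
    """Cost per working exploit after `months` of sustained monthly halving."""
    return initial_cost / (2 ** months)

# An exploit costing ~$1,738 today would cost ~$27 after six months of
# sustained halving -- the "dangerous trajectory" the research describes.
```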
Scams Surge as AI-Generated Deepfakes Proliferate
The impact of AI-driven attacks extends beyond code exploitation. Chainabuse reported AI-powered crypto scams surged 456% year-over-year through April 2025, fueled by deepfake videos and convincing voice clones. Scam wallets now receive 60% of deposits from campaigns featuring AI-generated fake identities with real-time automated responses.
Attackers increasingly deploy bots simulating technical interviews to lure developers into downloading disguised malware tools. The social engineering component makes these attacks particularly effective against busy professionals juggling multiple projects.
However, December data from PeckShield offers a small silver lining: crypto-related hacks declined 60% to $76 million in December compared to November’s $194.2 million. Yet this reduction pales against the scale of AI-accelerated exploit discovery and scam proliferation documented throughout 2025.
What Crypto Developers Should Do Now
The convergence of AI coding tool vulnerabilities, state-sponsored smart contract attacks, and AI-generated scams creates an unprecedented threat environment for cryptocurrency development. Developers should treat untrusted project folders with extreme caution, verify NPM package sources, and implement strict separation between development environments and systems holding sensitive credentials. AI tools, for all their productivity benefits, have become potential liability vectors without proper operational security protocols in place.
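One concrete form of the NPM hygiene recommended above is auditing a dependency tree for lifecycle install hooks (`preinstall`, `install`, `postinstall`), a common delivery mechanism for malicious packages like those used in the Contagious Interview campaign. This is a minimal sketch under stated assumptions: it only walks top-level packages (scoped `@org/*` packages would need a deeper glob), and flagging a hook is a prompt for review, not proof of malice—many legitimate packages use install scripts.

```python
# Sketch: list installed npm packages that declare lifecycle install hooks,
# a frequent malware delivery vector. Top-level packages only; scoped
# packages (@org/*) would need an extra glob level.
import json
from pathlib import Path

LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_hooks(node_modules: str) -> dict[str, list[str]]:
    """Map package name -> the lifecycle hooks its package.json declares."""
    flagged = {}
    for manifest in Path(node_modules).glob("*/package.json"):
        try:
            meta = json.loads(manifest.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        scripts = meta.get("scripts") or {}
        hooks = [h for h in LIFECYCLE_HOOKS if h in scripts]
        if hooks:
            flagged[meta.get("name", manifest.parent.name)] = hooks
    return flagged
```

Pairing a check like this with `npm install --ignore-scripts` and a credential-free development environment addresses the separation the researchers recommend.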