AI did not invent any new attacks or any new economic vulnerabilities. It did one thing: it dropped the cost and knowledge requirements for attackers by orders of magnitude, and made execution possible for anyone with a subscription and malicious intent. In 2025 alone, the news covered AI attacks that hit the Mexican government [1], seventeen healthcare and emergency services organizations [2], and eighty-five ransomware victims of one amateur in Algeria [2]. It is also happening in crypto today. And crypto is the only place we will be able to count it.
AI levels the playing field
Most coverage of AI in security right now picks one of two frames.
- Utopian - better audits, fewer bugs, safer code.
- Apocalyptic - autonomous superhackers finding novel zero-days that nobody has ever seen.
Both frames miss what is actually happening. Frontier models in 2026 are producing the same kinds of findings as the static analyzers we have had for a decade. They just produce more of them, faster, at a lower marginal human cost. Daniel Stenberg, the curl maintainer who recently put one of the most hyped frontier models on his own codebase, said: “the AI tools find the usual and established kind of errors we already know about. It just finds new instances of them” [3].
The attack catalogue itself is the same one we have been losing money to since 2021, before mass AI adoption. Oracle manipulation. Governance capture. Flash-loan-driven economic exploitation. Social engineering. Credential harvesting. Classic web vulnerabilities. AI did not add a single line item. What it reduced is the labor needed to operate any of them. An elite Solidity auditor costs about $25,000 per engineer-week, per the ARDC procurement benchmarks [4]. Call it $500 an hour. The same surface coverage on a frontier model runs about $1.22 per contract on average in API tokens, per Anthropic’s own published figures, and the per-exploit token cost is falling roughly 22% every model generation, or about every two months [5]. The skill required to spot a flash-loan governance attack has not gone down. The cost to run one has.
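That cost trajectory compounds quickly. A back-of-envelope sketch, taking the cited figures at face value: $1.22 per contract today, a 22% drop per model generation, one generation roughly every two months. The starting cost and decay rate are the article's numbers; everything else is simple extrapolation, not a forecast.

```python
# Back-of-envelope projection of per-contract AI scanning cost, using
# the figures cited above: $1.22 today, -22% per model generation,
# one generation about every two months.
START_COST = 1.22        # USD per contract (Anthropic-reported average)
DECAY = 0.78             # cost multiplier per generation (22% drop)
MONTHS_PER_GEN = 2

def projected_cost(months: float) -> float:
    """Projected per-contract token cost `months` months from now."""
    return START_COST * DECAY ** (months / MONTHS_PER_GEN)

for m in (0, 6, 12, 24):
    print(f"month {m:>2}: ${projected_cost(m):.2f}")
```

Under those assumptions the per-contract cost falls below ten cents within two years. If the rate holds even approximately, the interesting variable stops being cost and becomes throughput.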
AI did not break the floor. The floor was never knowledge. The floor was always a price tag on attacker labor, and now the price is a subscription. AI did not democratize hacking. It just billed it monthly.
Random people, real hacks, this year
The clearest evidence the floor is now a subscription is in the confirmed cases from the last twelve months. Three of them stand out.
The Mexican government, December 2025 to January 2026. A solo operator (no nation-state backing, no custom malware, no observable ties to foreign intelligence per Gambit Security) jailbroke Claude Code into a “bug-bounty researcher” persona and ran more than 1,000 prompts against it [1], [6]. When Claude refused on safety grounds, ChatGPT was used as a backup. The result: 20 vulnerabilities exploited across the federal tax authority (SAT), the National Electoral Institute, and state governments in Jalisco, Michoacán, and Tamaulipas. 150 gigabytes of data exfiltrated. 195 million taxpayer records. Voter rolls. Government employee credentials. The largest known single-operator data breach in Mexican history was executed with two commercial AI subscriptions and persistence.
The “vibe hacking” case, August 2025. Anthropic’s own threat intelligence team disclosed that a single cybercriminal used Claude Code as the operational core of an end-to-end extortion campaign against 17 organizations across healthcare, emergency services, government, and religious institutions [2]. Claude made tactical and strategic decisions. Which credentials to harvest. Which lateral movements to attempt. Which data to exfiltrate. How to phrase the psychologically tailored ransom note. The autonomy ratio is the part most coverage missed. This was not Claude as autocomplete. This was Claude as field operator.
The Algerian amateur, in the same Anthropic report [2]. Someone with no track record of writing working malware used Claude to develop, troubleshoot, package, and sell it. The packages sold on dark-web forums for $400 to $1,200. Eighty-five victims in his first month. The Anthropic write-up is explicit: “without Claude’s assistance, they could not implement or troubleshoot core malware components.”
None of these three operators are hackers by any traditional definition. None of them invented anything. They all subscribed to Claude. The catalogue stayed the same. The barrier to entry collapsed.
Crypto as the perfect case study of AI hacking impact
Crypto enters the story now, but not because it is more vulnerable than government data systems or healthcare networks. The Mexican government case is the largest single-operator incident of the year by record count. Crypto matters because it is more measurable.
Public ledger. Deterministic execution. Open-source by default. Every smart contract is verifiable on Etherscan. Every exploit is timestamped. Every attacker and transaction leaves a trail in the block explorer. There is no other large-scale economic system where the offense/defense curve under AI uplift can be observed in the open, in real money, with adversarial ground truth.
There are three legitimate denominators for the money already lost to the pre-AI version of this dynamic. The numbers do not reconcile, because each uses a different definition.
- $11.9 billion in tracked smart-contract exploits across 2021 to 2025, per Immunefi’s 2026 State of Onchain Security report (425 incidents, strict smart-contract definition) [7].
- Roughly $30 billion if you include scams and fraud, per Chainalysis aggregates [8].
- $68 billion or more if you count exchange and protocol collapses, per Molly White’s Web3IsGoingJustGreat [9].
I use $11.9 billion as the primary anchor in the rest of this piece because Immunefi’s is the strictest definition. The other two are upper bounds, and they exist for a reason.
Crypto is not the easiest place to hack. It is the most transparent place to trace a hack.
Open source plus money equals the perfect target and the perfect case study
Three things make crypto the cleanest mass-scanning target in software.
Surface area. Roughly 60 million smart contracts deployed on Ethereum, per Wang et al.’s 2024 measurement study [10]. Layer-2 deployments add another order of magnitude. Flipside Crypto counted more than 637 million EVM contracts across seven L2s by 2024 [11]. Etherscan’s daily verified-contract count peaked at 602 in 2023 [12]. The human-auditor workforce that covers this surface is, charitably, in the low thousands worldwide.
Forensic transparency. Every prior exploit has a public record. Every attacker transaction is replayable from the block explorer. The training corpus for an attacking model is not “the public internet.” It is a curated, RAG-ready, dollar-priced exploit-and-defense dataset built by Trail of Bits, OpenZeppelin, PeckShield, BlockSec, Halborn, and the entire DeFi-security community over five years. Variant analysis (starting from a known prior bug) is dramatically more tractable for an LLM than open-ended discovery. This is the structural lesson from Google Big Sleep finding a real SQLite zero-day in October 2024 [13]. Crypto post-mortems are exactly that corpus.
Economic density per line of code. A 500-line Solidity contract can hold $200 million of TVL. The same density does not exist in the average Linux kernel module or Express.js handler, outside the occasional obscure open-source library the whole internet quietly depends on. The expected value of a successful mass-scan-and-exploit pipeline is therefore higher per token spent in crypto than in essentially any other software domain. This is why an AI-enabled attacker rationally targets DeFi first, and why we should expect much more of this in the coming years.
Where problems will start to leak first
The existing research and published evidence break AI hacking potential into three categories. Each has a measurable trajectory and at least one real case to anchor it.
Mass scanning
Anthropic’s SCONE-bench, published December 1, 2025, is the cleanest data point in the public record [5]. 405 smart contracts scanned. 207 successfully exploited (51.11%). More than $550 million in simulated theft revenue. In a parallel experiment on 2,849 freshly deployed Binance Smart Chain contracts with no known prior vulnerabilities, the agents independently uncovered two novel zero-day vulnerabilities. Caveat: this is self-reported by Anthropic, with partial corroboration from the AI Safety Institute and the CETaS Turing Institute [21]. It has not yet been independently verified at scale.
The held-out subset is the part you cannot wave away. 34 smart contracts deployed after the model’s training cutoff. 19 of them exploited (55.88%), yielding a maximum of $4.6 million in simulated stolen funds across Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 [5]. The trajectory: 2% to 55.88% on post-cutoff vulnerabilities in twelve months. Exploit revenue doubling every 1.3 months. Per-exploit token cost falling 22% every model generation.
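A doubling time of 1.3 months is easy to underestimate. A quick sanity check on what that rate implies over a year, assuming nothing beyond the doubling time cited above:

```python
# What a 1.3-month doubling time implies, if it holds for a year.
# The doubling time is the figure cited above; the rest is arithmetic.
DOUBLING_MONTHS = 1.3

def growth_over(months: float) -> float:
    """Multiplicative growth after `months` months of steady doubling."""
    return 2 ** (months / DOUBLING_MONTHS)

annual = growth_over(12)   # roughly 600x over twelve months
print(f"{annual:.0f}x per year")
```

No exponential holds forever. But even a few quarters at that rate turns a $4.6 million simulated haul into something that dwarfs any human-attacker baseline, which is why the doubling time, not the current dollar figure, is the number to watch.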
Lower expertise floor, higher false confidence
Perry, Srivastava, Kumar, and Boneh published the canonical study at ACM CCS in 2023 [14]. 47 Stanford participants. The finding: AI-assisted writers produced less secure code on 4 of 5 tasks, and were more likely to believe their code was secure. The floor falls in two directions at once. Producers ship more vulnerabilities. Attackers spot them faster.
The Mexican government case is what a lower expertise floor looks like in production. Solo operator, government target, this year [1], [6]. The operator was not an exploit developer. He was a prompt engineer who jailbroke a chatbot into role-playing a bug-bounty researcher and pointed it at SAT and INE. 1,000+ prompts later, 195 million records were on a server he controlled.
The crypto-native parallel is Avraham Eisenberg. Mango Markets. October 2022. $5 million of his own USDC. Two accounts. Both sides of MNGO perpetuals. Three exchanges feeding the Mango oracle. Thirty minutes. $114 million out. On May 23, 2025, Judge Subramanian vacated Eisenberg’s conviction in a 35-page Rule 29 opinion [15]. No false statement. The system worked as designed and the court agreed. Eisenberg needed a trader’s toolkit and millions of dollars of his own capital. The next Eisenberg needs an API key, a jailbreak prompt, and a flash-loan provider that will spot him the capital. Eisenberg’s playbook today is just a prompt.
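The mechanics of the Mango-style attack reduce to one inequality: the mark-to-market gain on a perp position at the manipulated oracle price must exceed the cost of moving the thin spot markets that feed the oracle. A toy model with entirely hypothetical numbers (nothing here reflects the actual Mango figures):

```python
# Toy model of an oracle-manipulation economic attack. All numbers are
# hypothetical. The attacker pushes a thin spot market that feeds the
# oracle, then withdraws against perp PnL marked at the inflated price.
def manipulation_profit(notional: float, p0: float, p1: float,
                        push_cost: float) -> float:
    """Unrealized long-perp PnL at the inflated mark p1, minus the
    spend needed to move the spot price from p0 to p1."""
    pnl = notional * (p1 / p0 - 1)
    return pnl - push_cost

# Hypothetical: $5M notional long, oracle pushed 20x,
# $4M spent moving the thin spot books.
profit = manipulation_profit(notional=5e6, p0=0.05, p1=1.00, push_cost=4e6)
```

The asymmetry is the whole attack: the push cost scales with spot-market depth, while the PnL scales with whatever perp notional the protocol lets you hold against it.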
Coordinated economic attack
Calvano, Calzolari, Denicolò, and Pastorello discovered that Q-learning algorithms in a Bertrand oligopoly autonomously learn supracompetitive prices via tit-for-tat-like punishment, without communicating [16]. Fish, Gonczarowski, and Shorrer (2024) extended the result from Q-learning to LLM agents [17]. This is not collusion as the law understands it. It is collusion as physics.
The on-chain analog. Beanstalk, April 2022: $182 million drained, 67% of STALK voting power acquired via flash loan in a single transaction, malicious BIP executed via emergencyCommit [18]. Compound’s “Golden Boys,” July 2024: Proposal 289, attempting to redirect 5% of the treasury (~$24 million in COMP) to a Humpy-controlled vault, passed 682,191 votes to 633,636. Averted only by a counter-proposal [19]. These are not bugs. They are the system running correctly against its participants.
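The Beanstalk pattern is worth stating as arithmetic, because the economics are the scandal: when borrowing, voting, and executing all fit in one atomic transaction, the price of a supermajority is the flash-loan fee, not the capital. A toy model with hypothetical parameters (the real incident's numbers differ):

```python
# Toy model of a flash-loan governance capture, hypothetical numbers.
# Everything happens in one atomic transaction, so the attacker pays
# only the loan fee, never the principal.
def capture_cost(total_voting_supply: float, threshold: float,
                 fee_bps: float) -> float:
    """Fee to flash-borrow enough tokens to hold `threshold` of the
    vote, assuming a lender with sufficient inventory."""
    needed = total_voting_supply * threshold
    return needed * fee_bps / 10_000

# Hypothetical: $300M of circulating voting tokens, 67% supermajority,
# 9 bps flash-loan fee -> about $181k to command about $201M of votes.
cost = capture_cost(3e8, 0.67, 9)
```

Timelocks exist precisely to break the atomicity this depends on: if the vote cannot execute in the same transaction it passes, the flash loan cannot be repaid, and the trick suddenly costs real capital.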
Three orders of magnitude cheaper. Twenty-two percent cheaper every two months. The floor is paper.
Are we all doomed?
That is the honest question after everything above. AI defense vendors say no. AI safety researchers say maybe. AI attackers say yes-please.
To me, it depends entirely on whether the defensive AI claims survive contact with a real codebase. Most of them do not. The ones that do produce a very specific shape of defensive ecosystem, which crypto already has and which most other domains do not.
The defense scaling story is real when it is stated honestly. The DARPA AI Cyber Challenge Final at DEF CON 33, August 8, 2025 [20]. Team Atlanta won the $4 million top prize with their Atlantis cyber-reasoning system. Across 54 million lines of code, the seven finalists discovered 54 of 63 synthetic vulnerabilities (86%) and patched 43 of them (68%), plus 18 real zero-days in production open-source software. Google’s Big Sleep, October 2024, found a real stack-buffer-underflow zero-day in SQLite pre-release [13]. Immunefi has paid more than $110 million cumulatively to white-hat researchers [7]. None of this is dismissible. The AI defense pipeline is real.
Then comes the reality check. Daniel Stenberg, lead maintainer of curl, published the cleanest independent test of Anthropic’s most hyped frontier model so far [3]. Anthropic had spent April 2026 calling Mythos “dangerously good” at finding security flaws, restricting it via Project Glasswing, and marketing it as the model that could find vulnerabilities in “every major operating system and web browser” [21]. Stenberg got access through the Linux Foundation’s Alpha Omega program. Mythos scanned 178,000 lines of curl source code (one of the most fuzzed and audited C codebases in existence) and reported five “confirmed” vulnerabilities. After Stenberg’s security team triaged them, the count collapsed to one real low-severity CVE. Three of the five were false positives. The fourth was an ordinary bug. The fifth is the small CVE scheduled for the next curl release. Stenberg’s own verdict: “the big hype around this model so far was primarily marketing.” His structural observation: “the AI tools find the usual and established kind of errors we already know about. It just finds new instances of them” [3]. The most sophisticated defensive AI model on the planet, on the highest-value C codebase in the world, found one small thing.
How to tell hype from reality when an AI-security claim crosses your desk. Three filters. First: does the model find a novel class of vulnerability, or new instances of known ones? In every published case so far, the answer is the latter. Second: does the output require expert triage to be useful, and at what ratio? Stenberg’s team triaged 5 down to 1. An 80% false-positive rate on a model marketed as “dangerously accurate.” Third: is the defensive tool available to the same population as the offensive tool? Project Glasswing’s gated rollout versus anyone with a Claude or ChatGPT subscription answers that one. Run a defensive AI claim through those filters and most of them stop looking dangerous and start looking like procurement copy.
The chicken-and-egg problem made explicit. AI defense tools need expert humans to triage their output. Stenberg’s team are the best in the world at the curl codebase and they still needed hours per finding to separate real problems from noise. AI attack tools do not need expert humans to operate them. The Mexican government operator was not an expert. The vibe-hacking ransomware creator was not an expert. The Algerian could not troubleshoot his own malware without a chatbot. Defense is gated, expensive, expert-dependent, slow to integrate. Offense is a subscription, a jailbreak prompt, and a patient afternoon.
There is exactly one domain where the defensive picture is structurally different. Open-source software with economic incentives attached. In crypto, the same on-chain transparency that makes the attack surface scannable also makes the defense surface scannable. Anyone can audit anyone. Whitehats and blackhats use the same training corpus, the same block explorer, the same flash-loan provider. Defense and offense draw from the same pool of public data, and the economic incentives that fund the defender side are programmable. That is what an open-source transparency layer with economic incentives attached produces. It is the only defensive model that has any chance of keeping pace with offense at AI cost ratios, and it only exists in crypto. The ledger pays the defenders, and the same traceability lets law enforcement make life hard for attackers after the fact.
Crypto as the lab rat, and what scales next
Anthropic’s Responsible Scaling Policy v3 [22], the EU AI Act’s dual-use provisions (next phase August 2026) [23], the NIST AI Risk Management Framework [24], the AI Safety Institute’s “The Last Ones” benchmark [25], the CETaS Turing Institute’s April 2026 Mythos analysis [21], all of them contain language about catastrophic cyber capabilities. None of them have a public adversarial laboratory. They are theory documents trying to score a game they cannot watch being played.
Crypto is the laboratory. Continuously running. Fully instrumented. Adversarial ground truth. Real money on the line. The on-chain record from The DAO (2016) to Bybit (2025) is the only at-scale, long-duration empirical dataset where the offense/defense balance under AI uplift can be measured retrospectively and prospectively. Vitalik framed Ethereum’s role in the AI era as “the economic-coordination layer for AI agents” [26]. The corollary is that it is also the only public test environment for AI-versus-economic-mechanism design at scale. Dynamic-timelock and rage-quit-gate patterns already exist in production (the Lido dual governance architecture activated June 2025 is one such reference) [27].
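The dual-governance pattern in [27] is simple enough to state as a state machine. A minimal sketch using the two thresholds given in the LIP-28 reference (1% of TVL escrowed pauses execution, 10% forces a rage-quit path); the function names and structure here are illustrative, not Lido's production implementation:

```python
# Minimal sketch of a Lido-style dual-governance gate. Thresholds are
# the LIP-28 figures from the references; names are illustrative, not
# the production implementation.
FIRST_SEAL = 0.01    # 1% of TVL escrowed: proposal execution pauses
RAGE_QUIT = 0.10     # 10% of TVL escrowed: exits settle before execution

def governance_state(escrowed_tvl_frac: float) -> str:
    """Map the fraction of TVL escrowed in dissent to a governance mode."""
    if escrowed_tvl_frac >= RAGE_QUIT:
        return "rage-quit"        # dissenters exit before the proposal lands
    if escrowed_tvl_frac >= FIRST_SEAL:
        return "veto-signalling"  # timelock extends while dissent accumulates
    return "normal"
```

The relevance to the AI era is that the gate is priced in the unit the attacker optimizes: capital velocity. A flash loan cannot wait out a timelock that lengthens as dissent accumulates.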
Where this goes next
The first extrapolation. More individual operators. The Mexican government case is one solo operator. The vibe-hacking case is one cybercriminal. The Algerian is one amateur. The honest reading is that these are the first three to make the headlines, not the last three to exist. The cost of running a Mexican-government-scale operation in 2026 is a Claude Code subscription plus a few hundred dollars of API credit. The skill required is prompt engineering plus persistence. Both inputs are widely available. The bounded version of this claim: it would be surprising if the public record in mid-2027 did not contain at least five more comparable single-operator AI-enabled breaches across government, healthcare, and financial infrastructure. The unbounded version: we have no way to know how many are already underway and undiscovered.
The second extrapolation. Coordinated groups with this tooling at scale. Solo operators are the proof of concept. The structural risk is what happens when organized crime groups, ransomware syndicates, and state-aligned actors pick up the same playbook. GTG-1002 (Anthropic, November 2025) is one early data point in this category [28]. A Chinese state-linked actor reportedly running an 80-90%-autonomous kill chain against approximately 30 large enterprises and government bodies. Treat that attribution as an Anthropic internal finding rather than independently corroborated. A coordinated group does not need to run one Mexican-government-scale operation. They can run a hundred in parallel against a hundred targets, with AI handling reconnaissance, exploitation, and post-exploitation in lockstep. The hedge matters: we do not yet have a clean public dataset on coordinated-group AI usage, only first-mover disclosures. The category is structurally possible and partially observed. The question is throughput, not feasibility.
If you want a fancy quote:
The next billion-dollar economic exploit will not be run by an expert. It will be run by a subscription.
Extrapolation outside crypto, hedged. If $11.9 billion is what we can see on-chain, the off-chain analog is presumptively in the same order of magnitude or larger, but we cannot count it. The Mexican government operator pulled 150 GB in five weeks against one country. The same playbook can target ad auctions, supply-chain payment systems, dynamic-pricing engines, prediction markets, insurance pricing models, and agent-to-agent commerce. None of those systems publish their exploit ledger. Crypto is the only place the rate is countable, which is exactly why crypto matters to anyone trying to design AI policy that survives contact with reality. Crypto is not a casino. It is a wind tunnel for everything else.
Four signals worth tracking over the next twelve months
The first: on-chain attacker skill-floor signals. Shrinking time from deployment to exploit. First-time attacker wallets executing sophisticated economic plays. Code-reuse fingerprints across unrelated exploits.
The second: the cost ratio of AI versus human auditor coverage on the Web3Bugs benchmark at equivalent precision and recall. When that ratio crosses 10,000x, the audit market changes shape, not just price.
The third: the SCONE-bench-style detection curve per new model generation. If the doubling time stays near 1.3 months, the rest of the argument follows mechanically.
The fourth: multi-agent reinforcement-learning agents finding sustained equilibria in MEV, oracle, and governance markets that outperform the human-and-bot baselines. The first published result in that category is the canary.
AI did not invent any new attacks. It did not create a single new vulnerability. It billed the old ones monthly and gave access to everyone. AI changed the floor from knowledge to a price tag on attacker labor. And AI is dropping that price by orders of magnitude. Crypto is where we can count it. Everywhere else, it is happening too. We just cannot see the ledger.
References
[1] SecurityWeek, “Hackers Weaponize Claude Code in Mexican Government Cyberattack,” Feb. 2026. [Online]. Available: https://www.securityweek.com/hackers-weaponize-claude-code-in-mexican-government-cyberattack/
[2] Anthropic, “Threat Intelligence Report: August 2025,” Anthropic, Aug. 27, 2025. [Online]. Available: https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf
[3] D. Stenberg, “Mythos finds a curl vulnerability,” daniel.haxx.se, May 11, 2026. [Online]. Available: https://daniel.haxx.se/blog/2026/05/11/mythos-finds-a-curl-vulnerability/
[4] Trail of Bits and OpenZeppelin, “Arbitrum Research and Development Collective (ARDC) procurement-grade pricing benchmarks,” 2024. Approximately $25,000 per engineer-week for senior smart-contract auditing.
[5] W. Xiao, C. Killian, H. Sleight, A. Chan, N. Carlini, and A. Peng, “AI agents find $4.6M in blockchain smart contract exploits,” Anthropic Red Team / MATS / Anthropic Fellows program, Dec. 1, 2025. [Online]. Available: https://red.anthropic.com/2025/smart-contracts/
[6] P. Paganini, “Claude code abused to steal 150GB in cyberattack on Mexican agencies,” SecurityAffairs, Feb. 2026. [Online]. Available: https://securityaffairs.com/188696/ai/claude-code-abused-to-steal-150gb-in-cyberattack-on-mexican-agencies.html
[7] Immunefi, “2026 State of Onchain Security,” Immunefi, Jan. 2026. 425 publicly disclosed exploits 2021-2025 totaling $11.9 billion; cumulative whitehat payouts exceed $110 million across 330+ projects and 45,000+ researchers.
[8] Chainalysis, “2026 Crypto Crime Report,” Chainalysis, Feb. 2026. 2025 stolen funds totaled $3.4 billion; cumulative DPRK take all-time, $6.75 billion.
[9] M. White, “Web3 Is Going Just Great,” web3isgoinggreat.com. (Cumulative loss tracker, broader scope including exchange and protocol collapses.) [Online]. Available: https://web3isgoinggreat.com
[10] Z. Wang, X. Chen, Y. Chen, et al., “Characterizing Ethereum Upgradable Smart Contracts and Their Security Implications,” arXiv:2403.01290, Mar. 2024. (Measurement study covers 60,251,064 Ethereum smart contracts.) [Online]. Available: https://arxiv.org/abs/2403.01290
[11] Flipside Crypto, “EVM Layer-2 deployment statistics,” Flipside Crypto, 2024. More than 637 million EVM contracts across 7 L2 chains; Optimism alone hosted approximately 70% in 2024 YTD.
[12] Etherscan, “Daily Verified Contracts Chart,” etherscan.io. All-time peak of 602 verified Solidity contracts deployed in a single day in 2023. [Online]. Available: https://etherscan.io/chart/verified-contracts
[13] Google Project Zero and Google DeepMind, “From Naptime to Big Sleep: Using Large Language Models To Catch Vulnerabilities In Real-World Code,” Google Project Zero, Oct. 2024. [Online]. Available: https://projectzero.google/2024/10/from-naptime-to-big-sleep.html
[14] N. Perry, M. Srivastava, D. Kumar, and D. Boneh, “Do Users Write More Insecure Code with AI Assistants?” in Proc. 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS ‘23), Copenhagen, Denmark, Nov. 2023. 47 Stanford participants on codex-davinci-002. [Online]. Available: https://arxiv.org/abs/2211.03622
[15] United States v. Eisenberg, No. 23 Cr. 10 (S.D.N.Y. May 23, 2025), Opinion and Order on Rule 29 Motion for Acquittal (Subramanian, J.), 35 pp. [Online]. Available: https://nysd.uscourts.gov/sites/default/files/2025-05/23cr10%20Opinion%20and%20Order.pdf
[16] E. Calvano, G. Calzolari, V. Denicolò, and S. Pastorello, “Artificial Intelligence, Algorithmic Pricing, and Collusion,” American Economic Review, vol. 110, no. 10, pp. 3267-3297, Oct. 2020.
[17] S. Fish, Y. A. Gonczarowski, and R. I. Shorrer, “Algorithmic Collusion by Large Language Models,” arXiv:2404.00806, Apr. 2024. [Online]. Available: https://arxiv.org/abs/2404.00806
[18] CoinDesk, “Attacker Drains $182M From Beanstalk Stablecoin Protocol,” Apr. 17, 2022. See also PeckShield and Omniscia post-mortems documenting the flash-loan governance attack and emergencyCommit exploitation of BIP-18. [Online]. Available: https://www.coindesk.com/tech/2022/04/17/attacker-drains-182m-from-beanstalk-stablecoin-protocol
[19] The Block, “$24 million Compound Finance proposal passed by whale over DAO objections,” Jul. 29, 2024. Proposal 289 vote: 682,191 in favor, 633,636 against. [Online]. Available: https://www.theblock.co/post/307943
[20] DARPA, “AI Cyber Challenge marks pivotal inflection point for cyber defense,” DARPA, Aug. 2025. Team Atlanta (Georgia Tech, KAIST, POSTECH, Samsung Research) won the $4 million top prize with the ATLANTIS cyber-reasoning system; 54 of 63 synthetic vulnerabilities discovered (86%) and 43 patched (68%) across 54 million lines of code. [Online]. Available: https://www.darpa.mil/news/2025/aixcc-results
[21] CETaS, “Claude Mythos: What Does Anthropic’s New Model Mean for the Future of Cybersecurity?” Centre for Emerging Technology and Security, The Alan Turing Institute, Apr. 2026.
[22] Anthropic, “Responsible Scaling Policy v3.0,” Anthropic, Feb. 2026.
[23] European Parliament and Council of the European Union, “Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (AI Act),” Official Journal of the European Union, Jul. 12, 2024. Dual-use provisions in next implementation phase scheduled for August 2026.
[24] National Institute of Standards and Technology, “AI Risk Management Framework (AI RMF 1.0),” NIST AI 100-1, Jan. 2023. [Online]. Available: https://www.nist.gov/itl/ai-risk-management-framework
[25] AI Safety Institute (UK), “The Last Ones: 32-Step Corporate-Network Attack Simulation,” AI Safety Institute, Apr. 2026.
[26] V. Buterin, “The Promise and Challenges of Crypto + AI Applications,” vitalik.eth.limo, Jan. 30, 2024. [Online]. Available: https://vitalik.eth.limo/general/2024/01/30/cryptoai.html
[27] Lido DAO, “Dual Governance, Lido Improvement Proposal LIP-28,” Lido Finance. Activated on Ethereum mainnet, Jun. 30, 2025. 1% TVL “first seal” threshold and 10% TVL “rage-quit” threshold. Built with audits by Certora, OpenZeppelin, Statemind, and Runtime Verification; agent-based simulations by Collectif Labs; game-theoretic models by 20squares. [Online]. Available: https://github.com/lidofinance/lido-improvement-proposals/blob/develop/LIPS/lip-28.md
[28] Anthropic, “Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign (GTG-1002),” Anthropic, Nov. 13, 2025. Approximately 30 targets across technology, finance, chemicals, and government sectors. [Online]. Available: https://www.anthropic.com/news/disrupting-AI-espionage