Yann LeCun Isn’t Buying Anthropic’s Cyberattack Story

He challenges Anthropic’s geopolitical claim and warns the result could be new limits on open-source AI

Yann LeCun, Meta’s outgoing Chief AI Scientist, has publicly disputed Anthropic’s claim that a recent cyber-espionage campaign using Claude Code was carried out by “a Chinese state-sponsored group,” arguing that the attribution is unsupported and is being used to justify policies that would restrict open-source AI.

In a post on X, LeCun wrote, “You’re being played by people who want regulatory capture. They are scaring everyone with dubious studies so that open-source models are regulated out of existence”.

Anthropic described the incident as the first known large-scale cyberattack executed mostly by an AI system. In its report, the company wrote, “We assess with high confidence that the threat actor was a Chinese state-sponsored group”.

What Anthropic Reported

In mid-September 2025, Anthropic said it detected suspicious automated activity inside its systems that it later determined to be part of a multi-stage intrusion campaign. The company wrote that the attackers used Claude Code to perform reconnaissance, identify high-value systems, probe for vulnerabilities, harvest credentials, and generate internal documentation of the attack flow.

According to Anthropic, the system handled “80-90%” of the work with limited human involvement, and the attackers used jailbreak prompts that presented Claude Code as an employee of a cybersecurity firm.

The company listed the targeted sectors as major technology companies, financial institutions, chemical manufacturing firms, and government agencies. Anthropic said the attackers leveraged Claude Code’s coding abilities to identify vulnerabilities and write exploit scripts.

The report includes an outline of each phase of the operation, including reconnaissance, privilege escalation, data extraction, and automated documentation of the compromised systems.

Anthropic did not publish technical indicators, forensic artifacts, or infrastructure evidence supporting the claim that the operator was a Chinese state-sponsored group. The report’s public materials include a narrative description of the campaign and examples of agentic behavior but do not show how the geopolitical attribution was reached.

The company called the incident “the first documented case of a large-scale cyberattack executed without substantial human intervention”.

Following the disclosure, some public officials echoed Anthropic’s warning. Senator Chris Murphy wrote on X that “this is going to destroy us… if we don’t make AI regulation a national priority tomorrow”.

LeCun’s Challenge to the Attribution

LeCun reposted an analysis that used Claude itself to review Anthropic’s report. Claude’s response stated: “The report provides no evidence whatsoever to support the attribution to a ‘Chinese state-sponsored group’” and noted that the document lacked indicators or infrastructure links that typically accompany such claims.

He used the repost to question why Anthropic attached a geopolitical conclusion without public evidence.

Separately, LeCun has consistently criticized claims that powerful AI systems require immediate regulatory intervention. At an MIT symposium this year, he said, “We are not going to get to human-level AI just by scaling LLMs”.

Earlier reporting from The Wall Street Journal noted that LeCun had become less central inside Meta as the company shifted resources toward large language models and appointed new leadership over its AI work.

Several outlets reported in the last week that LeCun has been preparing a startup focused on “world models,” a direction he has promoted for years as a more promising path for AI systems that learn from perception rather than large text corpora. TechCrunch reported that he plans to leave Meta to pursue this work and has spoken to potential colleagues and investors.

Anthropic’s Stance on Open Source

Anthropic has taken a public position in favor of capability-based rules for advanced models. In its submission to the White House Office of Science and Technology Policy earlier in 2025, the company recommended establishing “capability thresholds” for advanced systems and requiring safety evaluations and deployment controls for models above those thresholds.

Anthropic distributes its Claude models through API access rather than releasing model weights, and has consistently framed open-weights releases of frontier-level models as a risk.

LeCun has argued that these kinds of rules will disproportionately restrict open-source research. In the same social thread where he challenged the espionage attribution, he wrote that certain actors were seeking regulation that would “regulate open-source models out of existence”.

He has also backed Meta’s decision to release the Llama family of models with open weights, in contrast to Anthropic’s closed approach. Meta announced its first Llama release in February 2023 and expanded it to Llama 2 in July 2023, both with downloadable weights for developers.

One open-source advocate whose post LeCun amplified described the concern more broadly, writing that companies often use “fear” to promote systems “you cannot audit or control”.

LeCun’s argument is that capability thresholds, mandatory safety testing, and restrictions on model-weight distribution will make it difficult or impossible for open models that reach high capability levels to be released.

Regulatory proposals under discussion in the United States and Europe include compute reporting rules, capability-based model thresholds, and safety tests for advanced systems. These rules would apply to any model that meets the stated thresholds, including open-weights systems. Anthropic’s published policy materials support such frameworks, while LeCun has publicly opposed them.
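
To make the dispute concrete, here is a minimal sketch of how a capability-threshold rule might be operationalized, assuming a compute-based trigger along the lines of the EU AI Act’s 10^25-FLOP marker for systemic-risk models. The function name and figures below are illustrative assumptions, not drawn from Anthropic’s submission or any draft statute.

    # Hypothetical sketch of a capability-threshold rule (illustrative only).
    # The trigger here is training compute; actual proposals may also use
    # benchmark scores or risk evaluations.
    TRAINING_COMPUTE_THRESHOLD_FLOP = 1e25  # echoes the EU AI Act's systemic-risk trigger

    def requires_pre_release_controls(training_flop: float, open_weights: bool) -> bool:
        """Return True if this (hypothetical) rule would require safety
        evaluations and deployment controls before release."""
        # The rule keys on capability, not on how a model is distributed,
        # which is why LeCun argues it would bind open-weights releases
        # just as tightly as closed API deployments.
        return training_flop >= TRAINING_COMPUTE_THRESHOLD_FLOP

    # A frontier-scale run trips the rule whether or not the weights are open;
    # a smaller open model does not.
    print(requires_pre_release_controls(5e25, open_weights=True))   # True
    print(requires_pre_release_controls(1e24, open_weights=True))   # False

Note that open_weights never affects the result: under a pure capability trigger, openness is irrelevant to whether the controls attach, which is exactly the property Anthropic’s policy materials favor and LeCun objects to.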

Anthropic continues to stand by its description of the espionage campaign and its attribution. LeCun maintains that the evidence for that attribution has not been published and that the regulatory consequences will fall hardest on open-source AI.


Mukundan Sivaraj
Mukundan covers enterprise AI and the AI startup ecosystem for AIM Media House. Reach out to him at mukundan.sivaraj@aimmediahouse.com or Signal at mukundan.42.