

Anthropic Says Claude AI Powered 90% of Chinese Espionage Campaign

A state-sponsored threat actor manipulated Claude Code to execute cyberattacks on roughly 30 organizations worldwide. The post appeared first on SecurityWeek.


A China-linked state-sponsored threat actor has abused Claude Code in a large-scale espionage campaign against organizations worldwide, Anthropic reports.

As part of the AI-powered campaign, identified in September, the attackers manipulated Anthropic’s AI and abused its agentic capabilities to launch cyberattacks with minimal human intervention.

Nearly 30 entities globally across the chemical manufacturing, financial, government, and technology sectors were targeted, but only a small number were compromised.

The campaign started with the state-sponsored hackers choosing their targets and developing an attack framework that used Claude Code to carry out the intrusions.

To trick the AI into bypassing its guardrails, the attackers posed as employees of a cybersecurity firm and broke their attack down into small, seemingly benign tasks for the model to execute, without providing it with the full context.

Next, they used Claude Code to inspect the organizations’ environments, identify high-value assets, and report back. Then they tasked the AI with finding vulnerabilities in the victims’ systems and researching and building exploit code to target them.

The attack framework abused Claude to exfiltrate credentials, use them to access additional resources, and extract private data.

“The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated with minimal human supervision,” Anthropic says.

The attackers also tasked Claude with documenting the attack, the stolen credentials, and the compromised systems, in preparation for the next stage of the campaign.

“Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign),” Anthropic notes.

By abusing Claude, which made thousands of requests, often several per second, the hackers performed their attack in a fraction of the time human operators would have required. However, AI limitations such as hallucinated credentials were an obstacle to a fully automated attack.

The campaign, an escalation of the “vibe hacking” attacks observed earlier this year, shows that sophisticated cyberattacks have become significantly easier to carry out.

“With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” Anthropic notes.

Within 10 days of detecting the activity, the company determined its scope and nature, and disrupted it by banning the identified accounts and notifying the targeted organizations.

Related: ChatGPT Vulnerability Exposed Underlying Cloud Infrastructure

Related: Claude AI APIs Can Be Abused for Data Exfiltration

Related: Researchers Hack ChatGPT Memories and Web Search Features

Related: Malware Now Uses AI During Execution to Mutate and Collect Data, Google Warns

