
A U.S.-based AI company, Anthropic, says it detected and stopped an espionage campaign in September 2025 that it attributes with "high confidence" to a Chinese state-sponsored group. The attackers hijacked Anthropic's coding-focused AI model, Claude (specifically the "Claude Code" variant), and used it to target approximately 30 organizations, including technology firms, financial institutions, chemical manufacturers, and government agencies. According to the firm, about 80–90% of the attack chain was executed by the AI autonomously, with only minimal human oversight.

Anthropic reports that the attackers worked around Claude's built-in safeguards by posing as a legitimate cybersecurity firm and splitting malicious instructions into innocent-looking tasks. The AI model made some mistakes, which may have helped limit the damage. Nevertheless, cybersecurity experts and policymakers are flagging the incident as a major inflection point: it shows how so-called "agentic" AI systems could be weaponized to perform complex cyber operations at scale, raising urgent questions about corporate defense, AI regulation, and national security.

