Autonomous AI Cyberattack: Doubts about Anthropic's Investigation
Anthropic claims to have not only discovered but also stopped a largely autonomous, AI-driven cyberattack. But is that really true?
Anthropic's architecture diagram of the attempted attack. (Image: Anthropic)
"The first publicly documented case of a large-scale, autonomous cyberattack executed by an AI model" – is reported by the AI company Anthropic on its website. A hacker group called "GTG-1002", which was "highly likely" funded by the Chinese government, allegedly manipulated Anthropic's Claude code tool to launch infiltration attempts against around 30 international targets largely autonomously. The coding AI Claude Code is said to have executed "80 to 90 percent" of the intrusion activities independently, writes Anthropic in its report. Ultimately, the attack was prevented, reports the AI company.
However, several independent security experts are now expressing doubts about how autonomous the attacks actually were. Cybersecurity researcher Daniel Card writes on X: "This Anthropic thing is a marketing stunt." Computer security expert Kevin Beaumont criticizes on Mastodon that Anthropic has not published any IoCs (Indicators of Compromise, the digital traces attackers leave behind in compromised systems) for the attacks.
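For readers unfamiliar with the term: IoCs are concrete, machine-checkable artifacts such as file hashes, contacted IP addresses, or domains that defenders can search for in their own systems. The snippet below uses purely invented placeholder values to illustrate the kind of data Beaumont is asking for; none of it comes from Anthropic's report or the alleged attacks.

```python
# Hypothetical Indicators of Compromise (IoCs) in the form defenders would
# expect them to be published. All values are placeholders for illustration.
example_iocs = {
    "sha256_file_hashes": [
        # SHA-256 of an empty file, used here purely as a placeholder
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    ],
    "network_indicators": [
        "198.51.100.23",        # IP address from a documentation range (RFC 5737)
        "c2.example.invalid",   # placeholder command-and-control domain
    ],
    "artifacts": [
        "suspicious scheduled task name",
        "unusual user agent string seen in proxy logs",
    ],
}

for category, values in example_iocs.items():
    print(category)
    for value in values:
        print(f"  - {value}")
```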
"Sycophancy and hallucinations"
"I don't think the attackers were able to get the AI models to do what nobody else can do," the news website Ars Technica quotes the founder of Phobos Group, Dan Tentler. "Why are the models giving the attackers 90% of what they want, while we have to deal with sycophancy, obstruction, and hallucinations?"
However, there is a consensus that AI tools can significantly simplify and accelerate hacking workflows. Security researcher Bob Rudis writes on Mastodon: "I and others use AI for triage, log analysis, reverse engineering, workflow automation, and more."
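As a rough illustration of such a defensive workflow, the following sketch sends a few fabricated log lines to Claude via Anthropic's Python SDK and asks for a triage assessment. The model name, prompt, and log lines are assumptions made for this example; it is not code from any of the researchers quoted.

```python
# Minimal sketch: using an LLM for log triage, one of the defensive uses
# mentioned above. Assumes the `anthropic` package is installed and the
# ANTHROPIC_API_KEY environment variable is set; the model name is an example.
import anthropic

suspicious_log_lines = [
    "2025-11-14 03:12:09 sshd[812]: Failed password for root from 203.0.113.7",
    "2025-11-14 03:12:11 sshd[812]: Failed password for root from 203.0.113.7",
    "2025-11-14 03:14:02 sudo: www-data : command not allowed ; TTY=unknown",
]

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # example model name
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": (
            "Triage the following log lines. For each, state the likely "
            "issue and a severity (low/medium/high):\n\n"
            + "\n".join(suspicious_log_lines)
        ),
    }],
)

print(response.content[0].text)
```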
Comprehensive AI pentesting packages are also already available, for example Hexstrike, which lets various autonomous AI agents operate more than 150 security tools. However, such software still requires intensive human intervention and, above all, human expertise.
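The basic pattern behind such frameworks can be sketched as a loop: a model proposes the next tool invocation, a harness executes it, and the output feeds the next decision. The simplified sketch below illustrates that loop with a stand-in for the model call and a deliberately tiny tool whitelist; it is not Hexstrike's actual implementation.

```python
# Simplified sketch of the agent/tool orchestration loop used by AI pentesting
# frameworks: the model picks a tool and arguments, the harness runs it, and
# the output is appended to the context for the next decision.
# choose_next_step() is a stand-in for a real model call; this is an
# illustration of the pattern, not Hexstrike's code.
import subprocess

ALLOWED_TOOLS = {"nmap", "whois"}  # a real framework wires up 150+ tools


def choose_next_step(history: list[str]) -> dict | None:
    """Stand-in for an LLM call that returns the next tool invocation,
    e.g. {"tool": "nmap", "args": ["-sV", "scanme.nmap.org"]}, or None to stop."""
    if not history:
        return {"tool": "nmap", "args": ["-sV", "scanme.nmap.org"]}
    return None  # a real agent would keep iterating on the results


def run_agent(max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = choose_next_step(history)
        if step is None or step["tool"] not in ALLOWED_TOOLS:
            break
        result = subprocess.run(
            [step["tool"], *step["args"]],
            capture_output=True, text=True, timeout=300,
        )
        history.append(f"$ {step['tool']} {' '.join(step['args'])}\n{result.stdout}")
    return history


if __name__ == "__main__":
    for entry in run_agent():
        print(entry)
```

Even with such a loop, a human still has to scope the targets, judge the tool output, and decide what the findings mean, which is precisely the expertise referred to above.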
(jkj)