Threat report: How cybercriminals are abusing Claude from Anthropic
In a report, Anthropic documents numerous real-world examples of cybercriminals abusing AI models for their own ends. But is there a remedy?
AI companies sometimes surprise us with how bluntly they describe the impact of their own products. Anyone reading Anthropic's detailed threat report (Threat Intelligence Report, August 2025) will come across several findings as early as page 3 that are likely to dent AI's reputation as a great step forward. "Agentic AI systems are being turned into weapons", it says. And AI is dramatically lowering the barrier to sophisticated cybercrime, which means criminals are now putting it to use across a wide range of schemes.
What sets the US company's report apart from other security reports by AI vendors, however, is that it does not stay vague: the abuse cases described by Anthropic's security team are neither theory nor laboratory test results, but incidents that actually occurred.
Remote work for the weapons program
One example it describes is the case of North Korean IT workers applying en masse for remote jobs in the USA, with the income presumably destined for the dictatorship's weapons programs. What makes the observation even more striking is that, according to Anthropic, these workers often lack the required specialist knowledge. Instead, they consult the AI constantly, both during the application process and on the job itself, fooling their employers at well-known large companies into believing they are doing the work themselves.
Another case involves "vibe hacking": a criminal used Claude Code to carry out automated data extortion, affecting 17 organizations across government, healthcare, and emergency services. The alleged perpetrator used the AI for everything from reconnaissance and malware development to ransom demands of up to 500,000 US dollars.
AI as a romantic helper
AI is also very popular with fraudsters, who use it for romance scams, among other things. Victims are contacted via messaging services; as soon as they disclose personal information and photos of themselves, this material is fed into the AI model so that the manipulation can be tailored to the individual. Other criminals use Claude to develop and resell ransomware.
The upshot is that individuals can now use AI to cause damage that would otherwise require large teams. AI compensates for missing specialist knowledge, helps craft tailored extortion strategies, and finds ways to evade defenses in real time. All of this, the report concludes, creates new challenges for cyber defense.
What Anthropic is doing about it
Anthropic has responded to these discoveries with specialized detection systems, account bans, and cooperation with partners in the fight against crime. Transparent reporting is also deemed necessary, which explains the unsparing report. Effective defense, the company argues, requires industry-wide collaboration. Anthropic has already joined forces with competitors in other contexts, for instance on the Model Context Protocol (MCP), which created a standardized interface between AI models and external data sources. Whether competitors will be equally willing to admit that, despite all their efforts, their AI models still leave massive loopholes for criminals remains to be seen.
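For readers unfamiliar with MCP, the following is a minimal sketch of what such a standardized interface looks like in practice. It assumes the official Python MCP SDK (the "mcp" package); the server name, tool, and data are purely hypothetical illustrations, not drawn from Anthropic's report.

from mcp.server.fastmcp import FastMCP

# Hypothetical demo server; the name and tool are illustrative only.
mcp = FastMCP("inventory-demo")

@mcp.tool()
def lookup_item(item_id: str) -> str:
    """Return a stock record for the given item ID (dummy data)."""
    return f"Item {item_id}: 42 units in stock"

if __name__ == "__main__":
    # Serves the tool over stdio in the standardized MCP format,
    # so any compliant AI client can discover and call it.
    mcp.run(transport="stdio")

Any MCP-capable client, Claude among them, can connect to such a server and call lookup_item without custom integration code, which is precisely the interoperability the protocol was designed for.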
(mki)