Vibe malware: Are detailed security reports encouraging AI imitations?
The tactics, techniques, and procedures of attacker groups can easily be imitated with LLMs – ideal for false-flag attacks. The AI builds the malware.
Cybercriminals and state-sponsored attackers are increasingly relying on AI to support their digital attacks. Security researchers at Trend Micro have now investigated the extent to which publications by security researchers make attackers' work easier: they used unrestricted large language models (LLMs) to write malware based on security blog posts.
It is well known that malware authors draw inspiration from their counterparts at security companies; the Conti leaks, for example, made that clear. The fear: with AI support, cybercriminals no longer need to be able to read or program themselves. They simply feed detailed security analyses to an LLM and have it write malware for them. Researchers at the security company Trend Micro investigated whether this actually works.
As a model for their copycat malware, they chose the toolset of a threat actor active in Asia and Latin America known as "Earth Alux". In their experiment, the researchers used LLMs that have no restrictions (guardrails) against creating malicious programs. They did not have to hunt for them in dark corners – the models are available for download on Hugging Face. The resulting source code still needed some reworking, so a criminal career continues to require a certain amount of expertise. Nevertheless, the malware clone resembled its role model in every published detail.
Free-riding made easy
This "copycat vibecoding" therefore does not appear to be primarily attractive for newcomers to digital crime but rather for groups that want to lure investigators onto the wrong track. For example, they could use imitation tactics, techniques, and procedures (TTPs) to attribute attacks to a hostile group, which makes the already often shaky and chaotic attribution process even more difficult.
(Image: Trend Micro)
Attacker groups already use such tactics today: suspected North Korean actors, for example, have inserted Russian code snippets into their malware. According to the Trend Micro analysis, however, vibe coding based on security articles allows them to imitate more precisely and, above all, more efficiently.
No reason to muzzle researchers
However, the authors of the blog article warn against hasty reactions and stress that people should not stop talking and writing about security threats. Publishing information about attacks and threats is more important than ever, but publishers need to be aware of the risks: anyone releasing security reports or analyses should consider whether the published details of the attackers' methods enable AI-supported imitation. In addition, vibe coding makes attributing attacks to specific attacker groups even harder.
(cku)