OpenAI admits: ChatGPT is used for malware development

OpenAI has confirmed in an official report that ChatGPT has been used to develop malware in several cases.

(Image: Balefire / Shutterstock.com)

OpenAI has commented in detail on how cybercriminals have used the ChatGPT model in the past to develop malware and prepare cyberattacks. The AI company published a report documenting more than 20 cases from 2024 in which ChatGPT was used for cyberattacks or malware development.

The report, titled "Influence and Cyber Operations: An Update", reveals that state-sponsored hacker groups from countries such as China and Iran used ChatGPT's capabilities to improve existing malware and develop new malware. They primarily used the chatbot to debug malware code, generate content for phishing campaigns and spread disinformation on social media.

According to the report, a slightly different threat came from the Iranian group "CyberAv3ngers", which is said to be linked to the Islamic Revolutionary Guards. They did not use ChatGPT directly to develop malware. Rather, they used the AI to research vulnerabilities in industrial control systems and then write targeted scripts for potential attacks on critical infrastructure.

In other cases, the AI model was used to develop phishing malware to steal user data such as contacts, call logs and location information. Although these findings are alarming, OpenAI emphasized that cybercriminals did not achieve any significant breakthroughs in malware creation through ChatGPT. There has also been no increase in successful malware attacks due to the misuse of ChatGPT.

As the online portal Cybersecuritynews writes, many security experts fear that the risk of misuse will continue to grow as AI technology develops further. Experts such as former US federal prosecutor Edward McAndrew point out that companies deploying ChatGPT or similar chatbots could be held liable if the chatbot induces someone to commit a cybercrime.

US tech companies often invoke Section 230 of the Communications Decency Act of 1996 to avoid responsibility for illegal or criminal content on their platforms. In simple terms, the law states that platform operators cannot be held responsible for illegal content posted by their users as long as they did not create the content themselves. McAndrew explained on Cybersecuritynews that this law may not protect OpenAI from legal action in the case of malware development, as the content originates directly from the chatbot.


The suspicion that cybercriminals are abusing ChatGPT is not new. As early as 2023, Sergey Shykevich, lead ChatGPT researcher at Israeli security company Check Point, told Business Insider that cybercriminals were using the chatbot to deploy malicious code. His team had already observed cybercriminals using AI to develop code for ransomware attacks that year.

Other cybersecurity experts, such as Justin Fier, Director of Cyber Intelligence & Analytics at Darktrace, also see ChatGPT and other AI systems as significantly lowering the barrier to entry for developing malicious code. ChatGPT could make it easy for people without programming skills to create malware and phishing emails, since they would only need to come up with suitable prompts.

(nen)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.