AI-supported cyberattacks: experts observe increasing use of LLMs
Russian spyware developed using large language models has been discovered for the first time. Experts see a turning point in the cyber arms race.
This development did not come unannounced: security researchers are currently observing an increase in AI-supported attacks and see a turning point in the cyber arms race. Russian spyware was recently discovered that was demonstrably created using large language models (LLMs) and searches computers for sensitive data on its own. According to a report by NBC News, Russian intelligence services use the software to obtain specific information, which the malware transmits to Moscow.
Ukrainian authorities and several cybersecurity companies first detected the malware in July. The attack specifically targeted Ukrainian users: the attackers sent phishing emails with an attachment containing an AI program. Unlike conventional malware, it operates in a far more targeted way and requires no human interaction, which makes it considerably more efficient.
According to the cybersecurity company CrowdStrike, the use of AI tools in attacks is increasing, with Chinese, Russian, and Iranian hackers and cybercriminals in particular relying on them more and more. This, the company says, takes the cyber arms race to a new level.
Risk: AI agents
Experts see the increasing use of AI agents as a new risk for the future. Because these tools carry out complex tasks independently, they need extensive permissions within companies to do their work. If attackers misuse those permissions, the agents could pose a massive threat from within.
These risks are offset by the use of AI to improve security. Google, for example, says it uses its LLM Gemini to scan its software for vulnerabilities before cybercriminals discover them. At least 20 significant, previously overlooked vulnerabilities have already been found this way, according to Heather Adkins, Vice President of Security Engineering at Google.
The US government also sees AI as an aid in defending against cyberattacks, since it allows small companies, for example, to discover vulnerabilities before criminals can exploit them. However, attackers could likewise upgrade penetration testing tools with AI in the future.
(mki)