Patch now! Malicious code attacks on AI tool Langflow observed
A critical security vulnerability in Langflow allows attackers to inject and execute malicious code on affected systems. A security patch is available.
(Image: solarseven/Shutterstock.com)
Attackers are currently targeting a vulnerability in Langflow, an AI tool for developing and deploying AI-powered agents and workflows. After successful attacks, they execute malicious code and compromise systems.
Protect systems from attacks
The US agency Cybersecurity & Infrastructure Security Agency (CISA) warns of active exploitation in a post. In a security advisory, the developers state that all versions up to and including 1.8.2 are affected, and that the "critical" security vulnerability (CVE-2026-33017) is fixed in **version 1.9.0**.
Because authentication is broken, a publicly reachable build endpoint accepts attacker-supplied Python code, which is then executed without sandbox protection. Attackers are presumed to gain full control over affected systems as a result. The extent of the ongoing attacks is currently unknown, as is how administrators can identify systems that have already been compromised.
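The flaw described above belongs to a well-known class: an unauthenticated endpoint that runs user-supplied Python directly. The following is a minimal illustrative sketch of that class, not Langflow's actual code; the function names and the shape of the mitigation are assumptions for demonstration purposes only.

```python
# Illustrative sketch only -- NOT Langflow's actual implementation.
# It demonstrates the general vulnerability class: a "build" endpoint
# that exec()s attacker-supplied Python without authentication or sandboxing.

def vulnerable_build_endpoint(payload: str) -> dict:
    """Hypothetical vulnerable handler: runs user-supplied code directly."""
    namespace: dict = {}
    exec(payload, namespace)  # arbitrary code execution on the server
    return {"result": namespace.get("result")}

def patched_build_endpoint(payload: str, authenticated: bool) -> dict:
    """Hypothetical mitigated handler: refuses unauthenticated requests
    and never exec()s raw user input."""
    if not authenticated:
        return {"error": "401 Unauthorized"}
    # A real fix would parse and validate the flow definition instead
    # of executing it as code.
    return {"error": "raw code execution disabled"}

# An attacker-style payload: the vulnerable handler runs it,
# while the patched handler rejects the request outright.
malicious = "import os; result = os.getcwd()"
```

On the vulnerable handler, `malicious` executes with the server's privileges, which is why full system compromise is the assumed outcome; the patched variant rejects the request before any code runs.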
Further security issues
In version 1.9.0, the developers have closed further software vulnerabilities as well. Among them is another "critical" vulnerability (CVE-2026-33475), which allows attackers to inject malicious code into Langflow's GitHub repositories.
Furthermore, attackers can gain unauthorized access to image files (CVE-2026-33484, rated "high"). Anyone working with the AI tool should regularly check the security section of the project's GitHub site and install the patched versions. The site also offers tips on running Langflow as securely as possible, as well as a guide on how to report security vulnerabilities.
(des)