Claude Code leaked: Billions for AI security, zero for software hygiene

No hack, no exploit – just a forgotten switch. And it is precisely this process failure that makes the Claude code leak so dangerous, argues Moritz Förster.

The Anthropic logo on a smartphone, with Claude in the background.

(Image: Stockinq/Shutterstock.com)


It sounds like the next big scandal: over 500,000 lines of source code from Anthropic's CLI tool, Claude Code, have surfaced publicly. The security community is on high alert, competitors are rubbing their hands, and commentators sense the next proof that AI is to blame for everything. But whoever looks closer will find no sophisticated attack, no zero-day exploit, not even social engineering. Just a source map in the npm package that shouldn't have been there: a forgotten switch in the build pipeline. The only systematic thing here is the sloppiness.

A commentary by Moritz Förster

Moritz Förster has been writing for iX and heise online since 2012. In addition to the iX channel, he oversees the Workplace section.

Source maps are useful aids in development. They map compiled code back to readable source code: indispensable for debugging, fatal in production. That they end up in the finished package is not the result of a sophisticated attack or a rampaging AI. It happens because nobody configured the build process correctly. Or because the configuration was quietly overwritten at some point. Or simply because nobody looked.
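To see why shipped source maps are so fatal, note that a source map's `sourcesContent` field can embed the complete original files verbatim. A minimal sketch of recovering them (the file path and contents are invented for illustration; real maps sit next to the bundled file in the published package):

```python
import json

# Minimal source-map v3 file (hypothetical content) as it might sit
# next to a bundled cli.js inside a published npm package.
SOURCE_MAP = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent/orchestrator.ts"],
    # sourcesContent embeds the full, unminified originals verbatim.
    "sourcesContent": ["// original source\nexport const PLAN_DEPTH = 4;\n"],
    "mappings": "AAAA",
})

def recover_sources(map_text: str) -> dict:
    """Return {original path: original source} from a source map."""
    m = json.loads(map_text)
    return dict(zip(m["sources"], m.get("sourcesContent") or []))

for path, src in recover_sources(SOURCE_MAP).items():
    print(path)
    print(src)
```

Any published `.map` file with `sourcesContent` set hands the unbundled source to anyone who downloads the package.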

Developers know the pattern: the forgotten .env file in the Git repository, the Docker image with embedded credentials, the debug API that has been open for months because it's “only internal.” This is process failure in its purest form.

Modern build pipelines are extremely complex. Bundlers, transpilers, minifiers, packagers: each step generates artifacts, and each step can let things through that shouldn't go outside. Responsibility is spread across a tangle of tools, configuration files, and teams. In the end, nobody feels responsible. “The pipeline will take care of it” is currently one of the most dangerous sentences in software development.

With classic projects, this usually works out. The artifacts are boring, the damage manageable. With an AI tool like Claude Code, it's different. Here, the code contains not only implementation details but also architectural decisions, feature flags for unpublished functions, and the complete orchestration logic of an agentic system. Anyone who can read this – and anyone with npm and some patience can – gets a blueprint for free.

AI companies are under enormous pressure to innovate. Releases follow in short cycles; features must be released before the competitor shows them. At this pace, security gates fall by the wayside. Not out of sloppiness or even malice, but out of pragmatism. The next demo counts more than the next audit.

The result: tools that intervene deeply in local development environments and read, write, and execute code are treated with the same release discipline as a frontend widget. That this goes wrong is no surprise; it's only a matter of time.

The current incident doesn't seem like a completely isolated slip-up. According to media reports, it's already the second unintentional disclosure related to Claude Code in just over a year. In general, the industry has known about the problem for a long time. OWASP has listed “Cryptographic Failures” (formerly sensitive data exposure) in its Top Ten for ages. Nevertheless, it keeps happening – only the consequences are growing.


Because leaked source code is more than a PR problem here. It shows competitors how Anthropic orchestrates agentic workflows. It shows attackers where the logic makes assumptions that can be exploited. And it shows the public that a company that advertises billions for AI security falters on basic software hygiene.

The reflex after such incidents is predictable: more security tools, more monitoring, more external defense. But against what exactly? There was no attacker here who could have been stopped. No firewall in the world protects against a misconfigured build script.

What would help is less spectacular: automated checks that scan the package contents before each release. Clear responsibilities in the build process. A four-eyes principle for releases of sensitive tools. All things that should have been standard in classic software development long ago – and which apparently are not reliably implemented even at one of the world's best-funded AI companies.
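Such a pre-release check can be very small. A sketch of the idea, assuming a CI step runs it on the unpacked release artifact (the forbidden patterns, file names, and directory layout below are illustrative, not Anthropic's actual setup):

```python
import pathlib
import tempfile

# Patterns that should never appear in a published package
# (illustrative list, not exhaustive).
FORBIDDEN = ("*.map", ".env", "*.pem")

def audit_package(root: pathlib.Path) -> list:
    """Return relative paths of files matching forbidden patterns."""
    hits = []
    for pattern in FORBIDDEN:
        hits += [p.relative_to(root).as_posix() for p in root.rglob(pattern)]
    return sorted(hits)

# Demo on a throwaway directory mimicking an unpacked npm tarball.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "dist").mkdir()
    (root / "dist" / "cli.js").write_text("console.log('hi')")
    (root / "dist" / "cli.js.map").write_text("{}")  # the leak
    findings = audit_package(root)
    print(findings)
```

In a real pipeline this would run against the output of `npm pack` and fail the build on any hit, which is exactly the kind of boring gate that would have caught the forgotten switch.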

And precisely because an established, everyday process is the problem here, this incident should be more unsettling than any sensational AI hack. Against the negligence prevalent in large parts of the IT industry, only discipline helps, and discipline, as is well known, is hard to scale.

(fo)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.