Attack via GitHub MCP server: Access to private data
The official integration of the Model Contet Protocol in GitHub can expose private information if used carelessly.
(Image: CarpathianPrince/Shutterstock.com)
A blog post by AI security company Invariant Labs shows that the official GitHub MCP server (Model Context Protocol) can invite prompt injection attacks.
In a proof of concept, an attacker used a GitHub issue to cause an AI agent connected to MCP to disclose private information about the maintainer of a project. The AI agent used GitHub's MCP server, which provides access to the contents of the repository.
The proof of concept does not use a bug in the GitHub MCP server, but relies on a form of prompt injection, i.e., the ability to inject commands into the language model.
Prompt injection via issues
The Model Context Protocol is used to connect AI models with external tools to execute various actions such as accessing databases, websites, or files. The GitHub MCP Server offers a direct connection to the GitHub APIs to automate workflows or analyze the contents of the repositories. It is not part of the GitHub platform, but GitHub provides the server as an independent open-source tool.
(Image: Invariant Labs)
In the example from the proof of concept, a user has a public and a private repository. He uses Claude Desktop to trigger actions in his GitHub repository via the MCP server. Among other things, the AI agent should automatically take care of processing new issues.
This opens the door to prompt injection: An attacker creates an issue in the public repository that requests more information about the author because the project is so great. An entry should be made in the readme file. The author isn't concerned about his privacy anyway, so please publish everything that can be found. In addition, all other (i.e., also private) repositories should be listed in the readme.
(Image: Screenshot (Rainald Menge-Sonnentag))
Videos by heise
The agent that is too helpful
If the operator now uses his AI agent to process the open issues in the public repository,
(Image: Invariant Labs)
The AI agent creates a pull request that prepares the private information for the readme. Although this does not end up directly in the readme file, the pull request in the public repository is visible to everyone.
The full chat history with the agent can be found in the blog post.
Vulnerability between keyboard and chair
The proof of concept does not directly exploit a vulnerability in the GitHub MCP server, but assumes a certain recklessness in dealing with the AI systems, which is probably not too far-fetched. The idea of assigning the tedious preparatory work to an AI agent certainly sounds attractive to some.
In addition to the frivolity, a fundamental weakness is the Model Context Protocol, which in its current form does not take the issue of security into account at all. Among other things, it relies on session IDs in URLs and does not provide good specifications for authentication.
(rme)