Critical 0-Days in Anthropic Git Server Allow Code Execution

A newly discovered set of three critical zero-day vulnerabilities in mcp-server-git, a reference server implementation for the Model Context Protocol (MCP), has exposed a significant security gap in AI-driven development environments. These flaws, rooted in inadequate input validation and argument sanitization, can be exploited through prompt injection, allowing attackers to execute arbitrary code, delete critical files, and exfiltrate sensitive data from affected systems. Unlike previous security findings related to MCP, these vulnerabilities are present in the default, out-of-the-box configuration, presenting an immediate and severe risk to any organization deploying Anthropic’s official MCP servers. The attack vector is particularly insidious because it does not require direct system access; instead, it leverages the AI assistant’s own functionality, tricking it into executing malicious commands while processing compromised content such as README files, issue descriptions, or external web pages. The potential for widespread impact makes urgent remediation essential.

1. Dissecting the Attack Chain

The vulnerabilities collectively create a powerful attack chain that bypasses fundamental security controls. The first critical weakness, identified as CVE-2025-68145, is a path validation bypass within the git_diff and git_log functions. These functions improperly accept the repo_path argument directly from user input without validating it against the --repository flag configured during server initialization. This oversight allows an attacker to target any Git repository on the underlying filesystem, not just the one intended for the AI agent’s operations. Compounding this issue is CVE-2025-68143, an unrestricted repository initialization flaw in the git_init tool. This function completely lacks path validation, permitting an attacker to create new Git repositories in arbitrary and highly sensitive directories, such as /home/user/.ssh. By combining these two vulnerabilities, an attacker can first initialize a repository in a restricted location and then use the path traversal flaw to instruct the LLM to read files from that directory, effectively exfiltrating private keys or other confidential data directly into the AI’s context window.
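The root cause in both cases is a missing containment check between the caller-supplied path and the repository the server was started with. The following Python sketch illustrates this class of bug and one way to close it; the function names, the GitPython-based calls, and the allowed_repo parameter are illustrative assumptions, not the actual mcp-server-git source.

```python
from pathlib import Path

import git  # GitPython, used here purely for illustration

# Vulnerable shape: the repo_path from the tool call is trusted as-is,
# so any repository on the filesystem can be opened and read.
def handle_git_log_vulnerable(repo_path: str, max_count: int = 10) -> list[str]:
    repo = git.Repo(repo_path)  # no check against the configured root
    return [commit.message for commit in repo.iter_commits(max_count=max_count)]

# Hardened shape: resolve the path and require it to sit inside the
# repository root configured via --repository at server start-up.
def handle_git_log_safe(repo_path: str, allowed_repo: Path, max_count: int = 10) -> list[str]:
    resolved = Path(repo_path).resolve()
    if not resolved.is_relative_to(allowed_repo.resolve()):  # Python 3.9+
        raise PermissionError(f"{resolved} is outside the configured repository")
    repo = git.Repo(resolved)
    return [commit.message for commit in repo.iter_commits(max_count=max_count)]
```

The same containment check applied to the target directory of git_init would also block the arbitrary-location initialization described above.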

The most direct path to system compromise stems from CVE-2025-68144, a severe argument injection vulnerability. The git_diff function fails to sanitize the target parameter before passing it directly to the Git command-line interface, which allows a threat actor to inject malicious flags. For instance, an attacker could call git_diff with the target option set to --output=/home/user/.bashrc, instructing Git to overwrite the user’s shell configuration file and leading to file corruption or persistent code execution upon the next login. The most sophisticated exploitation of this flaw involves the manipulation of Git filters. An attacker can leverage the previously mentioned git_init vulnerability to create a malicious .git/config file containing clean/smudge filters, which execute shell commands automatically during staging operations. By then using a Filesystem MCP server to write a .gitattributes file that triggers these filters, an attacker can achieve arbitrary code execution without needing direct execute permissions on any specific payload, amounting to a complete system takeover.
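Reduced to its essentials, the injection is easiest to see in the shape of the command invocation. The sketch below is a hypothetical reconstruction, not the project’s real handler: the function names and the subprocess-based call are assumptions, and the filter abuse is shown only in its general form in the trailing comments.

```python
import subprocess

# Vulnerable shape: target is forwarded verbatim, so a value such as
# "--output=/home/user/.bashrc" is parsed by Git as an option and the
# diff is written over that file instead of being returned.
def git_diff_vulnerable(repo_path: str, target: str) -> str:
    result = subprocess.run(
        ["git", "-C", repo_path, "diff", target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hardened shape: refuse anything Git could interpret as an option.
def git_diff_safe(repo_path: str, target: str) -> str:
    if target.startswith("-"):
        raise ValueError("revision arguments must not begin with '-'")
    result = subprocess.run(
        ["git", "-C", repo_path, "diff", target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# General shape of the filter abuse (illustrative only): a crafted
# .git/config containing
#     [filter "inject"]
#         clean = <arbitrary shell command>
# paired with a .gitattributes line such as "* filter=inject" causes Git
# to run the command whenever matching files are staged.
```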

2. Systemic Risks and Recommended Actions

The convergence of these vulnerabilities highlights a systemic risk inherent in the interconnected architecture of the Model Context Protocol. While each flaw is dangerous on its own, their true potential for damage is realized when different MCP servers, such as the Git and Filesystem servers, are used in conjunction. This interoperability, designed for functionality, can inadvertently create an amplified attack surface where a weakness in one component can be used to compromise another, ultimately leading to a full system compromise. Any organization running mcp-server-git versions prior to 2025.12.18 is considered highly vulnerable. The risk is particularly acute for developers and organizations utilizing modern AI-powered Integrated Development Environments (IDEs) like Cursor, Windsurf, and GitHub Copilot. These platforms often run multiple MCP servers simultaneously to provide a rich, context-aware coding experience, but in doing so, they expand the potential vectors for exploitation. Users of applications like Claude Desktop that feature Git integration should treat software updates as a top priority to prevent potential attacks.

To address these critical threats, immediate and decisive action is required. The primary mitigation is to upgrade mcp-server-git to the patched version, 2025.12.18, or any later release that contains the necessary security fixes. Beyond this essential step, organizations should conduct a thorough audit of their MCP server integrations, paying close attention to environments where Git and Filesystem servers operate together. System administrators are advised to implement robust monitoring to detect the creation of unexpected .git directories in unusual or sensitive locations, as this can be an indicator of compromise. Furthermore, applying the principle of least privilege to MCP server processes can help contain the potential damage from a successful exploit. Looking forward, this incident serves as a crucial reminder that the security of agentic AI systems depends on rigorous input validation at every integration point. Downstream tools and applications that consume MCP services should implement their own layers of sanitization as a defense-in-depth measure against this new class of vulnerabilities.
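To make the monitoring recommendation concrete, even a simple periodic scan can surface repositories created where none should exist. The script below is a minimal sketch of that idea; the allow-list and scan root are placeholders to be adapted to the environment.

```python
from pathlib import Path

# Locations where Git repositories are expected to live (placeholder values).
ALLOWED_REPO_ROOTS = {Path.home() / "projects"}

def find_unexpected_git_dirs(scan_root: Path):
    """Yield repository roots whose .git directory sits outside the allow-list."""
    allowed = [root.resolve() for root in ALLOWED_REPO_ROOTS]
    for git_dir in scan_root.rglob(".git"):
        if not git_dir.is_dir():
            continue
        repo_root = git_dir.parent.resolve()
        if not any(repo_root.is_relative_to(root) for root in allowed):  # Python 3.9+
            yield repo_root

if __name__ == "__main__":
    for suspicious in find_unexpected_git_dirs(Path.home()):
        print(f"Unexpected Git repository: {suspicious}")
```

Run from a scheduled task, a check like this gives a cheap indicator of the repository-initialization abuse described earlier, particularly for hits under sensitive directories such as ~/.ssh.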

A New Frontier in Cybersecurity

This series of vulnerabilities serves as a stark illustration of the novel attack vectors introduced by agentic AI systems. The discovery underscores that as AI agents gain more autonomous capabilities to interact with filesystems, execute code, and access external tools, traditional security models focused on direct user-initiated threats become insufficient. The incident prompts a necessary re-evaluation of threat models to account for LLM-driven decision-making and tool invocation as a potential vector for compromise. The security of the entire AI ecosystem depends not just on the robustness of the models themselves, but on the rigorous validation and sanitization of every piece of data they process and every action they are instructed to take. The path forward requires a paradigm shift toward a zero-trust approach for AI agents, treating their outputs and tool requests with the same level of scrutiny as any other untrusted user input.
