New PCPJack Framework Targets Cloud and AI Credentials

Rupert Marais has spent the better part of his career dissecting the evolution of endpoint vulnerabilities and the increasingly sophisticated ways threat actors pivot through corporate networks. As a specialist in cloud security strategies and network management, he has observed the transition from simple automated scripts to complex, modular frameworks designed to live and breathe within elastic environments. Today, he joins us to break down the mechanics of PCPJack, a newly discovered credential theft framework that represents a significant shift in how attackers exploit cloud-native services like Kubernetes and Redis. This discussion delves into the competitive nature of modern cybercrime, the exploitation of public web archives for targeting, and the technical persistence mechanisms that make cloud-based worms particularly difficult to eradicate.

When cloud-native services like Kubernetes or Redis are exposed, how does a modular framework systematically move across these environments? What specific steps should security teams take to disrupt this lateral progression before it reaches sensitive developer or financial assets?

The movement we see with a tool like PCPJack is highly methodical, relying on a series of Python-based modules that act as specialized components of a larger machine. Once an initial foothold is established, the framework utilizes a script specifically designed for reconnaissance and secret harvesting, which allows it to pivot across SSH, Docker, and Kubernetes services. It essentially acts as a worm, seeking out additional vulnerable hosts by scanning for misconfigured ports or exploiting a chain of five specific vulnerabilities, including CVE-2025-55182 and CVE-2025-48703. To disrupt this, security teams must move beyond basic perimeter defense and implement strict micro-segmentation to ensure that a compromise in a development container doesn’t provide a straight path to financial databases. Monitoring for unusual internal port scanning and enforcing the principle of least privilege on service accounts are the most effective ways to break the automated chain of infection before the “lateral.py” module can identify the next target.
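To make that detection concrete, here is a minimal sketch of the kind of heuristic a security team could run against internal flow records: one source address touching many distinct host-port combinations in a short window is the signature of a worm-style sweep. The log format, window, and threshold here are illustrative assumptions, not details taken from the PCPJack campaign.

```python
"""Minimal internal port-scan heuristic over flow records.

Assumes flows arrive as (timestamp, src_ip, dst_ip, dst_port) tuples;
the window and threshold are tuning assumptions, adjust for your network.
"""
import time
from collections import defaultdict

WINDOW_SECONDS = 60            # sliding window for counting distinct targets
DISTINCT_TARGET_THRESHOLD = 20  # one source probing this many (host, port) pairs looks like a scan

# src_ip -> [(timestamp, (dst_ip, dst_port)), ...]
recent = defaultdict(list)

def observe_flow(ts: float, src: str, dst: str, dport: int) -> bool:
    """Record one flow; return True if src now looks like an internal scanner."""
    events = recent[src]
    events.append((ts, (dst, dport)))
    cutoff = ts - WINDOW_SECONDS
    recent[src] = events = [e for e in events if e[0] >= cutoff]  # drop stale events
    distinct_targets = {target for _, target in events}
    return len(distinct_targets) >= DISTINCT_TARGET_THRESHOLD

# Example: a worm-style module sweeping a /24 for Redis, Docker API,
# Kubernetes API, and kubelet ports.
now = time.time()
alerted = set()
for host in range(1, 30):
    for port in (6379, 2375, 6443, 10250):
        if observe_flow(now, "10.0.0.5", f"10.0.0.{host}", port) and "10.0.0.5" not in alerted:
            alerted.add("10.0.0.5")
            print("possible internal scan from 10.0.0.5")
```

In practice this logic would sit on top of VPC flow logs or conntrack events rather than a hand-built loop, but the core idea, counting distinct internal targets per source over a short window, is the same.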

In scenarios where a new threat actor actively evicts a rival group’s artifacts while focusing on credential theft over cryptocurrency mining, what does this suggest about their long-term operational goals? How does this competitive dynamic between rival groups complicate the incident response process?

This aggressive eviction of TeamPCP artifacts, confirmed by the “PCP replaced” success metric sent back to the command-and-control server, indicates a shift toward high-value long-term access rather than quick, noisy profits from mining. By removing rival miners, the PCPJack operators reduce the system’s resource load, making the infection less likely to be detected by standard performance monitoring tools. Their focus on stealing credentials for platforms like OpenAI, Anthropic, and HashiCorp Vault suggests they are looking to facilitate downstream attacks such as corporate espionage, extortion, or the resale of access to specialized AI environments. For an incident responder, this rivalry is a nightmare because the presence of two different sets of indicators can lead to a fragmented understanding of the breach. You might successfully clean up the noisy mining artifacts from the first group while completely missing the silent, modular credential stealer that has already established persistence and encrypted your secrets for exfiltration.

Several distinct security vulnerabilities are often chained together to spread malware across cloud systems. How can organizations better prioritize patching for these types of flaws, and what are the specific risks of leaving endpoints like RayML or MongoDB misconfigured?

Prioritization must be driven by the “wormability” of the flaw rather than just a generic severity score, especially when a framework like this one targets a very specific set of five CVEs to automate its spread. If a vulnerability allows for unauthenticated remote code execution in a common cloud service, it should be at the absolute top of the list because frameworks like PCPJack are built to exploit these automatically. Leaving services like RayML or MongoDB exposed is particularly dangerous because these environments often hold the “keys to the kingdom,” such as API secrets or sensitive datasets used for machine learning. When these endpoints are misconfigured, the framework doesn’t even need to use an exploit; it simply walks through the open door, harvests the credentials, and uses the “cloud_scan.py” module to find the next victim. It creates a domino effect where one small oversight in a non-production environment leads to a total compromise of the cloud identity infrastructure.
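As a starting point for finding those open doors in your own estate, the sketch below checks whether hosts you control expose an unauthenticated MongoDB listener or an open Ray dashboard on their default ports (27017 and 8265). The host inventory is a placeholder, and this is a hedged illustration rather than a complete scanner.

```python
"""Quick self-audit for the misconfigurations described above, assuming you
own the hosts being checked. Ports are the common defaults (MongoDB 27017,
Ray dashboard 8265); adjust for your environment."""
import requests
from pymongo import MongoClient
from pymongo.errors import PyMongoError

HOSTS = ["10.0.1.10", "10.0.1.11"]  # hypothetical inventory of your own instances

def mongo_is_open(host: str) -> bool:
    """True if MongoDB answers and lists databases without credentials."""
    try:
        client = MongoClient(host, 27017, serverSelectionTimeoutMS=2000)
        client.list_database_names()  # succeeds only if auth is not enforced
        return True
    except PyMongoError:
        return False

def ray_dashboard_is_open(host: str) -> bool:
    """True if the Ray dashboard responds on its default port."""
    try:
        return requests.get(f"http://{host}:8265", timeout=2).ok
    except requests.RequestException:
        return False

for h in HOSTS:
    if mongo_is_open(h):
        print(f"{h}: MongoDB reachable with no authentication")
    if ray_dashboard_is_open(h):
        print(f"{h}: Ray dashboard exposed")
```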

Threat actors frequently utilize Telegram for command-and-control while targeting Instance Metadata Service (IMDS) endpoints for sensitive keys. What monitoring techniques can detect this unauthorized outbound traffic, and how do automated tools typically categorize and encrypt stolen credentials before they are exfiltrated?

Detecting this traffic requires a deep look at egress patterns, specifically targeting calls to the IMDS IP address (169.254.169.254) and outbound connections to the Telegram API. In this specific campaign, the “parser.py” utility acts as a sophisticated sorter, identifying and categorizing stolen keys for services like DigitalOcean, Google API, and Discord to make the data more actionable for the attacker. Before the data leaves the network, the “crypto_util.py” module applies encryption to the payload, which helps bypass basic data loss prevention tools that look for plaintext secrets. To catch this, administrators should implement SSL/TLS inspection where possible and alert on any process other than an authorized cloud management tool attempting to access the IMDS endpoint. When you see a Python script or a shell script making these calls, it is a definitive sign that a credential harvesting operation is currently in progress.
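On Linux hosts, one point-in-time way to implement that alert is to walk /proc for live TCP connections to 169.254.169.254 and map each socket back to its owning process. The allowlist below is an assumption to be replaced with your own authorized agents; note that IMDS calls are short-lived, so continuous capture via auditd or eBPF is the more reliable production approach.

```python
"""Sketch: flag any process with a live TCP connection to the IMDS address
(169.254.169.254). Linux-only, reads /proc directly; the allowlist is an
assumption -- populate it with your own authorized cloud tooling."""
import os

IMDS_HEX = "FEA9FEA9"  # 169.254.169.254 in the little-endian hex /proc/net/tcp uses
ALLOWED = {"cloud-init", "amazon-ssm-agent"}  # example allowlist, adjust to taste

def imds_socket_inodes():
    """Yield socket inodes whose remote address is the IMDS endpoint."""
    with open("/proc/net/tcp") as f:
        next(f)  # skip header row
        for line in f:
            fields = line.split()
            remote_ip = fields[2].split(":")[0]
            if remote_ip == IMDS_HEX:
                yield fields[9]  # inode column

def pid_for_inode(inode):
    """Scan /proc/<pid>/fd for the process holding a given socket inode."""
    target = f"socket:[{inode}]"
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(f"{fd_dir}/{fd}") == target:
                    return pid
        except OSError:  # process exited or permission denied
            continue
    return None

for inode in imds_socket_inodes():
    pid = pid_for_inode(inode)
    if pid:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
        if name not in ALLOWED:
            print(f"unexpected IMDS client: pid={pid} comm={name}")
```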

When attackers leverage public datasets like Common Crawl to identify and target vulnerable infrastructure, how does this change the typical threat landscape? What strategies can cloud providers use to prevent their IP ranges from being systematically scanned and targeted through these public archives?

The use of Common Crawl datasets is a clever way for attackers to outsource the heavy lifting of reconnaissance to a legitimate non-profit entity. Instead of blindly scanning the entire internet, which is noisy and often blocked, they pull parquet files from public archives to identify pre-existing lists of active web infrastructure. This means the attack is already “warm” before the first packet is even sent to your network, as the “cloud_ranges.py” script refreshes these IP targets every 24 hours to stay current with AWS, Azure, and Google Cloud deployments. Cloud providers can help by working more closely with these public archives to mask certain metadata, but the primary responsibility falls on the user to ensure their instances aren’t indexed in a vulnerable state. Organizations should use tools to scan their own public-facing footprint to see exactly what a service like Common Crawl is seeing and close those gaps before a modular framework pulls that data.
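That self-assessment can start with Common Crawl's own public CDX index. The sketch below queries it for everything a given crawl captured about a domain you control; the crawl ID and domain are placeholders, and current crawl IDs are listed at https://index.commoncrawl.org/.

```python
"""Sketch: see what Common Crawl already knows about your own footprint by
querying its public CDX index. The crawl ID below is an example -- pick a
current one from https://index.commoncrawl.org/."""
import json
import requests

CRAWL_ID = "CC-MAIN-2024-33"  # example crawl; IDs change with each crawl
DOMAIN = "example.com"        # replace with a domain you own

resp = requests.get(
    f"https://index.commoncrawl.org/{CRAWL_ID}-index",
    params={"url": f"{DOMAIN}/*", "output": "json"},
    timeout=30,
)

if resp.status_code == 404:
    # The CDX server returns 404 when a crawl has no captures for the domain.
    print("no captures for this domain in this crawl")
else:
    resp.raise_for_status()
    # Each line is a JSON record describing one captured URL.
    for line in resp.text.splitlines():
        record = json.loads(line)
        print(record["timestamp"], record.get("status"), record["url"])
```

Running this against your own domains shows exactly the kind of pre-built target list an attacker can pull without ever touching your network.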

A multi-stage attack often involves bootstrap shell scripts that install modular Python payloads and Sliver binaries. How do these scripts manage to establish long-term persistence, and what specific indicators of compromise should administrators look for when auditing their container and host environments?

Persistence is achieved through a multi-stage “bootstrap” process where the initial shell script prepares the environment, downloads the secondary Python scripts, and fetches an architecture-specific Sliver binary. These scripts often masquerade as legitimate system monitoring tools—for example, the main “worm.py” orchestrator is often written to the disk as “monitor.py” to hide in plain sight. During an audit, administrators should look for unexpected Python processes running with high privileges, particularly those making external network connections, and check for the presence of hidden files or unusual cron jobs created by the bootstrap script. The presence of the “check.sh” script is a major red flag, as it is used to detect the CPU architecture before pulling the final payload. You should also watch for the sudden disappearance of known mining processes, as this framework’s habit of deleting TeamPCP artifacts is a unique behavioral indicator that something more sophisticated has taken over the host.
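A simple audit script can cover several of those indicators at once. The sketch below searches common staging paths for the dropped file names mentioned above and flags cron entries that invoke python, curl, or wget; the search paths and cron locations are generic Linux defaults, so adapt them to your images.

```python
"""Sketch of a host audit for the indicators described above. The file names
(monitor.py, check.sh, worm.py) come from the campaign; the search paths and
cron locations are generic Linux defaults."""
import os

IOC_NAMES = {"monitor.py", "check.sh", "worm.py"}
SEARCH_ROOTS = ["/tmp", "/var/tmp", "/opt", "/root", "/home"]
CRON_DIRS = ["/etc/cron.d", "/var/spool/cron", "/var/spool/cron/crontabs"]

# 1. Look for the dropped module names under common staging paths.
for root in SEARCH_ROOTS:
    for dirpath, _dirs, files in os.walk(root, onerror=lambda e: None):
        for name in files:
            if name in IOC_NAMES:
                print(f"IoC filename found: {os.path.join(dirpath, name)}")

# 2. Surface cron entries that invoke python or curl/wget -- the kind of
#    unusual cron job a bootstrap script creates for persistence.
for cron_dir in CRON_DIRS:
    if not os.path.isdir(cron_dir):
        continue
    for entry in os.listdir(cron_dir):
        path = os.path.join(cron_dir, entry)
        try:
            with open(path) as f:
                for line in f:
                    if any(tok in line for tok in ("python", "curl", "wget")):
                        print(f"suspicious cron line in {path}: {line.strip()}")
        except OSError:
            continue
```

A filename match alone is weak evidence, of course; pair hits like these with the process and network checks described earlier before declaring a compromise.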

What is your forecast for the evolution of cloud-specific credential theft frameworks?

I expect we will see a rapid evolution toward “cross-cloud” lateral movement, where frameworks become even more adept at jumping between different providers like AWS and Azure using stolen IAM roles. As organizations consolidate their operations into multi-cloud environments, these tools will likely incorporate more advanced identity-based attacks that can bypass multi-factor authentication by hijacking active sessions or tokens harvested directly from container memory. We will also see a higher degree of automation in how these tools monetize stolen access, perhaps by automatically spinning up new, high-cost instances for their own use or selling access to the highest bidder in real-time. The era of the simple, noisy crypto-miner is ending, and we are entering a phase where silent, modular credential stealers will be the primary threat to cloud integrity. Defense will have to move closer to the identity layer, treating every internal service request with the same level of suspicion as an external one.
