The silent, methodical reconnaissance of corporate artificial intelligence systems has officially begun, marked by nearly 100,000 automated probes mapping the next generation of digital battlegrounds. These are not random digital noise or isolated incidents; they represent a concerted effort by threat actors to understand and inventory the burgeoning AI attack surface. This analysis will dissect the nature of these campaigns and, more importantly, detail the crucial security best practices required to defend against this evolving threat.
The New AI Battlefield: Understanding the Emerging Threat
A recent investigation uncovered two distinct campaigns that collectively launched nearly 100,000 probing attempts against corporate AI and Large Language Model services. This activity signals a strategic shift, where adversaries are moving beyond theoretical exploits to actively map live, public-facing AI deployments. The goal is clear: to identify vulnerabilities, understand configurations, and prepare for more disruptive attacks in the near future.
This escalation demands an immediate and practical response from security teams. The era of treating AI security as a future problem is over. The insights gained from observing these reconnaissance efforts provide a unique opportunity to build a proactive defense. The following best practices are derived directly from the tactics, techniques, and procedures used in these real-world campaigns, offering a clear roadmap for securing your AI infrastructure.
Why These Probes Are a Major Concern for Your Business
Reconnaissance campaigns of this magnitude are almost always precursors to more significant and malicious activity. By methodically probing endpoints, attackers are gathering the intelligence needed to launch sophisticated attacks tailored to specific systems. These initial, seemingly harmless queries are the foundation upon which damaging cyberattacks are built, making them a critical early warning sign.
The risks associated with a successful follow-up attack are severe and multifaceted. They include the exfiltration of sensitive proprietary data fed into the models, the exploitation of underlying server infrastructure through vulnerabilities like Server-Side Request Forgery (SSRF), and the insidious threat of model poisoning to corrupt AI outputs. Furthermore, simply being identified and “mapped” during these probes places an organization on a threat actor’s curated list of potential targets, significantly increasing the likelihood of future attacks.
Actionable Best Practices to Secure Your AI Endpoints
Defending against this new wave of threats requires a multi-layered security strategy that moves beyond conventional endpoint protection. The essential defensive measures detailed below are designed to be clear and actionable, allowing security teams to implement them immediately. Each practice is directly informed by the attacker behaviors observed during the recent reconnaissance campaigns, providing a defense that is both relevant and effective.
Proactively Block Attacker Infrastructure
A foundational step in any robust defense is to deny attackers their operational tools and platforms. In the context of AI probes, this means actively blocking the infrastructure they rely on to validate exploits. Many of the observed attacks leveraged Out-of-Band Application Security Testing (OAST) services, which are external systems used to confirm if a vulnerability, such as Server-Side Request Forgery (SSRF), can successfully make an outbound connection from the target’s network.
Blocking known malicious IP addresses and domains associated with these OAST callbacks effectively cuts off the attacker’s confirmation channel. Without this feedback loop, they cannot easily verify the success of their exploit attempts, which can deter less sophisticated actors and complicate the operations of more advanced ones. This proactive blocking creates an immediate and tangible barrier against a common exploitation vector.
Learning from the Campaigns: Blocking Identified IPs and Domains
The two detected campaigns yielded specific, actionable intelligence. Researchers identified the exact OAST domains and IP addresses that the attackers used for their callback validation. By adding these known indicators of compromise (IOCs) to blocklists at the firewall or web application firewall (WAF) level, organizations can create an immediate layer of defense tailored to the current threat landscape. This reactive measure, based on fresh intelligence, serves as a powerful and precise counter to the tools being actively used in the wild.
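As a concrete illustration, the sketch below turns a file of published IOC addresses into firewall drop rules. It assumes the campaign IOCs have already been collected into a local file (oast_iocs.txt is a hypothetical name), and it prints nftables commands rather than applying them, so the output can be reviewed before deployment.

```python
#!/usr/bin/env python3
"""Generate nftables drop rules from a list of published IOCs.

A minimal sketch: the IOC file name and the table/chain names are
illustrative, not taken from the campaign report.
"""
import ipaddress
import sys

IOC_FILE = "oast_iocs.txt"  # hypothetical local file of published IOC IPs


def load_iocs(path: str) -> list[str]:
    """Read IOC entries, keeping only lines that parse as IP addresses."""
    iocs = []
    with open(path) as fh:
        for line in fh:
            entry = line.strip()
            if not entry or entry.startswith("#"):
                continue
            try:
                ipaddress.ip_address(entry)
            except ValueError:
                continue  # skip domains/malformed lines; handle those at the WAF or DNS layer
            iocs.append(entry)
    return iocs


def main() -> None:
    try:
        iocs = load_iocs(IOC_FILE)
    except FileNotFoundError:
        sys.exit(f"IOC file not found: {IOC_FILE}")
    # Emit one drop rule per IOC in each direction: inbound probes
    # and outbound OAST callbacks.
    for ip in iocs:
        print(f"nft add rule inet filter input ip saddr {ip} drop")
        print(f"nft add rule inet filter output ip daddr {ip} drop")
    print(f"# {len(iocs)} IOC addresses processed", file=sys.stderr)


if __name__ == "__main__":
    main()
```

Printing the rules rather than applying them keeps the script auditable; the same list can be loaded into a WAF blocklist through whatever management interface it exposes.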
Control Outbound Traffic and Suspicious Activity
An effective security posture involves controlling not only what comes into your network but also what goes out. Implementing a strategy of egress filtering is crucial, as it prevents servers from making unauthorized outbound connections to attacker-controlled infrastructure. This is particularly effective against SSRF attacks, where the exploit’s goal is to force the server to “phone home” to an external endpoint.
In addition to filtering, monitoring and rate-limiting traffic from suspicious Autonomous System Numbers (ASNs) provides another layer of control. Analysis of the recent probe campaigns revealed that a significant portion of the attack traffic originated from a small number of ASNs. By identifying these hotspots and applying stricter traffic rules, security teams can significantly reduce their exposure to known sources of malicious activity.
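To make the egress side concrete, here is a minimal sketch of a deny-by-default outbound check, of the kind an application-level proxy might apply before allowing a server-initiated request. The allowlist entries are placeholders, not destinations from the report.

```python
"""Deny-by-default egress check for server-initiated requests.

A minimal sketch of the egress-filtering logic described above. A real
deployment would load the approved destinations from configuration.
"""
from urllib.parse import urlparse

# Hypothetical allowlist: the only destinations this server may reach.
EGRESS_ALLOWLIST = {
    "api.internal.example.com",
    "telemetry.example.com",
}


def egress_allowed(url: str) -> bool:
    """Return True only if the destination host is explicitly approved."""
    host = urlparse(url).hostname or ""
    return host.lower() in EGRESS_ALLOWLIST


# An SSRF payload pointing at an OAST callback fails the check, so the
# attacker never receives confirmation that the exploit fired.
assert not egress_allowed("http://attacker-callback.oast.example/ping")
assert egress_allowed("https://api.internal.example.com/v1/embeddings")
```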
Case Study: Neutralizing SSRF and Rate-Limiting Attack Hotspots
Consider a scenario where an attacker successfully exploits an SSRF vulnerability in an LLM endpoint. In an unfiltered environment, the compromised server would make a connection back to the attacker’s OAST server, confirming the vulnerability. However, with proper egress filtering in place that only allows outbound connections to a pre-approved list of destinations, that callback is blocked. The exploit is effectively neutralized because the attacker receives no confirmation of its success. This tactic, when combined with rate-limiting prominent attack sources like AS152194, AS210558, and AS51396, creates a formidable defense against these specific campaigns.
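A sketch of the rate-limiting half of that defense appears below: a token bucket per ASN, with tighter budgets for the hotspot ASNs named above. The asn_lookup() helper is a hypothetical stand-in for whatever IP-to-ASN data source is already available (an offline ASN database or BGP feed), and the numeric limits are illustrative.

```python
"""Per-ASN rate limiting for observed attack hotspots.

A minimal sketch using one token bucket per ASN; limits are
illustrative, not recommendations from the campaign report.
"""
import time

# Stricter request-per-second budgets for ASNs observed originating probes.
ASN_LIMITS = {"AS152194": 5, "AS210558": 5, "AS51396": 5}
DEFAULT_LIMIT = 100


class TokenBucket:
    def __init__(self, rate: float):
        self.rate = rate              # tokens added per second, also the cap
        self.tokens = rate            # start full
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


buckets: dict[str, TokenBucket] = {}


def asn_lookup(ip: str) -> str:
    """Placeholder: resolve a source IP to its ASN via your own data source."""
    raise NotImplementedError


def should_serve(ip: str) -> bool:
    asn = asn_lookup(ip)
    if asn not in buckets:
        buckets[asn] = TokenBucket(ASN_LIMITS.get(asn, DEFAULT_LIMIT))
    return buckets[asn].allow()
```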
Detect Reconnaissance and Fingerprinting Patterns
Adversaries rely on stealth to conduct their initial reconnaissance, often using subtle techniques designed to fly under the radar of traditional security tools. Therefore, it is essential to implement monitoring and alerting specifically for patterns indicative of AI endpoint reconnaissance. This includes looking for behaviors like rapid-fire requests from a single source hitting multiple different model endpoints in a short period.
Another key pattern to watch for is the use of seemingly innocuous queries. Attackers often send simple, non-threatening prompts to an API to see what kind of model responds without triggering security alerts. This technique, known as fingerprinting, helps them identify the underlying AI technology and version in use, which in turn allows them to select the most effective exploits for a future attack.
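A simple sliding-window detector captures both behaviors. The sketch below flags any source IP that touches an unusually large number of distinct endpoints within a short window; the window size and threshold are illustrative and should be tuned against normal traffic.

```python
"""Detect rapid-fire probing of many model endpoints from one source.

A minimal sketch of the alerting pattern described above; thresholds
are illustrative placeholders.
"""
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
DISTINCT_ENDPOINT_THRESHOLD = 10  # tune against baseline traffic

# source IP -> deque of (timestamp, endpoint) events
events: dict[str, deque] = defaultdict(deque)


def record_request(src_ip: str, endpoint: str, now: float | None = None) -> bool:
    """Record one request; return True if the source looks like a scanner."""
    now = time.monotonic() if now is None else now
    q = events[src_ip]
    q.append((now, endpoint))
    # Expire events that have aged out of the sliding window.
    while q and now - q[0][0] > WINDOW_SECONDS:
        q.popleft()
    distinct_endpoints = {ep for _, ep in q}
    return len(distinct_endpoints) >= DISTINCT_ENDPOINT_THRESHOLD
```

Fed from gateway or WAF access logs, a detector like this surfaces scanners long before any exploit payload arrives.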
Identifying the Methodical Probe Campaign
The second, more malicious of the two observed campaigns provides a clear example of this technique in action. It systematically probed over 73 different LLM endpoints, including those for OpenAI, Gemini, and Llama, using simple, generic queries. The goal was not to exploit but to enumerate: to build a map of which organizations were running which models. Alerting on this type of methodical, wide-ranging query pattern from a single source is a critical step in detecting a dedicated adversary in the earliest stages of an attack.
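The enumeration behavior can be detected more specifically by classifying request paths by model family. The sketch below is a hypothetical illustration: the path patterns are generic approximations of common LLM-serving APIs, not signatures from the campaign, and the alert threshold is arbitrary.

```python
"""Flag a single source enumerating endpoints across many model families.

A minimal sketch; patterns and threshold are illustrative placeholders.
"""
import re
from collections import defaultdict

# Rough path fragments for common LLM-serving APIs (illustrative only).
FAMILY_PATTERNS = {
    "openai": re.compile(r"/v1/(chat/)?completions"),
    "gemini": re.compile(r"/v1(beta)?/models/gemini"),
    "llama": re.compile(r"llama", re.IGNORECASE),
}
FAMILY_THRESHOLD = 3  # alert once a source has touched this many families

seen_families: dict[str, set] = defaultdict(set)


def classify(path: str) -> str | None:
    """Map a request path to the model family it appears to target."""
    for family, pattern in FAMILY_PATTERNS.items():
        if pattern.search(path):
            return family
    return None


def observe(src_ip: str, path: str) -> bool:
    """Return True when a source has probed enough distinct families to alert."""
    family = classify(path)
    if family:
        seen_families[src_ip].add(family)
    return len(seen_families[src_ip]) >= FAMILY_THRESHOLD
```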
Employ Advanced Network Fingerprinting and DNS Security
To counter more sophisticated adversaries, security teams must adopt advanced defensive tactics. One highly effective strategy is to block known OAST services at the DNS level. By preventing internal systems from resolving the domain names of these callback services, an organization can completely sever the channel attackers use for exploit validation, rendering a whole class of OAST-based attacks ineffective.
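The decision logic for such a DNS deny list is straightforward, as the sketch below shows. The zones listed are examples of widely used public OAST services; a production deployment would extend the set with domains from current threat intelligence and enforce the policy in the resolver itself (for example via a response policy zone).

```python
"""DNS-level deny list for known OAST provider zones.

A minimal sketch of the resolution-blocking decision. The zones are
examples of popular public OAST services, not IOCs from the report.
"""
OAST_ZONES = {
    "oast.pro",
    "oast.fun",
    "oast.me",
    "interact.sh",
    "burpcollaborator.net",
}


def should_block(qname: str) -> bool:
    """Return True if the queried name falls under a blocked OAST zone."""
    name = qname.rstrip(".").lower()
    return any(name == zone or name.endswith("." + zone) for zone in OAST_ZONES)


# Example: a random callback subdomain is refused resolution.
assert should_block("c7f3k2.oast.fun.")
assert not should_block("api.example.com")
```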
Another advanced technique involves monitoring JA4+ network fingerprints. These signatures capture the characteristics of the client software that initiates a connection, such as the TLS version, cipher suites, and extensions it offers during the handshake. Because attacker tooling and automation scripts often produce a consistent, identifiable JA4+ fingerprint, this method allows security teams to track and block the tool itself, even as the attacker rotates through different IP addresses.
Unmasking Attacker Tools with JA4+ Signatures
Imagine an attacker launching probes from hundreds of different IP addresses to evade traditional IP-based blocking. While the source address changes, the underlying automated script or tool they use to launch the attack often remains the same. This tool will likely produce a consistent JA4+ fingerprint with every connection it makes. By monitoring for and blocking these specific network fingerprints, security teams can effectively unmask and neutralize the attacker’s automation, providing a far more resilient defense than one based on ephemeral IP addresses alone.
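The enforcement logic itself can be very small once a sensor supplies the fingerprint, as in the sketch below. The blocklisted value is a made-up placeholder in JA4's three-part format, not a real indicator; in practice the fingerprint would be computed by a JA4-aware proxy or IDS, and the blocklist curated from observed attack traffic.

```python
"""Block connections whose JA4 TLS fingerprint matches known tooling.

A minimal sketch of the matching step only; fingerprint computation
is assumed to happen upstream in a JA4-aware sensor.
"""
# Hypothetical JA4 fingerprints attributed to the probing tool.
BLOCKED_JA4 = {
    "t13d1908h2_aaaaaaaaaaaa_bbbbbbbbbbbb",  # placeholder, not a real IOC
}


def should_drop(ja4_fingerprint: str) -> bool:
    """Drop the connection when the client's JA4 hash is blocklisted."""
    return ja4_fingerprint in BLOCKED_JA4


# The attacker can rotate IPs freely; as long as the tooling is
# unchanged, its JA4 fingerprint stays constant and keeps matching.
assert should_drop("t13d1908h2_aaaaaaaaaaaa_bbbbbbbbbbbb")
assert not should_drop("t13d1516h2_cccccccccccc_dddddddddddd")
```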
The Final Verdict: Moving AI Security from Theory to Practice
The discovery of these nearly 100,000 methodical probes served as a definitive signal that the era of theoretical AI threats is over. Malicious actors are no longer just discussing possibilities; they are actively mapping the infrastructure for future exploitation. Any organization deploying public-facing Large Language Models must now operate under the assumption that it is a target. Implementing robust security measures like proactive infrastructure blocking, egress filtering, and advanced network fingerprinting is no longer optional. These practices represent the new baseline for protecting an organization’s expanding and increasingly critical AI infrastructure.
