In the rapidly evolving landscape of cybersecurity, the old rulebook is being thrown out. For years, we’ve been taught that managing vulnerabilities is the key to security, but a widening chasm has opened between this theory and the reality of how attackers operate. To bridge this gap, we sat down with Rupert Marais, an in-house security specialist with deep expertise in adversary tactics and defensive strategies. We explored the paradigm shift toward a threat-led defense, discussing how it moves organizations from a passive, reactive posture to a dynamic, proactive one by focusing on attacker behaviors rather than a simple checklist of weaknesses. Our conversation delved into the practical steps for escaping the “Exposure Trap,” the critical importance of cross-team collaboration, and how to validate that security controls actually stand up to real-world attack techniques.
You describe a “widening gap” between traditional security models and the behavioral reality of adversaries. Can you share a specific example of how this gap leads to a breach and walk us through how a threat-led approach would have altered that defensive outcome?
Absolutely, this is something we see constantly. Imagine a financial services firm that’s hyper-focused on its vulnerability management program. Their dashboard is green, they’ve patched every critical CVE, and they feel secure. However, a threat actor targeting them doesn’t even bother looking for an unpatched server. Instead, their intelligence shows that this firm’s employees are active on professional networking sites. The attacker uses AI-generated phishing to steal the credentials of a mid-level accountant. From there, they use legitimate, built-in system tools—a behavior, not a vulnerability—to move laterally and access sensitive data. The traditional model missed this completely because no vulnerability was exploited. A threat-led approach would have started differently. It would have identified that adversaries targeting their sector heavily rely on credential theft and living-off-the-land techniques. The defensive strategy would then prioritize stronger identity controls, multifactor authentication everywhere, and granular detection rules for abnormal use of system tools like PowerShell, effectively shutting down the attacker’s preferred path before they could even get started.
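To make the "detect abnormal use of system tools" idea concrete, here is a minimal illustrative sketch of a behavior-based check for suspicious PowerShell activity. The event field names (`process`, `parent`, `cmdline`), the marker strings, and the parent-process list are assumptions for illustration, not the schema or logic of any particular EDR product.

```python
# Hypothetical sketch: flag PowerShell abuse by behavior rather than by CVE.
# Field names and marker strings are illustrative assumptions, not a real
# EDR schema or a production-grade rule.
SUSPICIOUS_MARKERS = [
    "-enc",            # encoded command, a common obfuscation flag
    "downloadstring",  # in-memory download cradle
    "bypass",          # -ExecutionPolicy Bypass
    "-w hidden",       # hidden window
]

def is_abnormal_powershell(event: dict) -> bool:
    """Return True if a process event looks like living-off-the-land abuse."""
    if event.get("process", "").lower() not in ("powershell.exe", "pwsh.exe"):
        return False
    cmdline = event.get("cmdline", "").lower()
    # Behavioral signal: PowerShell spawned by an application that should
    # never launch a shell (e.g. an Office document opened from a phish).
    odd_parent = event.get("parent", "").lower() in (
        "winword.exe", "excel.exe", "outlook.exe")
    return odd_parent or any(m in cmdline for m in SUSPICIOUS_MARKERS)

# Example: a payload launched from a phishing document
event = {"process": "powershell.exe",
         "parent": "winword.exe",
         "cmdline": "powershell.exe -nop -w hidden -enc SQBFAFgA..."}
print(is_abnormal_powershell(event))  # → True
```

The point of the sketch is that nothing here references a CVE: the rule fires on *how* a legitimate tool is being used, which is exactly the signal a vulnerability-centric program never looks for.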
The article introduces the “Exposure Trap,” where organizations rely on enumerating vulnerabilities. Beyond unpatched software, what are the top procedural weaknesses you see attackers exploit that scanners miss? Please share some metrics or anecdotes that illustrate the scale of this problem for organizations.
The “Exposure Trap” is a dangerous comfort zone for many organizations. Scanners are great at finding known software flaws, but they are blind to procedural and configuration weaknesses that attackers love. The most common one we see is identity misuse. This isn’t just about stolen passwords; it’s about overly permissive access controls, service accounts with privileges that are never audited, and a lack of monitoring for how those credentials are used. Another huge one is control gaps in the security stack itself, where one tool doesn’t properly hand off to another, creating a blind spot. The recent 2025 Threat-Led Defense Report really highlighted this, showing a trend where adversaries are blending techniques from different playbooks, making their behavior unpredictable. For instance, an attacker might use a technique typically associated with a Chinese state actor and then pivot to a procedure straight from a Russian cybercrime group’s playbook. A vulnerability scanner will never tell you that you’re susceptible to that creative, multi-faceted behavioral chain of attack. It’s this adversarial creativity that makes relying solely on exposure management such a critical failure.
You compare the shift to threat-led defense with the move to Zero Trust. For a security team just starting this journey, what are the first three concrete steps to pivot from a vulnerability-based to a behavior-based model? Please outline a practical approach they could follow.
That’s an excellent parallel, as both are fundamental shifts in mindset. For a team just starting, the path can seem daunting, but it boils down to three manageable steps. First, you must establish context. Stop trying to defend against every threat on the planet. Instead, use threat intelligence to answer the question, “Which two or three adversaries are most likely to target my industry and my organization?” Focus your energy on understanding their specific TTPs. Second, build a shared language. Get your threat intelligence analysts, detection engineers, and SOC teams in the same room—physically or virtually. Instead of the intel team throwing a report over the wall, they should be explaining the procedures an attacker uses. This breaks down silos and ensures that the people writing detections understand the “how” and “why” behind the threat. Finally, start validating. Pick one single, high-priority technique used by your primary adversary and test if your controls can actually detect or block it. Run a tabletop exercise or a small-scale adversary emulation. This makes the threat real and moves you from a theoretical risk model to an evidence-based one, creating momentum for the entire program.
The content highlights cross-team collaboration as a foundational pillar. How does a threat-led approach specifically break down the silos between a threat intelligence analyst and a detection engineer? Describe how their daily workflow and shared language would concretely change in this model.
It completely transforms their relationship from a linear handoff to a dynamic feedback loop. In a traditional, siloed environment, a threat intel analyst might produce a report saying, “Ransomware group X is using Command and Scripting Interpreter (T1059).” They send it to the detection engineer, who then has to interpret what that means and build a generic rule. In a threat-led model, the conversation is far more granular and collaborative. The intel analyst comes to the engineer and says, “This specific adversary uses PowerShell (T1059.001) with these exact command-line arguments to disable security logging before they deploy their payload. Here is the procedural detail.” The detection engineer can then build a highly precise rule that looks for that exact behavioral sequence. When that rule fires, the alert goes to the SOC, and the feedback—whether it was a true positive or a false positive—flows right back to both the engineer and the analyst. This allows them to refine the detection and enrich the intelligence in real time. Their shared language is no longer just CVEs or IP addresses; it’s the granular, procedural detail of adversary behavior.
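A procedure-level rule of the kind described above correlates a sequence of behaviors rather than a single indicator. The sketch below is an illustrative assumption, not a real rule from any product: the event fields, the tampering command strings, and the ten-minute window are all invented for the example, and a real detection would run against telemetry such as process-creation and file-write events.

```python
# Hypothetical sketch of a procedure-level detection: PowerShell tampering
# with security logging, followed shortly by an executable being dropped.
# Event fields, tampering strings, and the time window are illustrative.
from datetime import datetime, timedelta

LOG_TAMPERING = ("wevtutil cl security",
                 "wevtutil sl security /e:false",
                 "set-mppreference -disablerealtimemonitoring")

def detect_tamper_then_drop(events, window=timedelta(minutes=10)):
    """Yield an alert when log tampering precedes a file write within `window`."""
    tampers = [e for e in events
               if e["type"] == "process"
               and "powershell" in e["image"].lower()
               and any(s in e["cmdline"].lower() for s in LOG_TAMPERING)]
    for t in tampers:
        for e in events:
            if (e["type"] == "file_write"
                    and e["path"].lower().endswith(".exe")
                    and t["time"] <= e["time"] <= t["time"] + window):
                yield {"rule": "logging-tamper-then-payload",
                       "tamper": t["cmdline"], "dropped": e["path"]}

# Example telemetry: logging disabled at 09:00, payload written at 09:04
events = [
    {"type": "process", "image": "powershell.exe",
     "cmdline": "powershell wevtutil sl Security /e:false",
     "time": datetime(2025, 1, 1, 9, 0)},
    {"type": "file_write", "path": "C:\\Users\\Public\\payload.exe",
     "time": datetime(2025, 1, 1, 9, 4)},
]
alerts = list(detect_tamper_then_drop(events))
print(alerts[0]["rule"])  # → logging-tamper-then-payload
```

Either event alone is noisy; it is the ordered pair, supplied by the analyst's procedural intelligence, that makes the rule precise enough to be actionable in the SOC.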
The text states this approach helps answer, “Do my controls actually defend against those behaviors?” What is the step-by-step process a team would use to answer that specific question, and what kind of granular, procedural intelligence is essential for that validation to be meaningful?
Answering that question is the core purpose of this model, and it requires a systematic, repeatable process. First, you select a specific adversary behavior to test based on your threat intelligence—for example, an attacker using a specific method to exfiltrate data over a non-standard port. Second, you must acquire what the article calls “in-depth procedural granularity.” It’s not enough to know they use exfiltration; you need to know how. Do they use a specific tool? Do they encrypt the data first? What commands do they run? This granular intelligence is the absolute key to a meaningful test. Third, you emulate that exact procedure in a controlled manner within your environment, which is often done through purple teaming or using an adversary emulation platform. The goal is to replicate the attacker’s actions precisely. Fourth, you observe your defensive stack. Did your EDR generate an alert? Did your data loss prevention tool block the transfer? Was the alert actionable or just noise? Finally, you analyze the results to identify gaps, refine your controls and detections, and then you repeat the entire process with the next relevant TTP. It’s a continuous loop of testing, measuring, and hardening based on real-world adversary actions.
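The loop described above can be sketched as a small harness. Everything here is a stand-in by assumption: `PRIORITY_TTPS`, `emulate()`, and `query_alerts()` represent a real adversary-emulation platform and a SIEM query API, neither of which is named in the source.

```python
# Hypothetical sketch of the select -> emulate -> observe -> analyze loop.
# emulate() and query_alerts() are stand-ins for an adversary-emulation
# platform and a SIEM API; the TTP list is an invented example.
PRIORITY_TTPS = [
    {"id": "T1048.003", "name": "Exfiltration over non-standard port"},
    {"id": "T1562.001", "name": "Disable or modify security tools"},
]

def emulate(ttp):
    """Stand-in: run the exact procedure in a controlled lab environment."""
    return {"ttp": ttp["id"], "executed": True}

def query_alerts(ttp):
    """Stand-in: ask the SIEM whether the emulation produced an alert."""
    return []  # simulate a detection gap for the example

def validation_cycle(ttps):
    results = []
    for ttp in ttps:
        emulate(ttp)                       # replicate the attacker's actions
        alerts = query_alerts(ttp)         # observe the defensive stack
        results.append({"ttp": ttp["id"],  # record coverage vs. gap
                        "detected": bool(alerts)})
    return results                         # feed gaps back, then repeat

for r in validation_cycle(PRIORITY_TTPS):
    print(f'{r["ttp"]}: {"covered" if r["detected"] else "GAP"}')
```

The output of each cycle is the evidence base the interview calls for: a per-technique record of what was actually detected, which drives the next round of control and detection refinement.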
What is your forecast for threat-led defense over the next five years? As adversaries increasingly leverage AI for evasion and reconnaissance, how will this defensive model need to adapt, and what new challenges or opportunities do you see emerging for security teams?
My forecast is that threat-led defense will cease to be a niche strategy and will become the default operating system for any mature security program. It has to. As you mentioned, adversaries are already using AI to create novel phishing lures, automate reconnaissance, and generate evasive code on the fly. This will make their behaviors far more dynamic and adaptive than we’ve ever seen. The challenge for security teams will be that defending against a static list of known TTPs will become obsolete. We’ll be facing adversaries whose procedures change from one attack to the next. However, this also presents a massive opportunity. We can leverage AI on the defensive side to model these adaptive behaviors, predict an attacker’s likely next move based on initial actions, and automate the continuous validation of our controls against these AI-driven attack variations. The future isn’t just about knowing what an adversary has done; it’s about being able to anticipate and defend against what they could do. Threat-led defense is the framework that allows us to build that predictive, adaptive defensive engine.
