Rupert Marais is a leading security specialist who has spent years mastering the intricate intersection of endpoint protection, network management, and large-scale cybersecurity strategy. With a deep focus on how emerging technologies can fortify enterprise environments, Rupert brings a wealth of experience in managing high-stakes digital landscapes. Today, he shares his expertise on the evolution of threat hunting, specifically focusing on how massive institutions are utilizing AI-driven digital fingerprints and twins to stay ahead of sophisticated adversaries.
Throughout this conversation, we explore the complexities of monitoring hundreds of thousands of users and the vast data footprints they leave behind. We discuss the transition from basic anomaly detection to high-fidelity behavioral modeling, the critical role of external context in reducing false positives, and the challenges of securing autonomous AI agents. Rupert also provides a look at the technical infrastructure required to scale these systems globally and offers his vision for the future of AI-driven defense.
Managing security for over 320,000 employees across thousands of applications creates an enormous data load. How do you differentiate between harmless user anomalies and actual malicious intent, and what specific metrics do you use to measure the accuracy of these initial behavioral flags?
Distinguishing between a worker who is simply having a productive, late-night burst of energy and a malicious actor requires a deep dive into “casual and cognitive” behavior. We don’t just look at a single login; we look at the deviation from a long-term baseline to rate the potential maliciousness of the action on a granular scale. For instance, in an environment with over 6,000 applications, we use AI to triage mountains of user logs that would be impossible for a human to process manually. Our primary metric for success is the reduction of false-positive alerts, ensuring that when an analyst receives a flag, it represents a verified risk rather than a benign change in routine. This allows our team to focus their emotional and intellectual energy on high-stakes investigations rather than drowning in noise.
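The baseline-deviation scoring described here can be sketched in a few lines. This is a hypothetical illustration, not the bank's actual system: it scores a single signal (login hour) as a z-score against a user's long-term history, where a real deployment would combine many behavioral features.

```python
from statistics import mean, stdev

def anomaly_score(baseline_hours, observed_hour):
    """Rate how far a login hour deviates from a user's long-term baseline.

    Returns a z-score: higher means more anomalous. Illustrative sketch;
    a production system models many signals, not just login time.
    """
    mu = mean(baseline_hours)
    sigma = stdev(baseline_hours)
    if sigma == 0:
        return 0.0 if observed_hour == mu else float("inf")
    return abs(observed_hour - mu) / sigma

# A user who normally logs in around 9 a.m.
baseline = [8, 9, 9, 10, 9, 8, 9]
late_night = anomaly_score(baseline, 23)  # large deviation -> high score
usual_time = anomaly_score(baseline, 9)   # near baseline -> low score
```

Thresholding on a score like this, rather than on any single event, is what lets an analyst treat a flag as a verified deviation instead of a one-off oddity.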
Digital fingerprints analyze the casual and cognitive habits of employees to establish a behavioral baseline. What specific data points are most critical for building these profiles, and how do you ensure the system adapts when a user’s job responsibilities or work patterns naturally evolve?
The most critical data points involve the intersection of where a user shops, what tasks they perform daily, and even the “flavor” of their digital interactions—much like an advertising profile but applied to security. We track these patterns across thousands of applications to understand what “normal” looks like for each specific role within the bank’s global workforce. If an employee moves from a back-office role to a high-access trading desk, the system doesn’t just trigger an alarm; it incorporates real-time data to update the digital twin. This adaptability prevents the system from becoming a static, rigid gatekeeper and instead makes it a fluid partner that understands professional growth and shifting schedules.
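The adaptive baseline Rupert describes can be approximated with an exponentially weighted update: the profile drifts toward new behavior instead of treating every role change as a permanent alarm. The feature names and the smoothing constant below are assumptions for illustration only.

```python
def update_baseline(baseline, observation, alpha=0.1):
    """Exponentially weighted moving-average update of a behavioral profile.

    alpha (assumed value) controls how quickly the digital twin adapts:
    small alpha = slow, stable drift toward the user's new normal.
    """
    return {
        key: (1 - alpha) * baseline.get(key, 0.0) + alpha * observation.get(key, 0.0)
        for key in set(baseline) | set(observation)
    }

# A back-office profile gradually shifts after a move to the trading desk.
profile = {"back_office_app": 0.9, "trading_app": 0.0}
for _ in range(30):  # thirty days of new behavior
    profile = update_baseline(profile, {"back_office_app": 0.0, "trading_app": 1.0})
```

After a month of consistent new activity, the twin's expectation has largely migrated to the trading applications, so the new pattern stops registering as anomalous.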
Digital twins can simulate behavioral patterns over time while factoring in external variables like geopolitical shifts or major weather events. How does this simulation process help reduce false-positive alerts, and could you walk through a scenario where external context completely changed a risk assessment?
Digital twins are revolutionary because they don’t just look at the “what,” but also the “why” by factoring in real-world stressors that influence human behavior. Imagine a scenario where a major storm hits a regional hub, causing employees to log in at strange hours or from unusual backup locations; a traditional system might flag 19,000 users as suspicious simultaneously. The digital twin, however, ingests that weather data and adjusts the risk threshold, recognizing that the anomaly is a response to an external event rather than a coordinated breach. This layer of context is what allows us to project behavioral patterns over time, ensuring that our defense models are grounded in reality rather than just abstract code.
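One way to picture the context layer is as a multiplier on the alerting threshold: when an external event explains anomalous behavior in a region, the bar for raising an alarm goes up. The event schema and `tolerance_factor` field below are hypothetical, chosen only to make the idea concrete.

```python
def effective_threshold(base_threshold, context_events, user_region):
    """Raise the alerting threshold for a region when an external event
    (e.g. a major storm) explains the anomalous behavior there.

    Sketch under assumed schema: each event carries a region and a
    tolerance_factor that scales the threshold for matching users.
    """
    threshold = base_threshold
    for event in context_events:
        if event["region"] == user_region:
            threshold *= event.get("tolerance_factor", 1.0)
    return threshold

events = [{"type": "storm", "region": "us-east", "tolerance_factor": 2.5}]
# An anomaly score of 4.0 would exceed the normal threshold of 3.0, but
# with the storm in context the effective threshold rises, so thousands of
# displaced users logging in from backup sites do not trigger a mass alert.
storm_threshold = effective_threshold(3.0, events, "us-east")
```

Users outside the affected region keep the normal threshold, so the context adjustment cannot be abused as cover by an attacker operating elsewhere.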
Beyond human employees, thousands of applications and autonomous AI agents now operate within complex enterprise environments. What are the unique challenges of creating digital twins for these non-human agents, and what prescriptive steps are necessary to mitigate damage once a malicious agent is identified?
The challenge with non-human agents is their sheer speed and the fact that their “behavior” is often dictated by complex algorithms rather than predictable human habits. When we build twins for these 6,000+ applications, we have to account for automated spikes in data and machine-to-machine communications that look nothing like human activity. Once a malicious agent is identified, the system must move beyond detection and into prescriptive mitigation, such as instantly isolating the application or revoking its credentials. It’s a high-stakes game where the goal is to contain the damage before the rogue agent can propagate through the entire network, and the AI must act with a decisiveness that matches the speed of the threat.
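The prescriptive step, moving from detection to containment, can be sketched as a minimal playbook. The function and attribute names are illustrative; real containment would call identity-management and network-segmentation APIs rather than flipping flags on an object.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: str
    credentials_valid: bool = True
    network_isolated: bool = False
    actions: list = field(default_factory=list)

def contain(agent: Agent) -> Agent:
    """Prescriptive mitigation sketch: revoke credentials and isolate the
    agent before it can propagate. Hypothetical names; a real system would
    invoke IAM revocation and segmentation controls here."""
    agent.credentials_valid = False
    agent.network_isolated = True
    agent.actions.append("contained")
    return agent

rogue = contain(Agent("etl-bot-42"))
```

The key design point is that both steps run automatically in sequence: at machine speed, waiting for a human to approve credential revocation would give the rogue agent time to spread.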
Scaling a high-tech threat-hunting system from a pilot group of 19,000 users to a global workforce of hundreds of thousands is a significant undertaking. What infrastructure requirements are most demanding during this transition, and how do you maintain system performance under such a massive data influx?
Moving from a pilot of 19,000 to over 320,000 users requires an architectural backbone capable of processing petabytes of log data without latency. The most demanding aspect is the real-time synchronization between the digital fingerprints—which record history—and the digital twins, which simulate future risks. We need massive computational muscle to ensure that the AI can rate an anomaly and provide a recommendation to an analyst in seconds, not hours. Maintaining performance means constantly refining our machine learning models so they don’t become bloated, ensuring the system remains lean and fast enough to catch an attacker trying to hide in the massive crowd of legitimate traffic.
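One standard way to keep a per-user fingerprint "lean" at this scale is to store running statistics instead of raw logs. The sketch below uses Welford's online algorithm, which keeps a constant-memory summary per user regardless of log volume; tracking a single scalar feature is a simplification of what a real fingerprint would hold.

```python
class OnlineBaseline:
    """Constant-memory running mean and variance (Welford's algorithm).

    Each update is O(1), so a fingerprint stays tiny and fast to score
    against even as the underlying log volume grows to petabytes.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

b = OnlineBaseline()
for hour in [9, 8, 9, 10, 9]:
    b.update(hour)
```

Because scoring a new event against this summary needs no historical lookup, the anomaly rating can be returned to an analyst in seconds rather than hours.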
What is your forecast for the use of AI digital twins in cybersecurity over the next decade?
I believe that over the next ten years, digital twins will move from being a specialized tool for the largest banks to the foundational standard for all enterprise security. We will see these twins become increasingly autonomous, not just flagging risks for human review but proactively reshaping network perimeters in real time based on simulated attack paths. As geopolitical and environmental volatility increases, the ability of a security system to understand the context of the world outside its servers will be the deciding factor in who stays protected. Eventually, the distinction between “threat hunting” and “system simulation” will disappear entirely, creating a self-healing digital environment that anticipates breaches before they even begin.
