Rupert Marais stands at the forefront of modern cyber defense, currently serving as an in-house Security Specialist with a deep focus on endpoint protection and large-scale network management. With years of experience navigating the shifting digital landscape, Rupert has witnessed the transition from traditional perimeter defense to the complex, AI-driven environments of today. In this conversation, he explores the dramatic surge in threat intelligence data, the necessity of building custom agentic AI tools when commercial vendors fall behind, and the critical importance of protecting the mental well-being of the next generation of cybersecurity professionals.
The volume of security signals has skyrocketed from 80 million to an incredible 400 billion per week in just six years. How does this massive influx of data redefine your hiring strategy for new graduates, and what mental health safeguards are essential to prevent burnout in such high-pressure environments?
The shift from millions to 400 billion signals a week has fundamentally changed how we bring talent into the fold. We no longer have the luxury of letting early-career IT workers spend years on a help desk; we are hiring cybersecurity graduates who walk directly into a high-pressure “firehose” environment on day one. To prevent this scale from becoming a career-killer, we focus on using AI to elevate our first-level analysts so they can access the same deep institutional knowledge as our senior experts instantly. Our goal is to “take scale off the table” by automating the monotony, ensuring that these bright minds are still working in the industry 20 years from now rather than burning out within their first 24 months.
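To make that “elevation” idea concrete, here is a minimal, hypothetical sketch of how institutional knowledge might be surfaced to a first-level analyst. The `RUNBOOKS` table and `assist` helper are illustrative assumptions, not the actual tooling Marais describes.

```python
# Hypothetical runbook store: the institutional knowledge of senior
# analysts, indexed so a day-one hire can pull the same guidance instantly.
RUNBOOKS = {
    "ransomware": "Isolate the host, preserve a memory image, notify the IR lead.",
    "phishing": "Pull full headers, detonate attachments in a sandbox, hunt for peer recipients.",
    "beaconing": "Check DNS logs for periodic callouts and correlate with EDR alerts.",
}

def assist(alert_text: str) -> list[str]:
    """Match alert text against runbook topics. A production assistant
    would retrieve over real incident history with an LLM, not keywords."""
    terms = alert_text.lower()
    return [guidance for topic, guidance in RUNBOOKS.items() if topic in terms]

# A junior analyst's raw alert gets senior-level guidance back immediately.
print(assist("Possible ransomware beaconing from a finance workstation"))
```

The keyword matching is a stand-in; the shape that matters is junior query in, senior guidance out, with the drudgery of digging through wikis and ticket history automated away.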
When commercial security vendors are unable to keep pace with emerging threats, many organizations are building their own agentic AI tools. How do these internal tools reduce threat assessment times from days to minutes, and what are the essential steps for integrating legacy infrastructure with modern AI-driven defense agents?
In our experience, waiting for a vendor product to update can leave us exposed, so we built our own agentic AI tools to ingest research and analyze it against our specific data in real time. Previously, it took our team roughly two days to assess a new threat and form a hypothesis, but our custom agents now complete that same task and prepare a report in just 30 minutes. The integration process is complex because these agents must scan everything from on-prem legacy systems to cloud-hosted workloads and SaaS environments. We bridge this gap by ensuring the AI understands the unique “footprint” of our sprawling estate, allowing it to identify indicators of compromise across both the old and the new.
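As a rough illustration of the correlation step described above (new research in, estate-wide findings out), here is a hedged sketch. The `Indicator`, `Asset`, and `assess` structures are invented for this example and are not the actual agent.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    kind: str   # e.g. "file_hash", "domain", "registry_key"
    value: str

@dataclass
class Asset:
    name: str
    environment: str                                  # "on_prem", "cloud", "saas"
    observed: set[str] = field(default_factory=set)   # telemetry values seen on the asset

def extract_indicators(research_text: str) -> list[Indicator]:
    """Toy stand-in for the ingestion step: a real agent would parse
    advisories and research papers with an LLM or a dedicated IOC parser."""
    indicators = []
    for line in research_text.splitlines():
        if line.startswith("hash:"):
            indicators.append(Indicator("file_hash", line.split(":", 1)[1].strip()))
        elif line.startswith("domain:"):
            indicators.append(Indicator("domain", line.split(":", 1)[1].strip()))
    return indicators

def assess(research_text: str, estate: list[Asset]) -> list[str]:
    """Correlate fresh indicators against the estate's footprint and emit
    one finding per match, whether the asset is legacy, cloud, or SaaS."""
    findings = []
    for ioc in extract_indicators(research_text):
        for asset in estate:
            if ioc.value in asset.observed:
                findings.append(f"{asset.name} ({asset.environment}): matched {ioc.kind} {ioc.value}")
    return findings

if __name__ == "__main__":
    estate = [
        Asset("legacy-erp-01", "on_prem", {"evil.example.net"}),
        Asset("payroll-saas", "saas"),
    ]
    report = "domain: evil.example.net\nhash: d41d8cd98f00b204e9800998ecf8427e"
    for finding in assess(report, estate):
        print(finding)
```

The point of the sketch is the single `assess` loop: because the agent carries a model of the whole estate, a two-day manual correlation collapses into one automated pass.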
Adversaries are increasingly using AI to rapidly rotate phishing lures while keeping the underlying backend code consistent. How do your automated agents detect specific AI coding artifacts across different attacks, and what metrics are used to distinguish “automation on steroids” from truly intelligent threat hunting?
We’ve observed that while the “lure” or the front-facing email changes constantly, the backend code often remains identical and frequently contains clear artifacts left behind by AI coding tools. Our automated agents are trained to look past the surface-level variations to identify these consistent patterns and AI-generated signatures in the underlying scripts. To distinguish “real AI” from simple “automation on steroids,” we measure the tool’s ability to conduct non-linear problem solving and predictive analysis rather than just executing pre-defined rules. If a tool can’t correlate a new research paper with a specific vulnerability in our unique infrastructure within minutes, it’s just basic automation, not the intelligence we require.
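A toy version of that “look past the lure” logic might blank out string literals before fingerprinting the backend code, then scan for assumed AI-artifact signatures. The patterns below are invented for illustration; a real detector would rely on curated, regularly updated signature sets.

```python
import hashlib
import re

# Hypothetical markers: boilerplate comments and naming habits that AI
# coding tools tend to leave behind in generated scripts.
AI_ARTIFACT_PATTERNS = [
    re.compile(r"#\s*Step \d+:", re.IGNORECASE),
    re.compile(r"placeholder", re.IGNORECASE),
    re.compile(r"def handle_response_\d+\("),
]

def structural_fingerprint(backend_code: str) -> str:
    """Hash the code with string literals blanked out, so two kits that
    differ only in their lure text produce the same fingerprint."""
    stripped = re.sub(r"(['\"]).*?\1", "''", backend_code)
    return hashlib.sha256(stripped.encode()).hexdigest()

def ai_artifact_hits(backend_code: str) -> list[str]:
    """Return the signatures that match, i.e. the AI-generated residue."""
    return [p.pattern for p in AI_ARTIFACT_PATTERNS if p.search(backend_code)]

# Two "different" phishing kits whose lures rotated but whose backend did not:
kit_a = 'subject = "Invoice overdue"\n# Step 1: collect credentials\n'
kit_b = 'subject = "Parcel held at depot"\n# Step 1: collect credentials\n'
assert structural_fingerprint(kit_a) == structural_fingerprint(kit_b)
print(ai_artifact_hits(kit_a))
```

The assertion captures the core insight: once the surface-level lure is normalized away, the rotating campaigns collapse into one recognizable backend.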
AI-generated red team reports are often non-deterministic, meaning they might not report the same threat twice. How do you incorporate deterministic checkpoints into these fluid AI workflows to satisfy legal requirements, and what shift in mindset is required for teams?
This was a major hurdle for us because human-authored reports are traditionally filled with repeatable evidence that satisfies our legal and compliance teams, whereas AI is naturally non-deterministic. To solve this, we had to find a way to insert deterministic “checkpoints” into the fluid AI workflow, assigning fixed outcomes to specific attack patterns so the results become repeatable and predictable. This required a massive shift in mindset for our red teams, who had to move away from purely manual testing to managing a system that is constantly evolving. It’s about teaching the team to trust a probabilistic model while reinforcing it with enough hard data points to stand up in a court of law or a regulatory audit.
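One way to picture those deterministic checkpoints in code: a fixed lookup table pins known attack patterns to repeatable verdicts, and only novel patterns fall through to the probabilistic model. The checkpoint table and the MITRE ATT&CK-style IDs below are illustrative assumptions, not the team’s actual mapping.

```python
import random

# Hypothetical checkpoint table: attack patterns whose outcome must be
# fixed and repeatable for legal/compliance evidence, regardless of what
# the probabilistic model says on any given run.
DETERMINISTIC_CHECKPOINTS = {
    "T1566.001": "confirmed_phishing_attachment",
    "T1078": "confirmed_valid_account_abuse",
}

def model_verdict(pattern_id: str) -> str:
    """Stand-in for the non-deterministic AI: same input, varying output."""
    return random.choice(["likely_malicious", "suspicious", "benign"])

def checkpointed_verdict(pattern_id: str) -> tuple[str, bool]:
    """Pin known patterns to fixed outcomes; only novel patterns fall
    through to the model. The bool flags whether the result is repeatable."""
    if pattern_id in DETERMINISTIC_CHECKPOINTS:
        return DETERMINISTIC_CHECKPOINTS[pattern_id], True
    return model_verdict(pattern_id), False

print(checkpointed_verdict("T1566.001"))  # identical on every run
print(checkpointed_verdict("T9999"))      # may vary run to run
```

The repeatability flag is what an auditor cares about: checkpointed findings can be re-derived on demand, while model-only findings are labeled as probabilistic evidence.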
Effective AI security tools often require close collaboration between data scientists and frontline analysts. Why do initial development attempts frequently fail when these groups work separately, and what specific strategies ensure the final tool actually solves the problems?
Initially, we tried “throwing the problem over the fence” to the data scientists, but their first attempt didn’t solve the actual operational problems we faced because they lacked the “outcome” perspective. The breakthrough happened only when our frontline security staffers—the people closest to the daily struggle—sat side-by-side with the data scientists to co-create the tools. We learned that the data scientists know the AI, but the analysts know the threat; by pairing them, we ensured the resulting agent didn’t just look good on paper but actually removed the drudgery from the analyst’s day. Now, our strategy is built on the principle that the person who lives with the problem is the one best equipped to help design the solution.
What is your forecast for the future of AI-powered threat hunting?
I believe we are entering an era where the “monotony of the day” will be handled entirely by autonomous agents, forcing a total transformation of the security operations center. In the near future, every organization, regardless of size, will be forced to adopt agentic AI because cybercriminals are already using these tools to scale their attacks beyond anything human teams can monitor. My forecast is that the role of the human analyst will shift entirely away from “finding the needle” to “deciding what to do with the needle,” as AI handles the 400-billion-signal-a-week load. If you aren’t asking your teams how they are solving the problem of scale right now, you will likely find yourself overwhelmed by an automated adversary sooner than you think.
