How Is AI Transforming Security at Black Hat USA NOC?

I’m thrilled to sit down with Rupert Marais, our in-house security specialist with deep expertise in endpoint and device security, cybersecurity strategies, and network management. Today, we’re diving into the fascinating world of securing high-stakes tech events like Black Hat USA, where Rupert will share insights on the innovative use of AI in security operations, the unique challenges of balancing open learning with strict protection, and emerging trends in data leaks and vulnerabilities. Let’s explore how his team navigates the complex landscape of cybersecurity at one of the most intense gatherings of tech minds.

How does a team like yours approach securing a massive tech event like Black Hat USA, where attendees are actively testing hacking skills?

Securing an event like Black Hat USA is a unique beast. Our primary role at the Network Operations Center is to ensure the network is up and running smoothly while protecting it from real threats. We’ve got to secure registration systems, access points, and the countless devices connecting to the network. At the same time, we know attendees are there to learn and test skills like compromising web servers in controlled settings. Our goal is to distinguish between legitimate training activities and actual malicious behavior—say, someone targeting a payment processor or government agency, which is strictly off-limits. It’s a tightrope walk of enabling education while enforcing boundaries.
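
To make that boundary concrete, here is a minimal sketch (not the NOC's actual tooling, and with hypothetical network ranges) of how a policy check might separate sanctioned lab targets from off-limits destinations:

```python
import ipaddress

# Hypothetical policy: lab ranges where attack traffic is sanctioned,
# and protected ranges (e.g., payment or government systems) that are
# strictly off-limits regardless of intent.
SANCTIONED_LAB_NETS = [ipaddress.ip_network("10.50.0.0/16")]
OFF_LIMITS_NETS = [ipaddress.ip_network("198.51.100.0/24")]

def classify_destination(dst_ip: str) -> str:
    """Label a flow's destination under the event's acceptable-use policy."""
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in OFF_LIMITS_NETS):
        return "off-limits"      # real threat: escalate immediately
    if any(addr in net for net in SANCTIONED_LAB_NETS):
        return "sanctioned-lab"  # expected training traffic
    return "review"              # on neither list: needs human context

print(classify_destination("10.50.3.7"))      # sanctioned-lab
print(classify_destination("198.51.100.42"))  # off-limits
```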

What goes into the preparation process for an event of this scale, especially since planning starts months in advance?

Preparation is everything. We start months ahead—sometimes as early as March for the following year’s event. We bring in our own ISPs, firewalls, switches, and access points to build a robust infrastructure. It’s not just about the tech; it’s about anticipating every possible scenario. We draw on lessons from other global events to refine our setup. Each conference teaches us something new about network behavior and threats, and we adapt those insights to ensure we’re ready for the unique crowd and challenges we face in Las Vegas.

I’ve heard about the concept of a ‘Black Hat positive,’ where some malicious activity is allowed. Can you unpack what that means?

Absolutely. A ‘Black Hat positive’ refers to activity that would raise red flags in a typical corporate environment but is acceptable here because it’s part of the learning experience. For instance, attendees might be in a training session on how to exploit a web server. We’ll see that activity on the network, and our systems flag it, but we don’t intervene because it’s intentional and controlled. We use AI to track patterns—making sure it’s the same group, in the same classroom, targeting the designated systems. It’s about context; we’re enabling education, not stifling it.
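
As a rough illustration of that context check, the sketch below uses invented subnet and target values to mark an alert as a "Black Hat positive" only when the source sits in a known classroom subnet and the destination is that class's designated lab system:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Alert:
    src_ip: str
    dst_ip: str
    signature: str  # e.g., "web-server-exploit-attempt"

# Hypothetical course roster: classroom subnet -> designated lab target.
COURSES = {
    ipaddress.ip_network("10.60.1.0/24"): ipaddress.ip_address("10.99.1.10"),
}

def is_black_hat_positive(alert: Alert) -> bool:
    """True if the alert matches an expected classroom-to-lab pattern."""
    src = ipaddress.ip_address(alert.src_ip)
    dst = ipaddress.ip_address(alert.dst_ip)
    for classroom, lab_target in COURSES.items():
        if src in classroom and dst == lab_target:
            return True  # same class, designated target: don't intervene
    return False         # outside the expected context: investigate

alert = Alert("10.60.1.25", "10.99.1.10", "web-server-exploit-attempt")
print(is_black_hat_positive(alert))  # True: expected training activity
```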

How do you differentiate between those acceptable training exercises and something that crosses the line into dangerous territory?

That’s where the real challenge lies. We rely on a mix of AI, machine learning, and human oversight to analyze behavior. If someone in a training session starts scanning a government agency instead of the designated target, that’s a problem. AI helps us assess whether the activity matches the expected patterns of a classroom setting or if it’s a one-off, potentially malicious act. When it’s the latter, we step in quickly. It’s not foolproof, though—sometimes trainers give clear instructions, and still, someone goes off-script. That’s when our team has to act fast to mitigate risks.
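
One way to capture that "one-off versus class-wide" distinction is to compare an alert's destination against what the rest of the classroom is doing. The sketch below is purely illustrative, with made-up hosts and a made-up peer threshold:

```python
from collections import Counter

# Hypothetical recent flow log for one classroom: (source_host, destination).
classroom_flows = [
    ("host-01", "10.99.1.10"), ("host-02", "10.99.1.10"),
    ("host-03", "10.99.1.10"), ("host-04", "10.99.1.10"),
    ("host-05", "203.0.113.9"),  # one student going off-script
]

def off_script_sources(flows, min_peers=3):
    """Flag hosts hitting destinations that few classmates are touching."""
    dst_counts = Counter(dst for _, dst in flows)
    return sorted({src for src, dst in flows if dst_counts[dst] < min_peers})

print(off_script_sources(classroom_flows))  # ['host-05']
```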

Speaking of AI, how has it transformed your security operations over recent years?

AI has been a game-changer for us, especially over the past year and a half. It’s become deeply integrated into our strategy. We use machine learning for categorizing network activity and AI for risk scoring—basically, determining how confident we are that something is benign versus malicious. It helps us prioritize threats and focus on what matters most. The challenge isn’t finding uses for AI; it’s making sure those uses are meaningful and repeatable. We’ve got a long list of potential applications, but we’re still figuring out how to maximize impact without over-relying on tech that isn’t fully mature yet.
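
As a hedged illustration of that kind of risk scoring, here is a toy logistic scorer over a few binary features. The feature names, weights, and bias are invented for this example; a real system would learn them from data rather than hand-set them:

```python
import math

# Hypothetical feature weights; positive weights push toward "malicious".
WEIGHTS = {
    "from_classroom_subnet": -2.0,   # training context lowers risk
    "target_is_off_limits":   3.5,   # protected destination raises risk
    "one_off_source":         1.5,   # no classmates doing the same thing
    "known_exploit_signature": 0.5,
}
BIAS = -1.0

def risk_score(features: dict) -> float:
    """Return a P(malicious)-style confidence via a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * float(val) for name, val in features.items())
    return 1.0 / (1.0 + math.exp(-z))

benign = {"from_classroom_subnet": True, "target_is_off_limits": False,
          "one_off_source": False, "known_exploit_signature": True}
hostile = {"from_classroom_subnet": False, "target_is_off_limits": True,
           "one_off_source": True, "known_exploit_signature": True}
print(f"classroom exploit: {risk_score(benign):.2f}")   # ~0.08, likely benign
print(f"off-limits attack: {risk_score(hostile):.2f}")  # ~0.99, likely malicious
```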

Can you share a specific way AI has helped with managing risks during the event?

One great example is how AI assists with identifying the source of suspicious activity. During the conference, we might see hundreds of alerts for potential threats. AI helps us score those risks by cross-referencing data—like whether the activity is coming from a known training classroom or an isolated device. It can tell us, with a certain confidence level, if it’s likely just a student practicing or something more sinister. This lets us focus our manpower on the real dangers, rather than chasing down every single alert manually. It’s a huge time-saver and sharpens our response.
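
To show how those confidence scores translate into saved analyst time, here is a toy triage pass (alert data and thresholds invented for illustration) that buckets a few hundred alerts so humans only chase the top tier:

```python
import random

random.seed(7)

# Hypothetical alert stream: (alert_id, P(malicious)) from an upstream scorer.
alerts = [(f"alert-{i:03d}", random.random()) for i in range(300)]

def triage(alerts, dismiss_below=0.2, escalate_above=0.8):
    """Split alerts into auto-dismiss, analyst-review, and escalate queues."""
    queues = {"dismiss": [], "review": [], "escalate": []}
    for alert_id, score in alerts:
        if score < dismiss_below:
            queues["dismiss"].append(alert_id)   # likely classroom noise
        elif score > escalate_above:
            queues["escalate"].append(alert_id)  # likely a real threat
        else:
            queues["review"].append(alert_id)    # needs a human look
    return queues

q = triage(alerts)
print({name: len(ids) for name, ids in q.items()})
# Analysts work the short 'escalate' queue first instead of all 300 alerts.
```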

On the flip side, you’ve noticed AI might be contributing to some security issues, like vulnerable apps leaking data. What’s driving that trend?

That’s a worrying trend we’ve seen ramp up recently. The speed at which apps are being built today, often with AI tools, is staggering. Developers are churning out weather apps, chat apps, and fitness trackers faster than ever, but there’s often little oversight. We’re seeing sensitive data—like personal information or even organizational charts—leaking because these apps aren’t built with security in mind. AI might help create the code quickly, but it doesn’t always account for encryption or proper safeguards. It’s a double-edged sword; the same tech that helps us can also create vulnerabilities if not handled responsibly.
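
As a simplified sketch of the kind of leak involved, the example below scans hypothetical captured request metadata and flags sensitive-looking fields sent without TLS. The URLs, field names, and pattern are placeholders, not the team's actual detection logic:

```python
import re
from urllib.parse import urlsplit

# Hypothetical captured request metadata (URL + body snippet), standing in
# for what a NOC might reconstruct from network telemetry.
captured_requests = [
    ("http://weather.example.com/sync",  "email=alice@example.com&token=abc123"),
    ("https://chat.example.com/send",    "msg=hello"),
    ("http://fitness.example.com/stats", "heart_rate=72"),
]

SENSITIVE = re.compile(r"(email|token|password|ssn)=", re.IGNORECASE)

def plaintext_leaks(requests):
    """Flag requests sent without TLS whose payload looks sensitive."""
    findings = []
    for url, body in requests:
        if urlsplit(url).scheme == "http" and SENSITIVE.search(body):
            findings.append(url)  # sensitive fields on an unencrypted channel
    return findings

for url in plaintext_leaks(captured_requests):
    print(f"unencrypted sensitive data: {url}")
```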

When you uncover serious flaws, like unencrypted chat apps or misconfigured tools, how do you handle notifying those affected?

It’s a delicate process. When we spot something like an unencrypted chat app spilling entire conversations or a misconfigured security tool on someone’s laptop exposing event logs, we prioritize tracking down the individual or organization. If they’re at the event, we try to locate them directly—sometimes that’s easier in a classroom setting than in a crowded business hall. Once we’ve identified them, we discreetly inform them of the issue. If it’s a larger entity, we’ll reach out to their IT team to ensure the fix isn’t just for the person here but for their entire network. It’s about minimizing harm while maintaining trust.

What’s your forecast for the role of AI in cybersecurity, especially in environments like tech conferences, over the next few years?

I think AI is only going to become more central to cybersecurity, especially in dynamic settings like tech conferences. We’ll likely see even better risk assessment and automation, freeing up human analysts to tackle complex threats. But I also foresee challenges—AI-driven app development will continue to outpace security measures unless there’s a cultural shift toward prioritizing secure coding practices. My hope is that we’ll find a balance where AI not only identifies threats but also helps developers build safer systems from the ground up. It’s an exciting, if unpredictable, road ahead.
