Do AI Agents Need Security Training Like Employees?

Do AI Agents Need Security Training Like Employees?

I’m thrilled to sit down with Rupert Marais, our in-house security specialist with deep expertise in endpoint and device security, cybersecurity strategies, and network management. With the rapid rise of AI in enterprise environments, from automating workflows to handling sensitive data, Rupert’s insights are invaluable for understanding how to secure these powerful tools. In our conversation, we dive into the evolving role of AI agents as insiders, the importance of treating them with the same rigor as human employees, the nuances of training and auditing AI systems, and the critical steps organizations must take to mitigate risks. Let’s explore how to navigate this new frontier of cybersecurity together.

How do you see AI agents taking on roles similar to human employees when it comes to handling sensitive information and systems?

AI agents are increasingly becoming like digital employees in the way they interact with sensitive data and systems. They’re not just tools anymore; they’re processing emails, automating workflows, and even making decisions on behalf of users. Just like a human worker, they have access to confidential information and third-party platforms, which means they can pose similar risks if not properly managed. The key difference is that their actions are driven by algorithms and training data, which can be exploited or misconfigured in ways humans might not be. So, we need to start viewing them as insiders with privileges, not just background software.

Why do you think so many organizations still view AI agents as simple tools rather than high-risk identities that need strict oversight?

I think it comes down to a mix of unfamiliarity and underestimating the technology. Many organizations haven’t caught up with how pervasive AI has become in their operations. They see AI as a productivity booster—something you plug in and forget about—rather than an entity with access and decision-making power. There’s also a historical tendency to treat software as less dynamic than human behavior, so the idea of applying identity-based security to AI feels foreign. But this mindset ignores how AI can be manipulated or misused, especially when it’s connected to critical systems.

What are some of the most significant dangers of not applying the same security measures to AI agents as we do to human staff?

The risks are massive. Without proper controls, an AI agent could leak sensitive data, either through a misconfiguration or by being tricked into unauthorized actions. Imagine an AI with access to customer records or financial systems being compromised— it could cause a breach just as devastating as a rogue employee. There’s also the issue of accountability; if an AI makes a harmful decision, who’s responsible? Without security measures like access limits or monitoring, you’re blind to what the AI is doing until it’s too late. It’s like hiring someone and never checking what they’re up to.

Can you share a scenario where an AI agent’s access to systems or data might result in a serious security incident?

Absolutely. Let’s say a company uses an AI agent to automate customer support responses, and it has access to a database of personal information. If that AI isn’t properly restricted or monitored, a malicious actor could exploit a flaw in its programming to extract data—think social engineering but targeting the AI with crafted inputs. The agent might unknowingly send out sensitive details or even grant access to unauthorized users. This isn’t hypothetical; we’ve seen similar vulnerabilities in chatbots and automated systems where unchecked access turned into a gateway for attackers.

You’ve talked about the need for AI agents to undergo security training similar to employees. How does that process differ for a machine compared to a person?

Training an AI agent isn’t about sitting it down for a seminar, of course, but it’s about embedding rules and boundaries into its programming. For humans, training involves education and reinforcement through real-world examples. For AI, it’s more about defining strict parameters in its model—what data it can access, what actions it can take, and how it should respond to certain inputs. This often involves fine-tuning the AI with datasets that reflect company policies and testing it to ensure it adheres to those limits. The challenge is ensuring the AI doesn’t “learn” bad habits from external data or user interactions over time.

How can organizations ensure that AI agents consistently follow policies on acceptable behavior and data handling?

It starts with clear policy integration during the AI’s development and deployment. You have to encode rules into the system—think of it as hardwiring ethical and security guidelines. Beyond that, continuous monitoring is critical. Use tools to track the AI’s actions and flag deviations from policy, much like you’d monitor employee activity. Regular updates to the AI’s training data and parameters are also key, especially as policies evolve. Finally, implementing role-based access controls ensures the AI only touches what it’s supposed to, minimizing the chance of overreach or misuse.

What are the main hurdles companies face when trying to instill security rules and boundaries in AI agents?

One big hurdle is the complexity of AI itself. Unlike a static piece of software, AI learns and adapts, which means a rule you set today might be interpreted differently tomorrow based on new data. There’s also a skills gap—many organizations don’t have the expertise to properly configure or monitor AI systems for security. Another issue is shadow AI, where employees use unapproved tools that aren’t even on the radar for training or oversight. Balancing innovation with control is tough; companies want AI to be flexible and useful, but that often clashes with locking it down for security.

Why do you believe starting an AI auditing program with a detailed inventory is so crucial for security?

An inventory is your foundation. Without knowing what AI tools are in use, where they’re deployed, and how they’re interacting with your systems, you’re flying blind. Writing everything down forces you to map out the landscape—think of it as a blueprint of potential risks. It helps identify rogue tools employees might be using without approval and exposes gaps in oversight. Starting with a list, whether it’s a simple spreadsheet or a detailed bill of materials, gives you a starting point to build controls and prioritize what needs the most attention.

What specific details should be captured in an AI inventory to make it effective for managing risks?

You need more than just a list of tools. Include details like what each AI is used for, what data it accesses, and which systems it connects to. Note who deployed it and whether it’s an internal or third-party solution. You should also document the level of oversight—does it have access controls or monitoring in place? If it’s third-party, record what you know about their security practices. The goal is to create a clear picture of exposure, so if something goes wrong, you know exactly where to look and what’s at stake.

How can conducting employee surveys help uncover AI tools that might be flying under the radar?

Employees are often the first to adopt new tools, sometimes without IT or security teams knowing. Surveys give you a direct line to understanding what’s actually being used day-to-day. Asking simple questions like, “Are you using any AI tools for work, and if so, which ones?” can reveal shadow AI that’s never been vetted. It’s a quick way to spot potential risks—maybe someone’s using a free chatbot to process sensitive data. That insight lets you bring those tools into the fold, assess them, and apply proper controls before they become a problem.

What do you mean by being intentional in AI auditing, as opposed to just checking boxes?

Being intentional means going beyond surface-level compliance. A checkbox audit is just confirming that an AI tool exists or has some basic settings in place. Intentional auditing digs deeper—understanding how the AI works, what decisions it’s making, and where its data comes from. It’s about asking tough questions: Are there biases in the training data? Could this tool expose sensitive information? It’s a mindset of curiosity and accountability, ensuring you’re not just meeting a requirement but actually reducing risk in a meaningful way.

How can audit teams gain a clear understanding of the decision-making processes within AI tools?

It’s not easy, but it starts with transparency from the AI’s developers or vendors. Audit teams need access to details about the algorithms and models driving the tool—how they’re trained and what logic they follow. Beyond that, behavioral analysis and system logs can show what the AI is doing in real time. Testing the AI with controlled inputs can also reveal patterns in its decision-making. The goal is to peel back the layers of the “black box” and see if its actions align with expectations and policies. Collaboration with data scientists often helps bridge the technical gap here.

What advice do you have for our readers who are looking to strengthen their approach to AI security and governance?

My biggest piece of advice is to start now—don’t wait for a breach to force your hand. Begin with a simple inventory of every AI tool in use, even if it’s just a rough list. Treat AI agents like human employees, with the same access controls and monitoring you’d apply to staff. Invest in training your team to understand AI risks, and don’t shy away from asking tough questions of vendors or third parties about their systems. Finally, build a culture of intentional auditing—don’t just check boxes, but really dig into how these tools work and where the vulnerabilities lie. Proactive steps today can save you a lot of pain tomorrow.

Subscribe to our weekly news digest.

Join now and become a part of our fast-growing community.

Invalid Email Address
Thanks for Subscribing!
We'll be sending you our best soon!
Something went wrong, please try again later