Rupert Marais is a leading authority in the field of endpoint security and cybersecurity strategy, currently serving as an in-house specialist focused on the intersection of human behavior and digital risk. With years of experience managing complex network infrastructures, he has witnessed firsthand the shift from external perimeter defense to the internal management of human and machine identities. His expertise is particularly relevant today as organizations grapple with the surging costs of internal incidents, which have recently climbed by 20% to reach an average of nearly $20 million per company. In this conversation, we explore the evolving landscape of insider threats, the hidden dangers of “shadow AI,” and why the traditional view of employee error requires a complete strategic overhaul.
Losses from employee negligence and simple mistakes now significantly outweigh those from malicious sabotage or data theft. Why are accidental errors so much costlier than intentional attacks, and what practical steps should leadership take to mitigate the financial impact of employees simply “pressing the wrong button”?
The financial reality is staggering when you realize that negligence and simple mistakes now account for $10.3 million in average annual losses per company. This is more than double the $4.7 million lost to malicious sabotage or theft, primarily because errors are pervasive and often go unnoticed until the damage has compounded across the network. When an employee ignores an IT warning or accidentally misconfigures a cloud bucket, they aren’t trying to hide their tracks like a thief, but the scale of exposure can be massive, affecting millions of records in a single click. To mitigate this, leadership must move beyond a culture of blame and implement “defensive AI” that provides real-time, risk-aware prevention at scale. By investing in tools that offer behavioral intelligence, we can catch those “wrong buttons” before the data leaves the environment, essentially creating a safety net that accounts for human fallibility.
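To make that "safety net" concrete, here is a minimal sketch of the kind of preventive check defensive tooling can run before a misconfiguration ships. The bucket and ACL shapes are simplified, hypothetical structures for illustration, not any real cloud provider's API:

```python
# Sketch of a guardrail that catches a common "wrong button": a storage
# bucket accidentally opened to the public. The data shapes below are
# hypothetical simplifications, not a real cloud provider's API.

PUBLIC_PRINCIPALS = {"*", "AllUsers", "AuthenticatedUsers"}

def find_public_buckets(buckets):
    """Return the names of buckets whose ACL grants access to everyone."""
    risky = []
    for bucket in buckets:
        grants = bucket.get("acl", [])
        if any(g.get("grantee") in PUBLIC_PRINCIPALS for g in grants):
            risky.append(bucket["name"])
    return risky

buckets = [
    {"name": "finance-reports", "acl": [{"grantee": "finance-team"}]},
    {"name": "marketing-assets", "acl": [{"grantee": "AllUsers"}]},  # the wrong button
]
print(find_public_buckets(buckets))  # → ['marketing-assets']
```

In practice this kind of rule runs automatically in a deployment pipeline or a cloud posture tool, so the error is blocked before any records are exposed rather than discovered in a forensic review.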
Shadow AI is creating invisible data loss pathways through the use of public models and AI notetakers. How do these unauthorized tools specifically bypass traditional security controls, and what strategies allow a company to embrace AI productivity without encouraging staff to use undocumented, risky workarounds?
Shadow AI is particularly dangerous because it operates in a blind spot where traditional logging and web filters often fail to identify the specific data being exfiltrated. For instance, when an employee pastes a sensitive internal document into a public model like ChatGPT to summarize it, that data has effectively left the organization's control, yet a standard firewall might only see routine HTTPS traffic. AI notetakers are another major concern, as they often produce publicly accessible recordings and summaries that contain PII and sensitive boardroom discussions. Instead of a blanket ban, which 73% of our peers worry will only drive users toward even riskier workarounds, companies should formally bring AI into their business strategy. That means providing sanctioned, enterprise-grade AI tools with built-in data governance, ensuring that productivity gains don't come at the cost of wholesale data exposure.
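As an illustration of how sanctioned tooling can close this pathway, the sketch below scans text bound for an external model for obviously sensitive patterns before it leaves the environment. The patterns and function name are illustrative assumptions; production DLP engines use far richer classifiers than a few regular expressions:

```python
import re

# Illustrative-only patterns; real data-loss-prevention engines combine
# classifiers, fingerprinting, and context, not just regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def flag_outbound_prompt(text):
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Summarize this CONFIDENTIAL memo. Contact: jane.doe@example.com"
print(flag_outbound_prompt(prompt))  # → ['email', 'confidential_marking']
```

The point is architectural rather than technical: the check lives in the sanctioned tool's egress path, so employees keep the productivity gain while the organization keeps visibility.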
While many organizations worry about AI agents, very few currently classify them as equivalent to human insiders. What are the specific dangers of allowing these agents to access corporate systems, and how must identity-centric security change to manage both human employees and autonomous machine-driven agents?
The danger lies in the autonomy of these agents; they can access corporate systems, perform complex tasks, and bypass traditional controls that were designed for human interaction speeds. Currently, only 19% of organizations classify AI agents as equivalent to human insiders, which is a massive oversight considering these agents can be manipulated into unauthorized data disclosure. We are also seeing a rise in AI-enabled browsers that open paths to malicious sites, and in AI-assisted torrenting, often under the guise of automated workflows. Identity-centric security must evolve into a unified framework that treats humans, service accounts, and AI agents with the same level of scrutiny. We need a "human-plus-machine" risk mindset, in which every action an agent takes is mapped back to a verified identity and monitored for behavioral anomalies, just as we would monitor a human employee.
The average time to contain an insider incident has recently dropped from 86 days to 67 days. To what extent is behavioral intelligence responsible for this improvement, and can you walk us through a scenario where non-obvious signals identified a risk before it escalated into a major loss?
The reduction of nearly 20 days in containment time is a direct result of moving toward behavioral analysis, which 71% of organizations now rate as essential. In a typical scenario, a legacy system might not flag an employee downloading a large set of files they technically have access to, but behavioral intelligence notices the “non-obvious signal” of that activity happening at 3:00 AM from an unusual IP address. By identifying this pattern early, security teams can intervene before the files are uploaded to a personal webmail or a file-sharing site—two of the leading drivers behind the 17% increase in negligence costs. This shift from reactive to proactive monitoring allows us to catch the “outsmarted” or phished employee who is unknowingly acting as a conduit for an attack. It turns the tide from forensic cleanup to active prevention, saving millions in potential recovery costs.
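The 3:00 AM scenario above can be sketched as a simple behavioral score, where access that is technically authorized still accumulates risk against a per-user baseline. The baseline data, weights, and thresholds here are hypothetical; real behavioral-intelligence platforms learn these from historical activity rather than hard-coding them:

```python
from datetime import datetime

# Hypothetical per-user baseline: normal working hours and known networks.
# Real systems learn these from months of observed activity.
BASELINES = {
    "jsmith": {"hours": range(8, 19), "known_ips": {"10.0.12.4", "10.0.12.9"}},
}

def score_event(user, timestamp, source_ip, files_accessed):
    """Score an access event; access may be authorized yet still anomalous."""
    base = BASELINES.get(user)
    if base is None:
        return 1.0  # unknown identity: maximum suspicion
    score = 0.0
    if timestamp.hour not in base["hours"]:
        score += 0.4           # off-hours activity
    if source_ip not in base["known_ips"]:
        score += 0.4           # unfamiliar network location
    if files_accessed > 100:
        score += 0.2           # unusually large pull
    return score

# A permitted download, but at 3:00 AM, from a strange IP, covering 500 files.
event = score_event("jsmith", datetime(2024, 5, 3, 3, 0), "203.0.113.7", 500)
print(event)  # high score → escalate before the upload to personal webmail
```

Each individual signal is innocuous on its own, which is exactly why legacy, permission-only systems miss it; it is the combination against the baseline that turns noise into an early warning.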
Only a small fraction of global organizations have fully integrated AI governance into their risk management programs. What are the primary obstacles to formalizing these policies, and what specific metrics should a C-suite executive use to measure the success of a “human-plus-machine” risk mindset?
The primary obstacle is a lack of alignment; while 73% of practitioners are worried about AI-driven data loss, only 18% have actually integrated AI governance into their programs. Many executives struggle with the rapid pace of AI evolution, feeling that policies will be obsolete by the time they are signed, but this hesitation creates a vacuum filled by shadow AI. To measure success, a C-suite executive should look at the “mean time to detection” for undocumented AI use and the percentage of data classified under automated governance tools. They should also track the reduction in “false positives” enabled by defensive AI, which allows the security team to focus on high-risk events rather than noise. Success is defined by the seamless integration of AI agents into daily workflows—currently at 19%—without an accompanying spike in unauthorized data disclosures.
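Two of those metrics reduce to straightforward arithmetic once the underlying events are logged. The figures below are hypothetical examples to show the calculation, not numbers from any report:

```python
# Hypothetical log: days elapsed from first undocumented AI use to detection.
shadow_ai_detection_days = [12, 30, 9, 21]

# Hypothetical classification coverage: items under automated governance.
classified_items, total_items = 7_200, 9_000

mttd = sum(shadow_ai_detection_days) / len(shadow_ai_detection_days)
coverage = classified_items / total_items * 100

print(f"Mean time to detect shadow AI: {mttd:.1f} days")    # 18.0 days
print(f"Data under automated governance: {coverage:.0f}%")  # 80%
```

The hard part is not the arithmetic but the instrumentation: an organization can only compute a mean time to detection for shadow AI if it is actually logging undocumented AI use in the first place.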
What is your forecast for insider risk management?
I predict that the line between "human error" and "machine error" will evaporate entirely as AI agents become the primary interface for corporate data. In the next few years, we will see organizations treating AI as an "operational insider," requiring its own rigorous vetting, monitoring, and behavioral baselining. If we don't close the gap between perception and practice, where 44% of experts believe agents will increase data theft yet only a minority manage them as insiders, the cost of incidents will likely climb well past the current $19.5 million average. For those who embrace defensive AI and identity-centric security for all entities, however, containment times will keep falling, potentially reaching a point where most negligent errors are blocked in real time before they can ever become a "recorded incident."
