A recent poll of cybersecurity professionals has laid bare a stark reality for the industry: the nature of cyber warfare is fundamentally changing. The rapid and often insecure integration of autonomous technologies has created an unprecedented attack surface, pushing familiar challenges to the background and introducing a powerful new class of adversary. Industry leaders now anticipate that agentic artificial intelligence (AI systems capable of independent reasoning and action) is not merely another threat to monitor but the primary target for cybercriminals and sophisticated nation-state actors. This sharp focus on emerging threats, however, is shadowed by a persistent and troubling lack of confidence in the industry’s ability to master fundamental security practices, creating a dangerous gap between technological ambition and foundational readiness.
This emerging landscape presents a profound dichotomy: organizations are eagerly embracing highly complex, autonomous AI systems in the quest for efficiency and innovation, yet they continue to grapple with long-standing vulnerabilities such as weak password hygiene and a systemic failure to elevate cybersecurity to a top-tier board-level priority. This chasm between the adoption of next-generation technology and the neglect of basic security principles is setting the stage for a new generation of sophisticated, AI-driven cyberattacks. The message is clear: the digital front line has moved, and the new apex predator is an intelligent, autonomous agent operating deep within enterprise networks.
The New Apex Predator: Agentic AI as the Top Attack Surface
The Unprecedented Risk of Autonomous Systems
The prediction that agentic AI will become the premier attack vector, a view shared by a 48% plurality of the experts polled, reflects a deep and pervasive anxiety throughout the cybersecurity industry. This concern is not rooted in science fiction but in the tangible risks emerging from current deployment practices. Rik Turner, Chief Analyst at Omdia, underscores this danger, warning that the frenetic rush to integrate these advanced systems is causing developers to prioritize speed over security, often resulting in the deployment of insecure code and unvetted third-party components. The fundamental threat stems from the combination of an agent’s operational autonomy and the high-level system permissions it requires to perform its functions. Unlike traditional software, these agents can make independent decisions and take actions across a network, creating a potent and attractive target for exploitation by adversaries seeking to cause maximum disruption with minimal effort.
This risk is compounded by development trends that favor rapid iteration over robust security validation. The practice of incorporating unvetted open-source Model Context Protocol (MCP) servers to meet aggressive deadlines has become increasingly common, a carryover from the “vibe coding” trend that gained traction in 2025. This approach suggests that a significant amount of insecure infrastructure is already being constructed and integrated into core business processes. As a result, organizations are inadvertently building a fragile foundation for their most advanced technological initiatives. The very agents designed to streamline operations and accelerate development could become the conduits for catastrophic breaches, turning a strategic asset into a critical liability that security teams are ill-equipped to manage. The autonomy that makes these systems so valuable also makes them exceptionally dangerous when compromised.
The Proliferation of Non-Human Identities
The relentless corporate drive for greater productivity is fueling the widespread adoption of agentic AI, but this pursuit of efficiency is simultaneously causing the digital attack surface to expand at an exponential rate. Melinda Marks, Practice Director at Omdia, clarifies that as organizations leverage AI to achieve massive productivity gains—often on the order of five to ten times previous benchmarks—they are also creating a vast and complex new network of non-human identities (NHIs). Each of these autonomous agents, from chatbots processing customer data to AI systems managing supply chains, requires system access and credentials that can be targeted and exploited. This proliferation of NHIs fundamentally alters the security paradigm, forcing defenders to secure thousands of autonomous entities in addition to their human workforce. The stakes are amplified as attackers also leverage AI to launch more sophisticated, scaled attacks to probe for vulnerabilities, creating a high-stakes arms race where the digital terrain is constantly shifting.
The security challenges posed by this explosion of non-human identities are unique and formidable. Unlike human users, NHIs operate 24/7, can process information at machine speed, and often possess broad, elevated privileges to interact with sensitive systems and data across an organization. Their behavior lacks the predictable patterns of human activity, making it significantly more difficult for traditional security tools to detect anomalous or malicious actions. A compromised NHI can therefore operate undetected for longer periods, exfiltrating data, disrupting operations, or moving laterally through a network with a speed and stealth that a human attacker could never achieve. This reality demands a fundamental rethinking of identity and access management, moving beyond user-centric models to a framework that can govern and secure a hybrid workforce of both humans and intelligent machines, a challenge many organizations are only now beginning to comprehend.
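To make that shift concrete, the sketch below illustrates one way an identity framework might treat an agent as a first-class, least-privilege identity rather than a proxy for a human user: credentials are scoped to an explicit allow-list and expire quickly, so a compromised agent holds as little power, for as little time, as possible. This is a minimal sketch, and the agent names, scopes, and TTL are illustrative assumptions rather than any vendor's API.

```python
# Minimal sketch, not any vendor's API: an AI agent is treated as a first-class
# non-human identity with a narrow, short-lived credential instead of a
# long-lived shared secret. All names, scopes, and TTLs are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class AgentCredential:
    agent_id: str         # the non-human identity, e.g. "supply-chain-agent"
    scopes: frozenset     # explicit allow-list; nothing inherited from a human owner
    expires_at: datetime  # short TTL forces frequent re-issuance and re-evaluation


def issue_credential(agent_id: str, requested: set, approved: set,
                     ttl_minutes: int = 15) -> AgentCredential:
    """Grant only the intersection of what the agent requests and what policy allows."""
    return AgentCredential(
        agent_id=agent_id,
        scopes=frozenset(requested & approved),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )


def is_allowed(cred: AgentCredential, action: str) -> bool:
    """Check every action against scope and expiry; a real system would also log it for audit."""
    return action in cred.scopes and datetime.now(timezone.utc) < cred.expires_at


# Usage: the agent asks for broad access but receives only what policy permits.
cred = issue_credential("supply-chain-agent",
                        requested={"orders:read", "payments:execute"},
                        approved={"orders:read", "inventory:read"})
print(is_allowed(cred, "orders:read"))       # True
print(is_allowed(cred, "payments:execute"))  # False: never granted
```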
A Critical Blind Spot: Securing Access, Not Just the Model
While many organizational security efforts are focused on protecting the AI models themselves through measures like prompt injection prevention and data poisoning defenses, some experts argue this approach constitutes a critical and dangerous blind spot. Geoffrey Mattson, CEO of SecureAuth, forcefully contends that the true battleground for AI security is not found within the large language model’s architecture but in controlling what a compromised AI agent can ultimately access. He posits that attempts to build completely foolproof AI safety features are insufficient, as determined adversaries will inevitably find ways to circumvent them. Instead of trying to secure the agent, the focus must shift to rigorously securing the resources it connects to. In a memorable critique of current strategies, Mattson states, “You can’t LLM your way out of an LLM problem,” advocating for a paradigm shift away from model-centric security toward robust, continuous authorization for every single action an agent takes.
This proposed shift involves reimagining the enterprise AI control plane. Rather than relying on the AI’s internal safeguards, this model enforces security externally by applying strict, zero-trust principles to every interaction the agent has with data, applications, and infrastructure. This means moving beyond initial authentication to a system of continuous authorization, where the agent’s permissions are constantly verified based on context, risk, and the principle of least privilege. Under this framework, even if an agent is successfully compromised, the potential damage is severely limited because its ability to access sensitive information or execute high-risk commands is constrained by an external, unwavering security layer. This approach acknowledges the fallibility of AI models and places the security emphasis on a more controllable and reliable domain: access control. It transforms the security challenge from an intractable problem of predicting AI behavior to a manageable one of enforcing strict, policy-based access rules.
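As a rough illustration of what continuous authorization could look like in practice, the following sketch checks every agent action against a deny-by-default policy table and a per-request risk signal. The agent identities, resources, and thresholds are assumptions made for the example, and a real deployment would delegate this decision to a dedicated policy engine or API gateway rather than an in-process dictionary.

```python
# Minimal sketch of continuous, per-action authorization enforced outside the model.
# Agent names, resources, and risk thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class ActionRequest:
    agent_id: str      # non-human identity making the call
    resource: str      # e.g. "hr-db/salaries"
    operation: str     # e.g. "read", "write", "delete"
    risk_score: float  # 0.0-1.0 runtime signal (anomalous behavior, odd hours, new origin)


# Least-privilege policy: each agent gets only the operations it demonstrably needs.
POLICY = {
    ("support-agent", "crm/tickets"): {"read", "write"},
    ("reporting-agent", "hr-db/salaries"): {"read"},
}

# Riskier operations require a cleaner runtime context before they are allowed.
RISK_CEILING = {"read": 0.7, "write": 0.4, "delete": 0.2}


def authorize(req: ActionRequest) -> bool:
    """Deny by default and re-evaluate on every call, not once per session."""
    allowed_ops = POLICY.get((req.agent_id, req.resource), set())
    if req.operation not in allowed_ops:
        return False  # zero trust: no implicit grants, even for "our own" agent
    return req.risk_score <= RISK_CEILING.get(req.operation, 0.0)


# Even a successfully hijacked agent stays boxed in by the external layer.
print(authorize(ActionRequest("support-agent", "crm/tickets", "write", 0.1)))      # True
print(authorize(ActionRequest("support-agent", "hr-db/salaries", "read", 0.1)))    # False: no grant
print(authorize(ActionRequest("reporting-agent", "hr-db/salaries", "read", 0.9)))  # False: risky context
```

The point of the external check is that it does not depend on the model behaving well: a prompt-level compromise can change what the agent asks for, but not what the policy layer will permit.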
Evolving Tactics and Enduring Challenges
The Rise of Hyper-Realistic Social Engineering
The second most-cited threat, named by 29% of respondents, is the weaponization of deepfakes as the primary social engineering vector for high-value targets. Once a niche and technically demanding technology, deepfakes have now entered the mainstream, with the proliferation of AI-generated content inevitably leading to hyper-realistic fabrications that can deceive even discerning individuals. Recent incidents, such as the audacious $25 million deepfake scam in Hong Kong in which a finance worker was tricked by a video call featuring a fabricated CFO, offer a stark preview of their devastating potential. These events have moved deepfakes from a theoretical risk to a proven and effective tool for sophisticated fraud. The normalization of these tactics in state-sponsored campaigns, including efforts to use fake remote workers to generate revenue, further demonstrates their growing role in the global threat landscape.
In response to this escalating threat, the corporate security posture is undergoing a significant strategic shift. Recognizing the increasing difficulty of preventing a convincing deepfake from reaching its target, many organizations are moving away from a prevention-centric model and toward a strategy emphasizing rapid detection and response. This pragmatic approach acknowledges that some attacks will inevitably bypass initial defenses. Consequently, the focus is now on implementing technologies that can quickly identify fabricated media, training employees to recognize the subtle cues of synthetic content, and establishing clear protocols for verifying high-stakes requests received through digital channels. While foundational practices like network visibility and policy controls remain important, there is a growing consensus that the ability to respond swiftly and decisively to a successful deepfake incident is paramount to mitigating financial and reputational damage.
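One simple expression of such a verification protocol, sketched below under the assumption that any purely digital channel can be spoofed, is a rule that routes high-value or unverified requests to an out-of-band callback before funds can move. The channel list and dollar threshold here are illustrative assumptions, not a recommended policy.

```python
# Minimal sketch of a "verify before you act" rule for high-stakes requests.
# The channel list and the dollar threshold are assumptions for illustration only.
HIGH_VALUE_THRESHOLD = 50_000  # above this, always confirm out of band

SPOOFABLE_CHANNELS = {"email", "chat", "voice-call", "video-call"}  # all can be faked convincingly


def requires_out_of_band_verification(channel: str, amount: float) -> bool:
    """Require a callback to a known-good number (or an in-person check) whenever a
    request arrives over a spoofable channel or exceeds the value threshold."""
    return channel in SPOOFABLE_CHANNELS or amount >= HIGH_VALUE_THRESHOLD


# Usage: a "CFO" on a video call requesting a large transfer still triggers verification.
print(requires_out_of_band_verification("video-call", 25_000_000))  # True
```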
The Boardroom Disconnect: Is Cyber-Risk a True Priority?
A mere 13% of survey respondents expressed confidence that corporate boards would treat cyber-risk as a Tier 1 operational priority, a figure that some industry leaders find both concerning and overly optimistic. This finding highlights a persistent and dangerous disconnect between the security community and top-level corporate governance. Amy Worley, a leader at BRG, argues that boards continue to significantly underrate the systemic risk posed by cyber threats, particularly those associated with the deployment of agentic AI. Because these autonomous systems are designed to operate with minimal human oversight, she warns that “small errors or malicious injections can balloon into large security events” before anyone is even aware of a problem. The elevated privileges and decision-making capabilities of these agents create a unique and critical security risk that demands proactive engagement from the highest levels of an organization.
This imminent threat, however, also presents a crucial opportunity for boards to evolve their approach to risk management. The rise of agentic AI should serve as a catalyst for implementing specific, forward-looking safety measures that require both foresight and dedicated budgetary allocation. Instead of viewing cybersecurity as a purely technical issue delegated to the IT department, boards must integrate it into their core strategic planning. This involves asking critical questions about the security implications of new technologies before they are adopted, demanding clear metrics on cyber-readiness, and ensuring that the organization’s risk appetite aligns with its security investments. The reality remains that even traditional cyberattacks, leading to system outages and data loss, are significant operational concerns that warrant board-level attention. The failure to address these foundational risks while simultaneously embracing advanced AI creates a precarious operational environment.
The Stubborn Persistence of Passwords
Finishing last in the poll, the notion that passwords will be widely eliminated in favor of more secure authentication methods like passkeys was supported by a scant 10% of respondents. Despite the significant momentum passkeys have gained through endorsements from technology giants like Microsoft and Google, the overwhelming consensus is that passwords will remain a fixture of the digital landscape for the foreseeable future. Rik Turner aligns with the 90% of respondents who view the complete eradication of passwords as an unlikely short-term outcome. Rather than fading into obsolescence like telex machines, passwords are now widely seen as a “hardy perennial” of the security world—a legacy system that is too deeply embedded in countless applications, systems, and user habits to be easily or quickly removed. This persistence is not due to a lack of better alternatives but to the sheer inertia of existing infrastructure.
This failure to move beyond passwords brings the central tension of the modern security landscape into sharp focus. Adam Etherington, a Practice Leader at Omdia, connects this foundational weakness directly back to the primary threat of agentic AI. He points out that major enterprise software platforms from vendors like SAP and Oracle already possess agentic capabilities that rely on API connectors and non-human identities to integrate complex business solutions. IT and security teams are already scrambling to secure these vectors, and their struggle is compounded by the persistence of weak authentication methods. The fact that CISOs often rank email security and staff awareness training—two areas intimately tied to password hygiene—as their lowest priorities reveals a critical blind spot. This neglect of basic security fundamentals in favor of focusing on more advanced threats creates a significant and unnecessary risk that will only be magnified as more powerful and autonomous AI systems are woven into the fabric of daily business operations.
