As a seasoned specialist in endpoint security and network management, Rupert Marais has spent years navigating the complex intersection of developer productivity and organizational risk. His deep expertise in cybersecurity strategy provides a unique lens into how modern infrastructure—from CI/CD pipelines to agentic AI frameworks—is becoming increasingly saturated with exposed credentials. In this conversation, we explore the alarming acceleration of secrets sprawl and the shift toward non-human identity governance.
AI-related secret leaks have spiked by over 80% recently, particularly within LLM infrastructure and orchestration tools. How is rapid AI adoption outstripping traditional security controls, and what specific steps should teams take to secure machine identities used by these new agentic frameworks?
The velocity of AI adoption is staggering, with over 1.2 million AI-related secrets leaked in 2025 alone, representing an 81% year-over-year increase. We are seeing an explosion of “agentic” frameworks where tools like Firecrawl or Supabase require deep integration, often leading developers to hardcode retrieval and orchestration API keys just to get systems talking. To secure these, teams must first implement automated scanning specifically for AI config files, such as those used by the Model Context Protocol (MCP), which leaked over 24,000 secrets in its first year. Second, organizations must shift away from static keys toward short-lived, identity-driven access tokens that expire automatically. Finally, these machine identities must be centralized in a vaulting system rather than living in local JSON files or startup flags, ensuring that every AI agent has a clear, governed lifecycle.
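The scanning step Marais describes can be sketched in a few lines of Python. The patterns below are illustrative stand-ins for common key shapes; real scanners ship hundreds of provider-specific detectors layered with entropy analysis, so treat this as a minimal sketch rather than a working tool:

```python
import json
import re

# Illustrative key shapes only -- a hypothetical subset, not a real rule set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style key shape
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token shape
]

def scan_config(text: str) -> list[str]:
    """Return every string value in an MCP-style JSON config that looks like a secret."""
    findings = []

    def walk(node):
        # Recurse through nested objects and arrays, checking leaf strings
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
        elif isinstance(node, str):
            for pattern in SECRET_PATTERNS:
                if pattern.search(node):
                    findings.append(node)
                    break

    walk(json.loads(text))
    return findings
```

Running a scanner like this against every `mcp.json`-style file in a repo or home directory is the kind of automation that catches keys before an agent framework persists them.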
Internal repositories are significantly more likely to contain hardcoded secrets than public ones, yet they often receive less scrutiny. Why does the “security through obscurity” mindset persist in private environments, and how can organizations practically transition to treating internal systems as high-risk leak sources?
There is a dangerous psychological safety net developers feel when working behind a VPN, leading to a state where 32.2% of internal repositories contain secrets compared to only 5.6% of public ones. This “security through obscurity” persists because teams assume a private perimeter is impenetrable, yet these internal repos are often gold mines of CI/CD tokens and database passwords. To transition, organizations must implement “public-by-default” security standards, treating every internal commit as if it were going to a public forum. This requires deploying automated pre-commit hooks and server-side scanning across all self-hosted GitLab and GitHub Enterprise instances to catch leaks before they are ever persisted in the git history. We have to break the illusion of the perimeter; if a secret is in an internal repo, it is essentially one compromised developer machine away from a full-scale breach.
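The pre-commit hook he mentions can be sketched as a small Python script that inspects only the lines a commit adds. The regexes here are illustrative, and the wiring (installed as `.git/hooks/pre-commit`, fed the output of `git diff --cached`) is one common convention; production teams typically use a dedicated scanner rather than hand-rolled rules:

```python
import re
import sys

# Illustrative shapes only; real hooks delegate to dedicated scanners
# with far broader, provider-specific rule sets.
SECRET_RE = re.compile(
    r"AKIA[0-9A-Z]{16}"                                   # AWS access key ID shape
    r"|-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"          # PEM private key header
    r"|(?:api[_-]?key|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]",
    re.IGNORECASE,
)

def find_leaks(diff: str) -> list[str]:
    """Return added lines of a unified diff that look like hardcoded secrets."""
    return [
        line[1:].strip()
        for line in diff.splitlines()
        if line.startswith("+")
        and not line.startswith("+++")   # skip the file-header line
        and SECRET_RE.search(line)
    ]

def main(diff: str) -> int:
    """Wired as a pre-commit hook and fed `git diff --cached`;
    a nonzero exit code is what blocks the commit."""
    leaks = find_leaks(diff)
    for leak in leaks:
        print(f"possible secret in staged change: {leak}", file=sys.stderr)
    return 1 if leaks else 0
```

The same matching logic, run server-side on every push, is what closes the gap for developers who bypass local hooks.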
Nearly a third of credential leaks occur in collaboration platforms like Slack and Jira rather than source code. Why are these secrets often more critical than those found in code, and what specific workflows can prevent developers from sharing sensitive tokens during troubleshooting or onboarding?
Secrets leaked in collaboration tools are often “live” administrative credentials shared in the heat of the moment, with 56.7% of them rated as critical compared to 43.7% found in code. When a system goes down, a developer might paste a production token into Slack to help a colleague troubleshoot, unwittingly creating a permanent, searchable record of that credential. To prevent this, companies need to implement automated “chat-ops” monitoring that flags and redacts sensitive patterns in real-time across platforms like Jira and Confluence. Furthermore, onboarding should be moved into managed environments where access is granted via Role-Based Access Control (RBAC) rather than by sharing a document full of static keys. Creating a culture where “sharing a key is an incident” rather than a shortcut is the only way to close this visibility gap, which now accounts for nearly 28% of all leaks.
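The redaction side of that chat-ops monitoring can be sketched in Python. The token shapes below are illustrative; a production bot would subscribe to message events through the platform's API (for example, Slack's Events API) and use its message-editing endpoints to perform the actual redaction:

```python
import re

# Illustrative token shapes for Slack, GitHub, and AWS credentials.
TOKEN_RE = re.compile(
    r"xox[bpars]-[A-Za-z0-9-]{10,}"   # Slack token shape
    r"|ghp_[A-Za-z0-9]{36}"           # GitHub personal access token shape
    r"|AKIA[0-9A-Z]{16}"              # AWS access key ID shape
)

def redact(message: str) -> tuple[str, bool]:
    """Replace anything token-shaped with a placeholder; report whether we did."""
    redacted, count = TOKEN_RE.subn("[REDACTED-SECRET]", message)
    return redacted, count > 0
```

The boolean flag is what drives the incident workflow: a redaction is not just a cleanup but a signal that the underlying credential should be rotated.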
A majority of secrets identified years ago remain valid and exploitable today, suggesting that detection is not the same as remediation. What operational hurdles prevent companies from rotating these long-lived credentials, and how can a team build a rotation process that doesn’t break production?
It is a sobering reality that 64% of secrets verified as valid in 2022 were still exploitable four years later, largely because the fear of “breaking the build” outweighs the fear of a breach. The main hurdle is the lack of ownership; when a secret is embedded in a legacy CI/CD variable or a container image, nobody wants to pull the thread and risk a production outage. To build a resilient rotation process, teams must first map dependencies to understand exactly which services use a specific credential. Once mapped, they should implement “blue-green” secret rotation, where a new credential is provided alongside the old one, allowing for a phased transition without downtime. Automation is the final piece—if rotation isn’t automated, it simply won’t happen at the scale required by modern enterprise sprawl.
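The blue-green pattern reduces to keeping two credentials valid during a cutover window. A minimal Python sketch follows; the class and method names are mine, not from any particular vault product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RotatingSecret:
    """Blue-green rotation: both credentials stay valid during the cutover
    window, so consumers can migrate service by service without downtime."""
    active: str
    previous: Optional[str] = None

    def rotate(self, new_value: str) -> None:
        # Phase 1: introduce the new credential alongside the old one
        self.previous = self.active
        self.active = new_value

    def verify(self, presented: str) -> bool:
        # During the transition window, either credential authenticates
        return presented in {self.active, self.previous}

    def retire_previous(self) -> None:
        # Phase 2: once dependency mapping confirms no consumer still
        # presents the old credential, revoke it for good
        self.previous = None
```

The dependency-mapping step Marais describes is what tells you when it is safe to call `retire_previous`; automating that check is what makes rotation repeatable at scale.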
Compromised developer machines and CI/CD runners often reveal a massive aggregation of secrets across shell histories, IDE configs, and build artifacts. How has the developer endpoint become a primary target for supply chain attacks, and what strategies help minimize the secret footprint on these environments?
The developer endpoint is the new “credential aggregation layer,” as evidenced by research showing that a single live secret often appears in eight different locations on a single machine. Attackers target these machines because they provide a consolidated jumping-off point to cloud environments, with 59% of compromised systems in some attacks being CI/CD runners rather than personal laptops. To minimize this footprint, we must move toward “stateless” development environments where shell histories are purged and cached tokens have extremely short TTLs (Time-to-Live). Developers should use secret-less workflows, such as OIDC (OpenID Connect) for cloud authentication, which eliminates the need to store long-lived AWS or Azure keys in .env files or IDE configurations. By treating the developer machine as a transient tool rather than a permanent vault, we significantly reduce the “blast radius” of a local compromise.
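The short-TTL portion of that advice can be sketched in Python. The OIDC exchange itself is platform-specific (for instance, a CI job trading its identity token for temporary cloud credentials), so this sketch only enforces expiry; the injectable clock exists purely so the behavior can be tested deterministically:

```python
import time

class ShortLivedToken:
    """Hold a credential only for its TTL; once expired, the holder must
    re-authenticate (e.g. via an OIDC exchange) instead of reusing cached state."""

    def __init__(self, value: str, ttl_seconds: float, clock=time.monotonic):
        self._value = value
        self._clock = clock
        self._expires_at = clock() + ttl_seconds

    def get(self) -> str:
        # Refusing to return an expired token is what shrinks the blast
        # radius of a compromised endpoint: stolen state goes stale fast.
        if self._clock() >= self._expires_at:
            raise PermissionError("token expired; re-authenticate via OIDC")
        return self._value
```

A laptop image or CI runner built around this pattern holds nothing worth stealing for long, which is the point of treating the machine as transient.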
Many organizations are moving beyond simple detection toward comprehensive non-human identity governance. What are the key pillars of managing the lifecycles of service accounts and AI agents, and how do you determine who truly owns these identities in a sprawling distributed environment?
Non-human identity (NHI) governance rests on three pillars: inventory, ownership, and least privilege. You cannot secure what you don’t know exists, so the first step is creating a unified registry of every service account, CI job, and AI agent across the entire ecosystem. Determining ownership in a distributed environment requires metadata tagging—linking every created identity to a specific team or project at the moment of creation. We must move to a model where identities are treated like ephemeral assets with a defined “sunset” date; if an AI agent hasn’t made an API call in 30 days, its credentials should be automatically revoked. This lifecycle management ensures that our digital environment doesn’t become cluttered with “zombie” identities that provide silent backdoors for attackers.
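The 30-day sunset rule can be sketched as a periodic sweep over the identity registry. The field names here are hypothetical, but they show how ownership metadata tagged at creation pays off at revocation time, because there is always a team to notify:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

SUNSET = timedelta(days=30)  # idle window from the policy described above

@dataclass
class MachineIdentity:
    name: str
    owner_team: str           # ownership metadata attached at creation
    last_api_call: datetime
    revoked: bool = False

def sweep(registry: list[MachineIdentity], now: datetime) -> list[str]:
    """Revoke every identity idle past the sunset window; return owners to notify."""
    notify = []
    for identity in registry:
        if not identity.revoked and now - identity.last_api_call > SUNSET:
            identity.revoked = True          # close the silent backdoor
            notify.append(identity.owner_team)
    return notify
```

Run on a schedule against the unified registry, a sweep like this is what keeps “zombie” service accounts and idle AI agents from accumulating.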
What is your forecast for secrets sprawl?
I predict that secrets sprawl will continue to outpace developer growth, likely reaching a point where 100 million new secrets are exposed annually by 2028 if current trends hold. As we move deeper into the era of agentic AI, the sheer volume of “machine-to-machine” interactions will make manual secrets management humanly impossible. We will see a mandatory industry shift toward “Secretless” architectures, where the very concept of a static, hardcoded string becomes an architectural relic. Organizations that fail to adopt automated non-human identity governance will find themselves in a perpetual state of breach, as the speed of AI-driven development simply leaves no room for the slow, manual security practices of the past.
