Why Most Security Strategies Fail And How To Fix Them

Security programs keep getting funded, yet breach costs keep climbing. According to the IBM Cost of a Data Breach Report 2024, the global average cost of a data breach reached USD 4.88 million, a 10% increase over the prior year and the largest annual jump since the pandemic. The average incident took 194 days to identify and a further 64 days to contain, a total of 258 days, or roughly eight and a half months. Passing audits has not translated into resilience. It has translated into predictability for auditors and attackers alike.

The core problem is a set of incentives and mental models that reward compliance theater and incremental updates to yesterday’s defenses. Attackers target data in motion and in use, while many programs still concentrate on protecting data at rest and the network edge. The result is familiar: clean reports, dirty outcomes.

Compliance Theater And Budget Cycles

Quarterly business reviews push teams toward visible wins rather than hard changes. Hard changes involve reworking data flows, rewriting access models, and modernizing identity, all of which require cross-functional effort and political capital. It is easier to buy another tool that maps to a control family and declare progress. Audits are necessary. They are not sufficient. An audit is a smoke alarm, not a sprinkler.

Board metrics compound the problem. Many dashboards celebrate patch counts, blocked events, and average time to close tickets. Few track the one thing attackers care about: how hard it is to turn raw access into usable data. Until that shifts, budgets will cluster around familiar controls that please auditors and leave critical gaps in protection.

Outdated Investment Logic

Sunk costs keep legacy architectures in place long after their risk-adjusted value drops. Data is copied across environments because warehouse and analytics investments were designed for open access. The longer organizations cling to permissive data models, the more exceptions and compensating controls they accumulate. At some point, the exception list becomes the real policy, and that is when incidents turn into crises.

Comfort With Familiar Controls

Encryption at rest, VPNs, and firewalls are table stakes, not a shield for data in active use. If an attacker rides a valid session, steals credentials, or exploits an application flaw, encryption at rest offers no protection once the data is decrypted in memory. This is why tokenization, format-preserving protection, application-level controls, and trusted execution are rising in adoption. They protect what the attacker wants, not just the route to it.

Underestimating Emerging Threats

Ransomware-as-a-service has professionalized crimeware. Initial access is commoditized. Lateral movement kits come with playbooks. On the horizon, post-quantum cryptography will force migration plans for public key infrastructure. On August 13, 2024, NIST finalized its first three post-quantum cryptography standards: 

  1. FIPS 203 (ML-KEM for key encapsulation), 

  2. FIPS 204 (ML-DSA for digital signatures), and 

  3. FIPS 205 (SLH-DSA for stateless hash-based signatures).

The release concluded an eight-year standardization process and started the enterprise clock for cryptographic inventory, testing, and rollout. Treating these as future issues is how backlog becomes breach.
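
Starting that clock begins with knowing where classical public-key material lives. Below is a minimal inventory sketch, assuming a local directory of PEM certificates and the widely used Python cryptography package; the path, and the choice to flag RSA and EC keys for migration planning, are illustrative rather than prescriptive.

```python
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import dsa, ec, ed25519, rsa


def classify(cert: x509.Certificate) -> str:
    """Label the public-key algorithm so quantum-vulnerable keys stand out."""
    pub = cert.public_key()
    if isinstance(pub, rsa.RSAPublicKey):
        return f"RSA-{pub.key_size} (quantum-vulnerable; plan ML-KEM/ML-DSA migration)"
    if isinstance(pub, ec.EllipticCurvePublicKey):
        return f"EC-{pub.curve.name} (quantum-vulnerable)"
    if isinstance(pub, (dsa.DSAPublicKey, ed25519.Ed25519PublicKey)):
        return "Classical signature key (quantum-vulnerable)"
    return type(pub).__name__


# "certs/" is an illustrative location; real inventories would sweep endpoints,
# key stores, and code-signing infrastructure as well.
for pem in Path("certs/").glob("*.pem"):
    cert = x509.load_pem_x509_certificate(pem.read_bytes())
    print(pem.name, cert.subject.rfc4514_string(), classify(cert),
          "expires", cert.not_valid_after.date())
```

Extending the same classification across TLS endpoints, signing keys, and hardware tokens is what turns a one-off script into the cryptographic inventory a migration plan actually needs.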

Compliance As The Finish Line

Regulations such as GDPR, CCPA, and PCI DSS establish baselines. They do not guarantee safety. Under GDPR Recital 26, data that has been genuinely anonymized falls outside the scope of data protection law entirely. If a breach exposes only properly anonymized or tokenized data, no notification obligation is triggered because the data is no longer considered personal information or readable to an attacker. That is a strategy, not a loophole. The move reduces both risk and downstream legal exposure.

Tool-Centric Security

Many programs assemble a stack of best-of-breed tools without binding them to a clear control plane and data model. The result is alert volume without context and policies that drift. A modern program treats data as the organizing principle. Identity and policy form the control plane. Detection and response enrich that model. Tools serve the model, not the other way around.

Efficiency Over Security, Then Sticker Shock

Pushback against MFA, just-in-time access, and zero-trust policies typically starts with productivity objections. Then a breach lands, and the business pays for downtime, incident response, and regulatory scrutiny that dwarfs the cost of adding friction thoughtfully. The Sophos State of Ransomware 2024 report, based on a survey of 5,000 IT and cybersecurity leaders across 14 countries, found that average ransom payments increased 500% year-over-year to $2 million, while average recovery costs, excluding the ransom, reached $2.73 million, nearly $1 million higher than the prior year. Friction added by design is cheaper than friction imposed by an attacker.

What A Data-Centric Model Looks Like

A data-centric model reframes the problem. Instead of asking how to keep attackers out, it asks how to make stolen data worthless and access pathways conditional. The practical levers are:

Map and Minimize Sensitive Data. Build an authoritative inventory of sensitive fields and where they live, move, and transform. Delete what is no longer needed. Minimize copies and shadow pipelines. Every redundant copy is a future incident.

Protect The Data Element. Use tokenization and application-layer protection to keep original values out of broad exposure zones. Format-preserving protection keeps downstream applications functioning while cutting risk.

Control Detokenization, Not Just Decryption. Centralize decision-making for when and where original values can be revealed. Apply MFA, step-up verification, and context such as location and time. Log every detokenization request for audit and analytics.

Enforce Continuous Verification. Apply zero-trust principles that treat every call as untrusted until proven otherwise. Tie access to identity strength, device health, and session risk. Privilege is earned per request.

Design For Secure Analytics And AI. Keep sensitive fields protected while enabling joins, segmentation, and model training. Use deterministic tokens for referential integrity and salted variants for lower reidentification risk. Keep detokenization out of batch and training paths by default.
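
The last lever is concrete enough to sketch. Here is a minimal illustration of deterministic versus salted token derivation using only the Python standard library; the HMAC construction, truncation length, key, and salt names are assumptions for the example, not a prescribed scheme.

```python
import hashlib
import hmac


def deterministic_token(value: str, key: bytes) -> str:
    """Same input and key always yield the same token, so joins and
    referential integrity across datasets keep working."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]


def salted_token(value: str, key: bytes, salt: str) -> str:
    """A per-dataset or per-release salt breaks linkage across datasets,
    lowering reidentification risk at the cost of joinability."""
    return hmac.new(key, f"{salt}:{value}".encode(), hashlib.sha256).hexdigest()[:16]


key = b"example-only-key"  # in practice, a managed secret
print(deterministic_token("alice@example.com", key))
print(salted_token("alice@example.com", key, salt="marketing-2024"))
print(salted_token("alice@example.com", key, salt="churn-model-v2"))
```

The trade-off is visible in the output: the deterministic token is stable and joinable wherever it appears, while the salted variants differ per dataset and cannot be linked across them.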
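
The detokenization-control and continuous-verification levers above hinge on a single decision point that evaluates purpose, identity strength, and context, and logs every request. The sketch below shows the shape of that gate; the allowed purposes, countries, and business-hours window are illustrative placeholders, not policy recommendations.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("detokenization")


@dataclass
class DetokenizationRequest:
    user: str
    purpose: str       # e.g. "chargeback-investigation"
    ticket: str        # linked case or change ticket
    mfa_verified: bool
    country: str
    hour_utc: int


ALLOWED_PURPOSES = {"chargeback-investigation", "fraud-review"}
ALLOWED_COUNTRIES = {"US", "DE"}


def authorize(req: DetokenizationRequest) -> bool:
    """Central policy decision: deny unless purpose, MFA, location, time,
    and ticket attribution all pass, and log every request either way."""
    allowed = (
        req.purpose in ALLOWED_PURPOSES
        and req.mfa_verified
        and req.country in ALLOWED_COUNTRIES
        and 8 <= req.hour_utc <= 18  # business-hours window
        and bool(req.ticket)
    )
    log.info("detokenize user=%s purpose=%s ticket=%s allowed=%s",
             req.user, req.purpose, req.ticket, allowed)
    return allowed


now = datetime.now(timezone.utc)
print(authorize(DetokenizationRequest(
    "analyst1", "fraud-review", "CASE-1042", True, "US", now.hour)))
```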

Vaultless Tokenization: Strengths And Limits

Tokenization replaces sensitive values with format-preserving tokens that cannot be reversed outside the tokenization service. Vaultless implementations generate tokens mathematically without storing a central mapping table, eliminating the vault-as-honeypot risk and a class of key management problems. Properly implemented, vaultless tokenization reduces blast radius because stolen tokenized data is useless without controlled detokenization.

It mitigates insider abuse because authorized users see only tokens unless policy permits detokenization for a specific workflow. It lowers compliance burden: under PCI DSS v4.0, systems that process only tokens can fall out of scope because tokenized data is not classified as cardholder data, directly reducing the number of systems subject to audit, the cost of compliance, and the blast radius of any breach involving those systems. And it supports analytics because deterministic tokens allow joins and trend analysis without exposing original values.
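
What "vaultless" means mechanically can be shown in a few lines: the token is derived from a secret key rather than looked up in a stored table. The toy sketch below is one-way and keeps the last four digits in the clear, both simplifications chosen for brevity; production vaultless schemes typically rely on reversible format-preserving encryption (for example, NIST FF1) or an equivalent construction so that policy-controlled detokenization remains possible.

```python
import hashlib
import hmac


def fp_token(pan: str, key: bytes, keep_last: int = 4) -> str:
    """Toy vaultless, format-preserving token: same length, digits only,
    derived from a key with no mapping table. One-way by construction;
    real products use reversible FPE so authorized detokenization works."""
    head, tail = pan[:-keep_last], pan[-keep_last:]
    digest = hmac.new(key, pan.encode(), hashlib.sha256).digest()
    # Map the MAC onto the same number of digits as the replaced portion
    # (the small modulo bias is acceptable for a sketch).
    n = int.from_bytes(digest, "big") % (10 ** len(head))
    return str(n).zfill(len(head)) + tail


print(fp_token("4111111111111111", b"example-only-key"))  # 16 digits, last 4 preserved
```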

It is not a silver bullet. Deterministic token schemes must manage collision risk. Referential integrity across systems requires consistent configuration. Latency-sensitive workloads need careful sizing to avoid performance degradation. Backup, disaster recovery, and business continuity plans must include detokenization services. And tokenization protects the data element, not the entire system. It will not stop extortion if attackers exfiltrate unprotected files, screenshots, or secrets that live outside the tokenized path.
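
On the collision point specifically: if deterministic tokens behaved like uniform random draws into a fixed-format space, the birthday bound gives a quick sense of scale, which is one reason production schemes use explicit collision handling or permutation-based constructions rather than plain random derivation. A rough estimate with illustrative numbers:

```python
import math


def collision_probability(n_values: int, token_space: float) -> float:
    """Birthday-bound estimate: probability of at least one collision when
    n_values tokens are drawn uniformly from token_space possibilities."""
    return 1 - math.exp(-n_values * (n_values - 1) / (2 * token_space))


# One million values mapped into a 12-digit numeric token space.
print(round(collision_probability(1_000_000, 10**12), 3))   # ~0.39
# Ten million values into the same space: a collision is near-certain.
print(round(collision_probability(10_000_000, 10**12), 6))  # ~1.0
```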

Metrics That Matter

Executives do not need another heat map. They need proof that risk is shrinking and productivity is intact.

Time To Contain Privilege Drift. Measure how quickly access to sensitive data is removed after a role change. The IBM 2024 report confirms the global average time to identify and contain a breach ran to 258 days, roughly eight and a half months, which is precisely why proactive access revocation is a leading indicator worth tracking rather than waiting for incident response to expose the gap.

Sensitive Data Footprint. Count copies of sensitive datasets and the number of systems able to detokenize. Fewer copies and fewer detokenizers signal a lower blast radius.

Detokenization Audit Trail Quality. Track the percentage of detokenization events with a clear business purpose, approved owner, and linked ticket. Aim for complete attribution.

Scope Reduction. Measure systems moved out of PCI DSS or similar scope through tokenization. Convert the reduced scope into audit hours and the budget saved.

Incident Impact Delta. Compare incidents before and after data-centric controls. Focus on whether stolen records were tokenized and whether notification thresholds were avoided under privacy laws due to unreadable data.
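
Most of these metrics reduce to simple queries once the underlying logs exist. Below is a minimal sketch of the audit-trail-quality calculation, assuming a hypothetical export of detokenization events with purpose, owner, and ticket fields; the field names and sample records are illustrative.

```python
# Hypothetical detokenization events exported from the policy service's audit log.
events = [
    {"user": "analyst1", "purpose": "fraud-review", "owner": "risk-team", "ticket": "CASE-1042"},
    {"user": "svc-batch", "purpose": "", "owner": "data-eng", "ticket": ""},
    {"user": "analyst2", "purpose": "chargeback-investigation", "owner": "risk-team", "ticket": "CASE-1077"},
]


def fully_attributed(event: dict) -> bool:
    """An event counts only if purpose, approved owner, and linked ticket are all present."""
    return all(event.get(field) for field in ("purpose", "owner", "ticket"))


attributed = sum(fully_attributed(e) for e in events)
print(f"Detokenization audit trail quality: {attributed / len(events):.0%}")  # 67%
```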

Procurement Checklist For Data Protection Services

Procurement choices either de-risk or re-risk the program. Demand clarity on the following when evaluating tokenization or data protection services.

Architecture. Confirm how tokens are generated, how collisions are avoided, and whether format-preserving requirements are supported across character sets.

Policy Control. Verify that the detokenization policy is externalized, auditable, and callable from applications. Confirm support for just-in-time elevation and step-up verification.

Performance And Resilience. Require published latency targets, regional redundancy, and documented disaster recovery objectives. Insist on a clear SLA.

Ecosystem Fit. Validate certified connectors and reference patterns for your core platforms, message buses, and analytics stack.

Exit And Portability. Ensure there is a documented process to rotate tokenization domains and migrate off the service without exposing cleartext.

What To Fix First

Start where data concentration and business risk intersect. For most organizations, that is the customer system of record or the analytics pipeline feeding pricing and churn models. Insert tokenization at the application layer. Move detokenization behind policy. Enforce zero-trust checks for high-value operations. Then measure and communicate in business terms. When executives see scope shrinking and incident blast radius narrowing, support expands.

Conclusion

Security programs that concentrate resources on perimeter controls while attackers move through application layers and data pipelines are not under-resourced. They are misaligned. The gap between audit performance and breach frequency is not a funding problem. It reflects a structural choice to optimize for what is measurable in a quarterly review rather than what is exploitable in a live environment.

A data-centric model, anchored in tokenization and centralized detokenization policy, does not eliminate that tension. It forces the organization to confront it directly. Protecting data at the element level means accepting integration complexity, renegotiating access norms, and retiring the assumption that perimeter controls transfer risk adequately. As telecommunications operators and other data-intensive sectors are already discovering, the programs that absorb that complexity early operate from a structurally stronger position when incidents occur. The ones that defer it continue funding controls that look strong in reports and perform poorly under pressure.
