Cyber Landscape Sees Accelerating Risk in Late 2025

With a distinguished career centered on endpoint security and network management, Rupert Marais has become a leading voice in deciphering complex cyber threats. His expertise lies in connecting seemingly disparate incidents to reveal underlying trends in the threat landscape. In this conversation, we explore the persistent and evolving challenges that defined the close of 2025, from widespread vulnerabilities affecting core database technologies to the insidious abuse of trust in the software supply chain. Rupert offers his perspective on the long-term fallout from major data breaches, the growing risk of insider-facilitated attacks, and why even years-old security flaws are making a dangerous comeback. We’ll delve into the sophisticated tactics of state-sponsored actors who can manipulate infrastructure at its core, and discuss what it takes for organizations to move beyond a reactive state of constant patching toward a truly proactive security posture.

The “MongoBleed” vulnerability reportedly affects over 87,000 instances, including many internal resources. Beyond just patching, please walk us through the specific steps a security team should take to hunt for and remediate these instances, especially undiscovered databases not directly exposed to the internet.

The first thing to understand about a vulnerability like MongoBleed is that the headline number of 87,000 exposed instances is just the tip of the iceberg. The real danger often lies in the shadows of the network. A report from Wiz highlighted that 42% of all cloud environments they observed had at least one vulnerable MongoDB instance, which tells you this problem is pervasive and not just limited to what you can see from the outside. So, the initial step isn’t patching; it’s discovery. You can’t fix what you don’t know you have. A security team must immediately initiate a comprehensive asset inventory and vulnerability scanning campaign across their entire infrastructure, both cloud and on-premises. This involves more than just a simple network scan. You need authenticated scans that can look at version numbers on running services and cross-reference them against the affected versions—8.2.3, 8.0.17, and so on.
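
To make that discovery step concrete, here is a minimal sketch of the version sweep, assuming pymongo is installed and that the host list comes from your own asset inventory rather than an external scan. The affected-version set is illustrative; the authoritative list lives in MongoDB's advisory.

```python
# Minimal sketch: sweep an internal host inventory for MongoDB instances and
# flag versions matching the advisory. The IPs are placeholders and the
# AFFECTED set is illustrative, not exhaustive.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

AFFECTED = {"8.2.3", "8.0.17"}  # extend with the full list from the advisory

def check_host(host: str, port: int = 27017) -> None:
    try:
        client = MongoClient(host, port, serverSelectionTimeoutMS=2000)
        version = client.server_info()["version"]
        status = "VULNERABLE" if version in AFFECTED else "review manually"
        print(f"{host}:{port} -> mongod {version} ({status})")
    except PyMongoError:
        pass  # unreachable, not MongoDB, or auth required: queue for follow-up

for host in ["10.0.0.5", "10.0.1.12"]:  # placeholder hosts from your inventory
    check_host(host)
```

An authenticated scan would pass credentials to MongoClient; the point is the cross-reference of running versions against the advisory, not the transport.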

Once you have a complete list of vulnerable instances, you have to prioritize. An internet-facing database leaking memory is a five-alarm fire, but an internal database connected to a critical application is just as dangerous, if not more so, because attackers who gain an initial foothold will pivot to it. The remediation process then becomes a multi-stage effort. First, apply the patches released by MongoDB. Second, for any databases where patching isn’t immediately possible due to operational constraints, implement compensating controls. This could mean tightening firewall rules to restrict access to only essential application servers or placing the database behind an additional authentication layer. Finally, the hunt continues. You must analyze server memory and logs for any signs of exploitation of CVE-2025-14847. The vulnerability allows an attacker to leak sensitive data remotely from memory, so you’re looking for unusual access patterns or data exfiltration attempts that may have occurred before you even knew you were vulnerable. It’s a meticulous process of discovery, prioritization, patching, and post-mortem investigation.
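
For the hunting stage, here is one crude but useful log check, assuming mongod's structured JSON log format (MongoDB 4.4 and later). The log path and the allowlist of application-server prefixes are placeholders to adapt to your environment.

```python
# Minimal sketch: scan mongod's structured JSON log for connections from
# addresses outside an expected allowlist -- one signal of the "unusual
# access patterns" worth hunting for. Path and prefixes are placeholders.
import json

ALLOWED_PREFIXES = ("10.0.0.", "10.0.1.")  # your known application servers

with open("/var/log/mongodb/mongod.log") as log:
    for line in log:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines
        if event.get("msg") == "Connection accepted":
            remote = event.get("attr", {}).get("remote", "")  # "ip:port"
            if remote and not remote.startswith(ALLOWED_PREFIXES):
                when = event.get("t", {}).get("$date")
                print(f"Unexpected client {remote} at {when}")
```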

The 2022 LastPass breach is still leading to millions in crypto theft due to weak master passwords. What does this long tail of impact tell us about data breach recovery, and what specific actions should users take to truly secure their assets after such a compromise?

The LastPass situation is a painful but powerful lesson in the concept of a breach’s “long tail.” It tells us that for certain types of incidents, recovery is not a one-time event; it’s a continuous state of vigilance that can last for years. The core issue is that the attackers made off with the encrypted vault backups. This means they possess a static, offline copy of users’ entire digital lives. They can take their time, using powerful offline cracking rigs to brute-force weak master passwords without ever alerting LastPass or the user. The fact that threat actors, believed to have ties to the Russian cybercriminal ecosystem, have managed to steal at least $35 million in crypto as recently as late 2025—years after the breach—proves this point with brutal clarity. For the victims, the incident won’t truly be “over” until every vault protected by a weak master password has either been cracked or had its contents rotated.
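
A rough worked example makes that long tail tangible. Both figures below are assumptions for illustration: a key-derivation cost in the neighborhood of LastPass's documented 100,100-iteration PBKDF2-SHA256 default (older vaults reportedly used far fewer), and a cracking rig sustaining about 100,000 guesses per second against it.

```python
# Worked example (illustrative assumptions): expected offline-cracking time
# as a function of password length and character set, at an assumed rig
# throughput of 100,000 guesses/second against the vault's PBKDF2 setting.
GUESSES_PER_SEC = 100_000

def avg_crack_days(charset_size: int, length: int) -> float:
    keyspace = charset_size ** length
    return keyspace / (2 * GUESSES_PER_SEC) / 86_400  # expected value, days

print(f"8 lowercase letters:    {avg_crack_days(26, 8):>12.1f} days")
print(f"8 mixed-case + digits:  {avg_crack_days(62, 8):>12.1f} days")
print(f"12 mixed-case + digits: {avg_crack_days(62, 12):>12.3e} days")
```

On these assumptions, an eight-character lowercase master password falls in under two weeks on average, while twelve mixed characters pushes past any practical horizon. And exhaustive search is the attacker's worst case: real cracking runs lean on wordlists and mangling rules, so human-chosen weak passwords fall far faster than these figures suggest.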

For users caught in a breach like this, the necessary actions have to be absolute and immediate, especially if they store high-value credentials like cryptocurrency wallet keys. The first, most obvious step—changing your master password—is insufficient. It secures your live account, but it does nothing to protect the stolen, encrypted copy of your old vault. The only true remediation is to assume that everything inside that stolen vault is compromised. You must go through every single account stored in LastPass at the time of the breach and change the password. For financial accounts, crypto wallets, and email, this is non-negotiable. Furthermore, you need to enable multi-factor authentication everywhere you possibly can. The stolen vault didn’t contain your MFA keys, which makes MFA the single most effective defense against an attacker who has successfully cracked your old credentials. It’s a tedious, frustrating process, but the alternative is waking up one day to find your assets drained, long after you thought the danger had passed.

This week saw a malicious Trust Wallet extension and a fake WhatsApp API on npm, both abusing developer trust. From your experience, how has this attack surface evolved, and what practical validation process should developers follow before integrating third-party packages or extensions into their projects?

The attack surface targeting developers has evolved from opportunistic, clearly malicious packages to something far more insidious: functional clones that perfectly mimic legitimate tools while hiding a malicious payload. What we saw with the fake WhatsApp API on the npm repository is a prime example. It was named “lotusbail,” uploaded by a user named “seiren_primrose,” and it worked as advertised. Developers who integrated it found a fully functional API, which is why it was downloaded over 56,000 times. The poison was hidden inside: it intercepted every message and silently linked the attacker’s device to the victim’s WhatsApp account. The most terrifying part is that the compromise persists even after the developer realizes their mistake. Uninstalling the npm package removes the malicious code, but the attacker’s device remains linked to the account until the user manually disconnects it from their WhatsApp settings.

Similarly, the Trust Wallet incident wasn’t a flaw in the wallet itself but in the supply chain. An attacker, likely using a leaked Chrome Web Store API key, published a malicious version of the extension. It looked and felt legitimate to its one million users, but it siphoned off approximately $7 million. This shows a shift toward compromising the distribution channels that developers and users implicitly trust. To counter this, developers must adopt a Zero Trust mindset toward third-party code. Before integrating any package, the validation process should be rigorous. First, scrutinize the publisher. Is “seiren_primrose” a known, reputable developer, or a brand-new account? Second, use static and dynamic analysis tools to inspect the package’s code for suspicious behaviors, like unexpected network calls or file system access. Third, implement it in a sandboxed environment first to observe its behavior in a controlled setting. Finally, enforce a principle of least privilege. Does a WhatsApp API package really need the ability to read your entire contact list or download all media files? If the requested permissions seem excessive, that’s a massive red flag. Trust is no longer a viable security control in the modern software supply chain.
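
As a starting point for that publisher scrutiny, here is a small sketch that pulls basic trust signals from npm's public registry and download-count APIs. The package name "lotusbail" is only the example from this incident (it has presumably been removed, so the request would now fail), and these signals are a floor, not a substitute for static and dynamic analysis.

```python
# Minimal sketch: fetch basic trust signals for an npm package from the
# public registry before integrating it. A brand-new account, a lone
# maintainer, and a weeks-old package are the signals that should trigger
# deeper analysis, not an automatic pass/fail.
import json
import urllib.request

def fetch(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

pkg = "lotusbail"  # the package under review
meta = fetch(f"https://registry.npmjs.org/{pkg}")
downloads = fetch(f"https://api.npmjs.org/downloads/point/last-month/{pkg}")

print("First published:  ", meta.get("time", {}).get("created"))
print("Maintainers:      ", [m["name"] for m in meta.get("maintainers", [])])
print("Latest version:   ", meta.get("dist-tags", {}).get("latest"))
print("Monthly downloads:", downloads.get("downloads"))
```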

The Coinbase incident involved a bribed third-party contractor. When managing distributed support teams, what are the key security controls and monitoring metrics companies should use to mitigate the risk of these insider-facilitated breaches? Could you share an example of how this works in practice?

The Coinbase breach is a classic case study in the complexities of third-party risk management, especially with a globally distributed workforce. The core of the problem wasn’t a sophisticated hack; it was a human vulnerability. Hackers bribed contractors, specifically an employee named Ashita Mishra at a firm called TaskUs in India, to sell sensitive user data for as little as $200 per record. This relatively small act of corruption ultimately impacted nearly 70,000 individuals and shows that your security is only as strong as your most vulnerable partner. To mitigate this, companies need to move beyond simple background checks and implement stringent technical and procedural controls that assume an insider threat exists.

The most critical control is the principle of least privilege, enforced through granular access controls. A customer service agent should only be able to access the specific customer data required to resolve a ticket, and for a limited duration. They should never have the ability to perform bulk data exports or query the entire user database. In practice, this means implementing role-based access control (RBAC) tied to a ticketing system, where access to a user’s record is automatically granted when a ticket is opened and revoked when it’s closed. The second key control is continuous monitoring and behavioral analytics. You need to establish a baseline of normal activity for each role. For example, a typical agent might access 30-40 customer records per day. A monitoring system should automatically flag an agent who suddenly accesses 200 records or starts querying for high-net-worth individuals. Key metrics to track include data access volume, time-of-day access, access from unusual geographic locations, and attempts to access data outside of their designated role. When an anomaly is detected, it should trigger an automated alert and potentially lock the account pending a review. After the breach, Coinbase “tightened controls,” which almost certainly involved implementing these very measures to ensure that a single compromised contractor could never again cause such widespread damage.
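
Here is a minimal sketch of that behavioral baseline check. Field names, thresholds, and the hard-coded data are illustrative; in production this logic would run over your access-log pipeline and feed your alerting system.

```python
# Minimal sketch: flag agents whose daily record-access volume deviates
# sharply from their own trailing baseline. Data and thresholds are
# illustrative placeholders.
from statistics import mean, stdev

# agent -> records accessed per day over the trailing window (last = today)
access_log = {
    "agent_a": [34, 38, 31, 36, 33, 35, 37],
    "agent_b": [32, 35, 30, 33, 36, 34, 212],  # sudden spike today
}

Z_THRESHOLD = 3.0  # alert on anything more than 3 sigma above baseline

for agent, counts in access_log.items():
    baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
    today = counts[-1]
    z = (today - baseline) / spread if spread else float("inf")
    if z > Z_THRESHOLD:
        print(f"ALERT: {agent} accessed {today} records (baseline ~{baseline:.0f}/day)")
```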

Fortinet is warning about a five-year-old 2FA bypass flaw being actively exploited. Why do threat actors circle back to such old vulnerabilities, and what does this reveal about the common gaps you see in enterprise vulnerability management programs today?

It’s a common misconception that threat actors are always chasing the latest, most sophisticated zero-day vulnerabilities. The reality is that they are opportunists who follow the path of least resistance. Circling back to a five-year-old flaw like CVE-2020-12812 in FortiOS SSL VPN is an incredibly efficient strategy for them. They know that enterprise patch management is often inconsistent and incomplete. A critical appliance like an SSL VPN gateway is often deployed and then, if it’s working, left untouched for years to avoid disrupting business operations. Attackers are banking on this “set it and forget it” mentality. They maintain databases of old, reliable exploits because they know there’s a huge pool of unpatched systems out there just waiting to be compromised.

This pattern reveals a fundamental gap in many enterprise vulnerability management programs: a lack of continuous and comprehensive asset visibility. Many organizations are great at patching their core servers and endpoints, but they forget about network appliances, IoT devices, and other embedded systems. They might have run a scan when the device was first deployed in 2020, but they haven’t consistently re-evaluated its security posture since. The Fortinet flaw is particularly insidious because it’s not a full system compromise; it’s a 2FA bypass that occurs if the case of the username is changed during login. It’s a subtle logic flaw that might not be caught by generic scans. An effective program doesn’t just scan for new CVEs; it constantly re-scans the entire environment for old but newly exploited vulnerabilities. It’s about treating vulnerability management not as a project with a start and end date, but as a continuous, cyclical process of discovery, prioritization, and remediation. The fact that a five-year-old bug is making headlines is a clear sign that many organizations are failing at this basic, but critical, security discipline.
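
To see why such a flaw slips past generic scans, here is a hypothetical sketch of the general class of bug the CVE describes: primary authentication that matches usernames case-insensitively, paired with a 2FA-enrollment lookup that is case-sensitive. FortiOS's actual code is not public, so this illustrates the logic class, not the vendor's implementation.

```python
# Hypothetical illustration of the flaw class behind CVE-2020-12812:
# case-insensitive primary auth + case-sensitive 2FA lookup = silent bypass.
USERS = {"alice": "password-hash"}   # primary credential store
TWO_FA_ENROLLED = {"alice"}          # 2FA enrollment, exact-match keys

def login(username: str, password_ok: bool) -> str:
    if username.lower() not in USERS or not password_ok:
        return "denied"
    if username in TWO_FA_ENROLLED:  # BUG: lookup is case-sensitive
        return "prompt for second factor"
    return "logged in"               # second factor silently skipped

print(login("alice", True))  # -> prompt for second factor
print(login("ALICE", True))  # -> logged in: 2FA bypassed by changing case
```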

We saw Evasive Panda use sophisticated DNS poisoning while Belarusian authorities physically installed spyware. How should organizations and high-risk individuals adjust their threat models when facing adversaries who can manipulate core infrastructure or gain physical access to devices?

These two incidents perfectly illustrate the dual nature of advanced threats. On one hand, you have a sophisticated cyber espionage group like Evasive Panda, a China-linked APT, conducting adversary-in-the-middle attacks through DNS poisoning. They targeted specific victims to serve trojanized updates for popular applications like Tencent QQ, ultimately deploying their MgBot backdoor. This is a high-tech, remote attack that manipulates the very fabric of how the internet works to gain access. For an organization, defending against this requires a threat model that doesn’t implicitly trust core infrastructure. It means implementing DNSSEC to validate DNS responses, using VPNs to encrypt traffic even on seemingly trusted networks, and employing endpoint detection and response (EDR) tools that can spot the malicious activity even if the delivery mechanism is novel.
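
As a concrete first check on the DNS side, here is a short sketch using dnspython to ask whether a validating resolver sets the AD (Authenticated Data) flag on an answer. The caveat matters: the flag only attests that the upstream resolver validated DNSSEC, so the hop between you and that resolver still has to be trusted or encrypted. The resolver address and domain are placeholders.

```python
# Minimal sketch: check the AD flag on a DNS answer via a validating
# resolver. Confirms upstream DNSSEC validation only, not end-to-end trust.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["1.1.1.1"]        # placeholder validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC processing

answer = resolver.resolve("example.com", "A")
validated = bool(answer.response.flags & dns.flags.AD)
print("addresses:", [r.address for r in answer])
print("DNSSEC-validated by resolver:", validated)
```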

On the other hand, you have the brutally direct approach of the Belarusian authorities. They didn’t need a complex exploit chain; they used their physical authority to confiscate a journalist’s phone during an interrogation. Once they had the device and likely observed the PIN, they enabled “Developer Mode,” connected it to a PC, and sideloaded the ResidentBat spyware using simple ADB commands. This threat model bypasses nearly all network security controls. For high-risk individuals like journalists, activists, and dissidents, the threat model must extend to physical security. It means understanding that your device can be compromised the moment it leaves your possession. This requires using strong, alphanumeric passphrases instead of simple PINs, being aware of who might be watching you enter them, and enabling features that wipe the device after a certain number of failed login attempts. It also means recognizing that your greatest vulnerability might not be a software flaw, but a moment of coercion in an interrogation room. The adjustment for both organizations and individuals is to realize your adversary may not play by the conventional rules of cyber warfare; they will use whatever leverage they have, whether it’s control over an ISP or a badge and a locked room.

What is your forecast for how defenders can shift from a reactive patching cycle to a more proactive posture against these supply chain and insider-driven attacks?

My forecast is that the shift from reactive to proactive defense won’t be achieved through a single technology, but through a fundamental change in security philosophy centered on Zero Trust principles and secure-by-design development practices. The endless cycle of discovering a flaw, rushing to patch, and hoping you got to it before the attackers is unsustainable. The attacks we’ve discussed—from the malicious npm package to the compromised Trust Wallet extension and the insider threat at Coinbase—all exploited an implicit assumption of trust, whether it was trust in a code repository, a browser extension, or a third-party contractor. A proactive posture begins by eliminating that trust.

In practice, this means building environments where compromise is assumed. As highlighted in discussions around AI and security, it’s about being able to detect attacks that leave no files and have no traditional indicators. For developers, this means leveraging tools like Docker Hardened Images, which are now freely available, to build applications on a secure, minimal foundation from the very start, rather than trying to bolt on security later. It involves rigorously vetting every third-party library and API key, treating them as potential entry points. For organizations, it means implementing strict identity and access management controls where no user or service has standing access to critical data. Access should be granted on a per-session, just-in-time basis, and continuously monitored for anomalous behavior. The rise of dark AI tools like DIG AI, which can generate malicious code or phishing emails on demand, will only accelerate the need for this shift. The future of defense is not about building an impenetrable fortress; it’s about designing a resilient system that can detect, contain, and neutralize a threat that is already inside.
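
To ground the just-in-time idea in something concrete, here is a minimal sketch of per-session, expiring access grants. The names and in-memory store are illustrative only; a real deployment would back this with IAM and secrets infrastructure rather than a dictionary.

```python
# Minimal sketch: just-in-time access grants with a hard expiry, so no
# credential carries standing value. In-memory store is illustrative.
import secrets
import time

GRANTS: dict[str, tuple[str, str, float]] = {}  # token -> (user, resource, expiry)

def grant_access(user: str, resource: str, ttl_seconds: int = 900) -> str:
    token = secrets.token_urlsafe(32)
    GRANTS[token] = (user, resource, time.time() + ttl_seconds)
    return token  # handed to the session, never stored long-term

def check_access(token: str, resource: str) -> bool:
    entry = GRANTS.get(token)
    if entry is None or entry[1] != resource or time.time() > entry[2]:
        GRANTS.pop(token, None)  # purge expired or invalid grants
        return False
    return True

t = grant_access("support_agent_7", "customer_record_1234")
print(check_access(t, "customer_record_1234"))  # True only inside the TTL
```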
