Why Is CISA Warning of Spyware in Signal and WhatsApp?

Sebastian Raiffen sits down with Rupert Marais, our in-house security specialist known for hands-on work in endpoint and device security, to unpack CISA’s latest alert on spyware and RATs aimed at high-value Signal and WhatsApp users. Rupert traces how attackers braid social engineering with technical payloads, compares regional tradecraft from the U.S. to the Middle East and Europe, and breaks down practical defenses—from FIDO authentication to Lockdown Mode and enterprise Android hygiene. Along the way, he shares case narratives that map the chain of compromise, what telemetry matters most, and how incident response decisions play out under pressure.

CISA warns about commercial spyware and RATs targeting Signal and WhatsApp users. What’s driving these campaigns now, and how do actors combine social engineering with technical payloads in practice? Walk me through a recent case you’ve seen, including timelines, victim profile, and measurable impact.

Three forces are converging: broader availability of commercial spyware, better operational security by actors, and messaging apps becoming the default channel for sensitive conversations. In practice, actors start with relationship-building—plausible pretexts, shared contacts, and urgency—then pivot to device-linking tricks or trojanized apps. In a recent case, a policy advisor received a “secure follow-up” via messaging that steered them to a spoofed app download and a device-linking QR. Within the same day, the attacker established persistence and began exfiltrating chat metadata and files. The measurable impact was immediate: unauthorized session creation, secondary payload delivery, and policy document leakage—clear proof that a human trust gap plus a technical foothold equals a fast-moving breach.

Russia‑aligned groups exploited Signal’s linked devices feature and device‑linking QR codes. How does that hijack typically unfold step by step, and where can defenders reliably break the chain? Share a concrete example, tooling used, and the user behaviors that made the attack work.

Step one is social engineering that persuades a target to “verify” or “link” a device. Step two uses a lookalike QR or a man-in-the-middle page that captures the linking token. Step three registers a secondary device and silently mirrors conversations, sometimes seeding additional payloads. Defenders can break the chain by enforcing out-of-band verification for any new link, alerting on new device enrollments, and training users to never scan linking QR codes outside the official app. In one engagement, the attacker used a phishing page styled to a messaging app’s linking UI; a hurried aide scanned the code during a late evening rush. Basic tooling included a web relay that proxied requests and replayed the token. The behavior that made it work was urgency and the assumption that “it’s still the same app, just on a different screen.”
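Alerting on new device enrollments can be sketched as a small heuristic. This is a minimal illustration, assuming a hypothetical enrollment-event schema (`device_id`, ISO `time`); real field names depend on your MDM or messaging-platform telemetry.

```python
from datetime import datetime

def flag_suspicious_links(events, known_devices, quiet_hours=(20, 7)):
    """Flag linked-device enrollments that are new or happen off-hours.

    events: list of {"device_id": str, "time": ISO-8601 str} dicts
    (a hypothetical feed, not a real platform API).
    """
    alerts = []
    for ev in events:
        hour = datetime.fromisoformat(ev["time"]).hour
        # Late-night enrollments mirror the "hurried aide" pattern above.
        off_hours = hour >= quiet_hours[0] or hour < quiet_hours[1]
        if ev["device_id"] not in known_devices or off_hours:
            alerts.append(ev["device_id"])
    return alerts
```

In practice you would pair each alert with an out-of-band confirmation to the account owner before the new session is trusted.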

The ProSpy and ToSpy Android campaigns impersonated Signal and ToTok in the UAE. What specific lures and app-store paths were used, and how was persistence achieved on devices? Compare infection rates you’ve observed, removal hurdles, and what telemetry most clearly flagged the compromise.

The lures leaned on geo-specific messaging—promises of “unblocked secure calling” or “UAE-optimized” chat. Distribution flowed through phishing links and unofficial storefronts that mimicked legitimate descriptions and icons. Persistence came from background services re-registering on boot and broad permission prompts framed as “call stability” or “media backup.” Infections clustered around users who sideloaded to bypass perceived restrictions; removal was slowed by users granting notification access and ignoring battery optimization prompts that kept services alive. Telemetry that stood out included unexpected accessibility service activation, steady foreground service notifications masked as “sync,” and outbound connections immediately after device boot. Infection prevalence was higher where sideloading was normalized, and cleanup took longest when users had granted sweeping permissions.
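The persistence traits above can be scored mechanically. A minimal sketch, assuming a hand-built app record; on a real device you would pull these grants from `dumpsys package` output or MDM inventory:

```python
# Grants that together let a surveillance app survive reboots and
# stay resident; the set is illustrative, not exhaustive.
RISKY = {"RECEIVE_BOOT_COMPLETED", "BIND_ACCESSIBILITY_SERVICE",
         "IGNORE_BATTERY_OPTIMIZATIONS"}

def persistence_score(app):
    """Count persistence-enabling grants held by one app record."""
    return len(RISKY & set(app["grants"]))

def rank_suspects(apps, threshold=2):
    """Packages holding at least `threshold` risky grants."""
    return sorted(a["package"] for a in apps
                  if persistence_score(a) >= threshold)
```

Scoring like this is a triage aid, not a verdict: plenty of legitimate apps hold one of these grants, which is why the threshold defaults to two.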

ClayRat used Telegram channels and lookalike pages to push fake WhatsApp, Google Photos, TikTok, and YouTube apps. How do you spot these fakes in the wild, and what indicators persist after installation? Share a field anecdote, including hashes, hosting patterns, and user actions that sealed the infection.

We hunt for fakes by correlating channel posts with recently registered lookalike domains, signature mismatches, and permission sets that don’t align with the app’s public manifest history. After install, indicators persist as odd provider authorities, inconsistent package names versus app labels, and services that wake on network state change. In the field, a user followed a Telegram promo to a polished landing page that perfectly cloned a popular app’s visuals; the decisive mistake was tapping “allow” on an accessibility prompt pitched as “smart replies.” We documented the hosting pattern—fresh domains with copied CDN paths—and preserved file hashes internally, but we don’t publish them here. The infection sealed when the app asked to be excluded from battery optimization, ensuring its surveillance loop survived reboots.
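The package-name-versus-label mismatch check lends itself to automation. A minimal sketch with a hypothetical allowlist keyed by label; a production check should pin signing certificates, not rely on names:

```python
# Expected package prefixes for well-known labels (illustrative subset).
OFFICIAL = {
    "WhatsApp": "com.whatsapp",
    "YouTube": "com.google.android.youtube",
}

def looks_fake(label, package):
    """True when a known app label ships under an unexpected package name."""
    expected = OFFICIAL.get(label)
    return expected is not None and not package.startswith(expected)
```

Unknown labels pass through unflagged here; in a fleet audit you would route those to a separate review queue rather than silently trust them.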

A likely chained exploit hit fewer than 200 WhatsApp users via iOS and WhatsApp flaws (CVE‑2025‑43300, CVE‑2025‑55177). How would you reconstruct that kill chain, and what logging would you prioritize on iOS? Provide a timeline, artifacts to collect, and any post‑exploitation behaviors you’d expect.

I’d model it as a delivery through the messaging channel, a trigger leveraging the app-specific flaw, then privilege escalation or sandbox escape on iOS. With fewer than 200 victims, it suggests careful targeting and rigorous testing. I’d prioritize sysdiagnose captures, MobileInstallation logs, crash logs around the messaging process, network extension logs, and any profile or managed configuration changes. Artifacts include message database anomalies, unexpected background task scheduling, and altered notification settings. Post-exploitation, expect quiet data access—attachments, contact metadata, and possibly keychain items via process abuse—paired with minimal crash signatures to avoid detection.
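Timeline reconstruction mostly means collating artifact timestamps around the suspected delivery window. A sketch under assumed inputs—hypothetical `(iso_timestamp, source, note)` tuples you would extract manually from sysdiagnose and crash-log review:

```python
from datetime import datetime, timedelta

def artifacts_near(artifacts, delivery_time, window_hours=24):
    """Chronologically order artifacts falling within a window around
    the suspected delivery time."""
    t0 = datetime.fromisoformat(delivery_time)
    lo = t0 - timedelta(hours=window_hours)
    hi = t0 + timedelta(hours=window_hours)
    return sorted(a for a in artifacts
                  if lo <= datetime.fromisoformat(a[0]) <= hi)
```

The value is in the ordering: a crash in the messaging process minutes before an unexpected background task is a very different story than the reverse.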

Samsung’s CVE‑2025‑21042 enabled delivery of LANDFALL spyware to Galaxy devices in the Middle East. What capabilities did LANDFALL showcase in your analysis, and how did operators maintain command and control? Share detection tips, on‑device traces, and a remediation playbook you’ve used.

LANDFALL presented classic surveillance traits: message harvesting, file exfiltration, and microphone activation gated by context to reduce noise. C2 hinged on rotating endpoints and timing that mimicked normal app sync cycles. On-device traces included persistent foreground services masked as system sync, unusual job scheduler entries, and sensors invoked during idle periods. Detection tips: watch for accessibility toggles, notification listener grants, and boot-complete receivers that spawn multiple services. Remediation: isolate the device, capture a forensic image if feasible, revoke all high-risk permissions, remove the malicious package via ADB or MDM, rotate credentials, and push a full OS update. If trust is broken, re-provision from known-good media.
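C2 that mimics sync cycles still tends to betray itself through unnaturally regular timing. A heuristic sketch (illustrative only, not a detection product) over connection timestamps in seconds:

```python
def beacon_like(timestamps, tolerance=0.1):
    """Heuristic: near-constant gaps between outbound connections can
    indicate a beacon hiding in "sync" traffic. Human-driven traffic
    rarely keeps inter-arrival variance this low."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # too few samples to judge regularity
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= tolerance * mean for g in gaps)
```

Operators add jitter precisely to defeat this, so treat it as one signal to combine with the on-device traces above, not a standalone detector.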

CISA notes targets include high‑value officials and civil society in the U.S., Middle East, and Europe. How do these attacker playbooks differ by region and role, and what metrics show that shift? Offer examples of tailored lures, language choices, and operational tempos you’ve tracked.

Playbooks adapt to local norms. In the Middle East, lures emphasize “uncensored calling” and “official” compatibility; in Europe, “privacy-grade compliance” resonates; in the U.S., “secure document review” hooks policy staff. Language mirrors local idioms and work rhythms, with outreach clustering around after-hours windows when support desks are quiet. Metrics we track include time-to-click from initial contact, permission grant rates after install prompts, and success of device-linking attempts following urgency cues. The tempo is unhurried with high-value officials—weeks of rapport—while civil society sees bursts aligned to news cycles.
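The metrics mentioned—time-to-click and permission grant rates—are simple to compute once the telemetry exists. A minimal sketch with hypothetical inputs:

```python
from datetime import datetime

def grant_rate(prompts):
    """Share of permission prompts a cohort accepted; `prompts` is a
    hypothetical list of booleans from awareness-training telemetry."""
    return sum(prompts) / len(prompts) if prompts else 0.0

def time_to_click_hours(first_contact, click):
    """Hours between first attacker outreach and the target's click,
    both as ISO-8601 timestamps."""
    delta = (datetime.fromisoformat(click)
             - datetime.fromisoformat(first_contact))
    return delta.total_seconds() / 3600
```

Tracked per region and role, shifts in these two numbers are what make the tempo differences described above visible.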

CISA advises E2EE and FIDO phishing‑resistant authentication, and to move away from SMS‑based MFA. In real deployments, where do users and admins stumble, and how do you fix those gaps? Walk through a rollout plan, adoption metrics, and failure modes you’ve mitigated.

Stumbles happen at registration friction and lost-token recovery. We phase rollouts: pilot with security champions, enable FIDO platform authenticators first, then add hardware keys for high-risk roles. Training covers backup keys and device-bound enrollment. Adoption grows as users see fewer prompts and faster logins. Failure modes include fallback to SMS when keys are misplaced; we cap SMS as a time-bound emergency and require rapid re-proofing. Admins need clear attestation policies so unmanaged browsers can’t enroll silently.

The guidance says set a telecom provider PIN and use a password manager. Which account‑takeover paths do these two controls actually shut down, and where do they fall short? Share a story where each blocked an attack, including attacker steps and time-to-containment.

A carrier PIN blunts SIM swap attempts by adding a barrier at the point of number porting. A password manager eliminates password reuse and helps rotate credentials after an incident. In one case, a social engineer reached a carrier rep and pushed for a swap; the PIN requirement ended the call and bought time to warn the user—containment was immediate. In another, a phishing page harvested a password, but the manager’s unique credentials meant the blast radius was one account; rotation and session revocation contained it the same morning. They fall short if fallback channels are weak—think email recovery still tied to SMS.

CISA recommends avoiding personal VPNs. In your experience, when does a personal VPN increase risk, and what safer alternatives work for travel or crisis situations? Provide concrete examples, network indicators, and decision checklists you give to high‑risk users.

Personal VPNs can increase risk when the provider inspects traffic, injects root certificates, or concentrates your data for an adversary. They also break platform protections and trigger odd routing that flags you for extra scrutiny. Safer choices: system-native encrypted DNS, trusted corporate VPNs with device posture checks, and network-isolated travel devices. Our checklist: prefer cellular over unknown Wi‑Fi, verify captive portals, avoid sideloading, and assume that any new certificate prompt is hostile.

For iPhones: enable Lockdown Mode, enroll in iCloud Private Relay, and restrict app permissions. What’s the practical security lift from each control, and what workflows break? Share before‑and‑after incident data, tuning tips, and training scripts that improved compliance.

Lockdown Mode reduces the attack surface for messaging and web content, which matters against zero-clicks. iCloud Private Relay helps with IP privacy, and permission hygiene curbs lateral data access. Trade-offs include stricter media handling and some web features breaking; we prep users on what to expect and provide whitelisting paths. Before-and-after, we’ve seen fewer suspicious crash logs and reduced anomalous network calls. Short scripts help: “Pause and check any unexpected prompts,” “Decline broad permissions,” and “Report any new device-link alerts.”

For Android: pick vendors with strong security records, only use RCS if E2EE is on, enable Chrome Enhanced Protection, keep Play Protect on, and audit permissions. How do you operationalize this at scale? Give device selection criteria, MDM policies, and measurable outcomes.

We standardize on devices with long update commitments and timely patch histories. MDM enforces Play Protect, blocks unknown sources, and auto-revokes unused permissions. Chrome Enhanced Protection is locked on, and RCS is allowed only with E2EE. Outcomes include quicker patch SLAs and fewer sideload-induced incidents. We audit telemetry for permission drift and flag apps that request accessibility without a clear business need.
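Auditing for permission drift reduces to diffing the current grant inventory against the last approved baseline. A minimal sketch, assuming both inventories are plain package-to-permissions mappings exported from your MDM:

```python
def permission_drift(baseline, current):
    """Newly granted permissions per package since the last audit.

    baseline, current: dict mapping package name -> list of permissions
    (a hypothetical export format; adapt to your MDM's schema).
    """
    drift = {}
    for pkg, perms in current.items():
        added = set(perms) - set(baseline.get(pkg, []))
        if added:
            drift[pkg] = sorted(added)
    return drift
```

New packages show up with all their permissions listed, which is exactly the behavior you want for catching freshly sideloaded apps.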

Attackers push spoofed apps through phishing and unofficial stores. How do you build a “trusted install” routine for high‑value users, and what checks should they perform every time? Describe a step‑by‑step process, tools to automate it, and success metrics from deployments.

Our routine is simple and repeatable: install only from official stores, validate publisher identity, pause on any permission that mentions accessibility or notification access, and require a second person review for sensitive roles. We automate with MDM catalogs and integrity checks. Success is measured by zero sideloads, reduced permission over-grants, and no foreground service impostors. If an app demands exclusions from battery optimization, it’s an automatic escalation.
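That routine can be encoded as a gate that either approves an install or explains why it escalates. A sketch under assumed inputs—the `source`, `publisher`, and `requested` fields are a hypothetical app-record schema, not a real store API:

```python
# Permission prompts that always trigger second-person review.
RISKY_PROMPTS = {"accessibility", "notification_access", "battery_exemption"}

def install_gate(app, approved_publishers):
    """Return (allowed, reasons) for a candidate install."""
    reasons = []
    if app["source"] != "official_store":
        reasons.append("unofficial source")
    if app["publisher"] not in approved_publishers:
        reasons.append("unknown publisher")
    risky = RISKY_PROMPTS & set(app["requested"])
    if risky:
        reasons.append("second-person review: " + ", ".join(sorted(risky)))
    return (not reasons, reasons)
```

The point of returning reasons rather than a bare boolean is that the escalation ticket writes itself.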

Zero‑click exploits keep surfacing in messaging apps. How do you layer defenses when user interaction isn’t required, and what signals indicate silent compromise? Share examples of network anomalies, battery or crash patterns, and the triage flow your team follows.

We layer by shrinking the exposed surface (Lockdown Mode), tightening network egress, and monitoring for subtle device behavior shifts. Signals include unexplained data bursts aligned to idle periods, consistent micro-spikes in battery drain, and sparse but repeating crash logs tied to messaging processes. Our triage flow: isolate network, capture logs, compare to a known-good baseline, and decide between clean-and-restore or full re-provision. The guiding principle is speed—reduce dwell time even if it means starting fresh.
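Flagging data bursts during idle windows is the easiest of those signals to automate. A minimal sketch, assuming per-hour egress samples from device telemetry; the idle window and threshold are illustrative defaults you would tune to the fleet's baseline:

```python
def idle_bursts(samples, idle_hours=range(1, 5), threshold_bytes=5_000_000):
    """Flag hours of unusually high egress during expected idle windows.

    samples: list of (hour_of_day, bytes_out) pairs.
    """
    return [(h, b) for h, b in samples
            if h in idle_hours and b > threshold_bytes]
```

Any hit here feeds straight into the triage flow: isolate, capture, compare to baseline, then decide between clean-and-restore and full re-provision.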

CISA suggests opting for the latest hardware and regular updates. Which chipset‑level or memory protections on newer phones most change the threat model, and how do you measure that improvement? Compare real incident rates across device generations and update cadences you enforce.

Newer hardware brings stronger memory protections and exploit mitigations that raise the bar for attackers, which directly reduces the viability of certain payloads. We measure improvement by tracking incidents that require kernel-level footholds and watching those trend down on the latest devices. Update cadence is critical: fast patch adoption correlates with fewer successful compromises. In program after program, modern devices under tight update policies experience noticeably fewer high-severity incidents than older, slow-to-patch fleets.

What is your forecast for mobile spyware?

Targeting will stay tight, tradecraft will get quieter, and supply chains for trojanized apps will keep improving. Expect more abuse of legitimate app features like device linking, and continued pressure on messaging platforms through both zero-click and low-click vectors. Defenders who standardize hardware, enforce strong authentication, and train for rapid isolation will have the advantage. The window of exposure is shrinking for organizations that treat mobile the same way they treat endpoints—systematically and relentlessly.
