Could LeRobot’s Pickle RCE Let Hackers Seize Your Robots?

A lab robot that obediently fetches parts could just as easily fetch the wrong ones, or ignore safety rails entirely, if an attacker can steer its brain from afar through a network message disguised as “policy data.” That unsettling scenario moved from theory to practice with CVE-2026-25874, a critical remote code execution flaw (CVSS 9.3) in LeRobot, Hugging Face’s open-source robotics platform. At issue is unsafe deserialization through Python’s pickle in LeRobot’s async inference path, where the PolicyServer and robot clients accept unauthenticated gRPC traffic without TLS. In that pipeline, calls such as SendPolicyInstructions, SendObservations, and GetActions feed attacker-controlled bytes into pickle.loads(), a function that can execute arbitrary code while deserializing malicious data. With that single mistake, an internet-facing port becomes a universal remote for robot fleets and the machines they control.

The Vulnerability: How Pickle Opened the Door

Technical Mechanics and Proof of Exploitation

LeRobot’s async inference loop stitched together policy exchange and action selection over gRPC, but it also wired deserialize-and-execute into the hot path by calling pickle.loads() on untrusted input. Because pickle opcodes can instantiate arbitrary classes and invoke __reduce__ payloads, any object graph sent to those endpoints can execute OS commands under the PolicyServer’s privileges. Security researcher Valentin Lobstein verified end-to-end exploitation against v0.4.3, demonstrating unauthenticated RCE with a single crafted request. The risk compounded because the relevant services reportedly ran without TLS or client authentication, turning routine inference messages into a delivery vector for shell access. While the project acknowledged the exposure and scheduled a fix for v0.6.0, the affected code path remained reachable, leaving deployments vulnerable.
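The underlying mechanism is easy to demonstrate with a self-contained sketch (not LeRobot code): pickle’s __reduce__ hook lets a serialized object name any callable for the unpickler to invoke at load time. Here the callable is deliberately harmless (str.upper); an attacker would substitute something like os.system.

```python
import pickle

# __reduce__ returns (callable, args); the unpickler invokes the
# callable during deserialization. This is the primitive behind
# pickle-based RCE: the attacker, not the receiver, chooses the call.
class Payload:
    def __reduce__(self):
        # Harmless stand-in for os.system("...") or similar
        return (str.upper, ("pwned",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # executes str.upper("pwned") at load time
print(result)
```

Note that pickle.loads() never returns a Payload instance at all; it returns whatever the attacker-chosen callable produces, which is exactly why no amount of post-deserialization checking can make this path safe.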

Scope, Impact, and Project Stance

An attacker who lands code execution on PolicyServer gains a beachhead with reach into model files, API tokens, SSH keys, and stored datasets, and can pivot laterally across internal networks where robots, training rigs, and storage share flat segments. Manipulating action outputs also enables covert sabotage: tampering with GetActions responses can cause erratic motions, introduce subtle drift, or brick calibration. There is operational fallout, too—service disruption, data exfiltration, and model corruption all follow from the same primitive. The flaw was independently raised by “chenpinji” in December 2025; maintainers said the vulnerable routines started as experimental scaffolding and that security lagged as the tool matured. The irony was hard to miss: despite championing Safetensors to avoid pickle’s risks in ML artifacts, the platform still deserialized network-controlled payloads, reportedly even suppressing linter warnings.

Remediation Path: Locking Down LeRobot

Immediate Containment and Defensive Hardening

Mitigation begins at the edge. Block PolicyServer ports from the internet, enforce mutual TLS on gRPC, and require per-robot API credentials before any policy or observation exchange. Where upgrades allow, disable pickle-based paths entirely and substitute safer formats—Protocol Buffers, JSON with strict schemas, or tensor-safe containers such as Safetensors for numerical data. Introduce syscall-level controls with seccomp, AppArmor, or SELinux to confine the process, and run PolicyServer as a non-root service with read-only mounts for model directories. Network segmentation should isolate robots and inference nodes from build systems and secrets stores; treat the server as a high-risk boundary and feed logs into SIEM rules that flag anomalous object sizes or unexpected method calls. Rotate compromised credentials, invalidate access tokens, and audit model integrity after any suspicious activity.
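One way to retire the pickle path is strict, explicit parsing of each message type. The sketch below validates a JSON observation against an allowlist of fields and types; the field names are illustrative assumptions, not LeRobot’s actual wire format.

```python
import json

# Illustrative message schema -- only these keys, with these types,
# are accepted. Anything else is rejected before it reaches inference.
ALLOWED_KEYS = {"timestamp", "joint_positions", "camera_id"}

def parse_observation(raw: bytes) -> dict:
    """Parse and strictly validate one observation message."""
    msg = json.loads(raw.decode("utf-8"))
    if not isinstance(msg, dict) or set(msg) != ALLOWED_KEYS:
        raise ValueError("unexpected message shape")
    if not isinstance(msg["timestamp"], (int, float)):
        raise ValueError("timestamp must be numeric")
    if not (isinstance(msg["joint_positions"], list)
            and all(isinstance(v, (int, float)) for v in msg["joint_positions"])):
        raise ValueError("joint_positions must be a list of numbers")
    if not isinstance(msg["camera_id"], str):
        raise ValueError("camera_id must be a string")
    return msg
```

Unlike pickle, a malformed or hostile payload here can only raise an exception; it can never smuggle in executable behavior, because JSON deserialization constructs data, not objects with code hooks.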

Sustainable Fixes and Lessons for AI-Robotics Pipelines

Moving beyond stopgaps, production-grade deployments should replace pickle with explicit serialization of actions and observations, validate message types, and add replay protection and rate limits on gRPC methods. Adopt a staged rollout: feature flags to kill unsafe code paths now, backports of authentication for the v0.4.x–v0.5.x lines where feasible, and full refactors landing in v0.6.0 with typed protobuf contracts and transport encryption by default. Secure coding policies must prohibit deserializing untrusted input with execution-capable formats; automated CI checks should fail builds that import pickle in network layers. Treat robot control planes like industrial OT: conduct red-team exercises against PolicyServer, require out-of-band emergency stops, and continuously test drift detection on model outputs. By aligning design, process, and tooling, organizations can contain this class of bug before it ships and be better positioned when patches do arrive.
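A CI gate against pickle imports can be as simple as an AST scan of each source file. The helper below is a minimal sketch, not an existing LeRobot tool; a real check would walk the repository’s network-facing packages and fail the build on any hit.

```python
import ast

def imports_pickle(source: str) -> bool:
    """Return True if the given Python source imports pickle in any form."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Catches "import pickle" and "import pickle as p"
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] == "pickle" for alias in node.names):
                return True
        # Catches "from pickle import loads"
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] == "pickle":
                return True
    return False
```

Because the check operates on the parsed AST rather than raw text, it is not fooled by comments or strings that merely mention pickle, and it can be extended to other execution-capable formats such as marshal or shelve.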
