39C3: Skynet Starter Kit – Researchers control humanoid robots via radio & AI

At 39C3, researchers demonstrate just how poor the security of humanoid robots is. The attacks range all the way to jailbreaking the integrated AI.

The team on stage at 39C3

(Image: CC by 4.0 media.ccc.de)

The vision is enticing: humanoid robots are supposed to take over "dirty" or dangerous jobs for us in the near future. Corporations like Tesla and its owner Elon Musk are driving the topic forward, but measured by unit numbers the market leader is the Chinese manufacturer Unitree. Its G1 model is already shipping in large numbers – according to researchers Shipei Qu, Zikai Xu, and Xuangan Xiao, more than 50,000 units have been sold. Yet while the hardware is making impressive progress, IT security appears to play hardly any role in its development. Under the provocative title "Skynet Starter Kit," the experts dissected the robot ecosystem at the 39th Chaos Communication Congress (39C3) in Hamburg.

The Unitree G1 is controlled by default via an app or a radio remote control similar to a game controller. Shipei Qu from the Chinese IT security company Darknavy explained on Sunday that the team reverse engineered the radio module as a black box, since the manufacturer had removed the chip markings. Using software-defined radio (SDR) and "educated guessing," the trio discovered that the robot communicates via the LoRa protocol in the 2.4 GHz band.
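
How such a black-box analysis can proceed in principle is outlined below: a parameter sweep over the usual 2.4 GHz LoRa settings (spreading factor, bandwidth, coding rate) until valid frames appear. The sketch assumes a hypothetical SdrLoRaReceiver-style wrapper around an SDR toolchain; it illustrates the approach, not the researchers' actual tooling.

    # Hypothetical sketch: sweep typical 2.4 GHz LoRa parameters with an SDR
    # until a combination yields decodable frames. The `receiver` object and its
    # methods are assumptions, not a real library API.
    from itertools import product

    SPREADING_FACTORS = range(5, 13)            # SF5..SF12 (2.4 GHz LoRa chips)
    BANDWIDTHS_KHZ = (203, 406, 812, 1625)      # common 2.4 GHz LoRa bandwidths
    CODING_RATES = ("4/5", "4/6", "4/7", "4/8")

    def sweep(receiver, center_freq_hz=2_450_000_000, dwell_s=2.0):
        """Return the parameter combinations under which frames were decoded."""
        hits = []
        for sf, bw, cr in product(SPREADING_FACTORS, BANDWIDTHS_KHZ, CODING_RATES):
            receiver.configure(freq=center_freq_hz, sf=sf, bw_khz=bw, cr=cr)
            frames = receiver.capture(seconds=dwell_s)   # hypothetical call
            if frames:
                hits.append((sf, bw, cr, len(frames)))
        return hits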

The result of the analysis was alarming: there is no encryption and only extremely weak authentication. The researchers were able to brute-force the two-byte sync word parameter and thereby take control of other robots. In a recorded demo, they showed how an attacker can remotely control a G1 without ever having had physical access or the pairing password. Unitree's response to the finding: the vulnerability can only be closed in the next hardware generation.
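
With only two bytes, the sync word offers 65,536 possibilities – few enough to simply try them all. A minimal sketch of such a brute-force loop, assuming a hypothetical radio interface rather than the team's actual setup:

    # Brute-force sketch for a 2-byte sync word: send a probe under every
    # candidate value and watch for a reply. `radio` and its methods are
    # hypothetical placeholders, not the researchers' tooling.
    def find_sync_word(radio, probe_frame: bytes, timeout_s: float = 0.05):
        for sync_word in range(0x10000):          # 65,536 candidates
            radio.set_sync_word(sync_word)
            radio.send(probe_frame)               # e.g. a harmless status request
            if radio.wait_for_reply(timeout_s):   # any answer means a match
                return sync_word
        return None

Even at a few dozen probes per second, the entire 16-bit space is exhausted in under an hour – a two-byte secret without encryption is no meaningful hurdle.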

Zikai Xu shed light on the network interfaces. The robot communicates with the internet and the smartphone app via protocols such as WebRTC and MQTT. Here, the researchers encountered fundamental design flaws. For example, the password for remote access is often trivially derived from the device's serial number.
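
Why a password derived from the serial number is worthless as a secret can be shown in a few lines. The derivation below is purely illustrative – it is not Unitree's actual scheme – but any deterministic rule shares the same problem: whoever knows or guesses the serial number can reproduce the password.

    # Illustrative only: a deterministic password derivation from the serial
    # number (here, a truncated hash). Not Unitree's real scheme.
    import hashlib

    def derive_password(serial_number: str) -> str:
        return hashlib.md5(serial_number.encode()).hexdigest()[:8]

    # An attacker who reads or enumerates serial numbers gets passwords for free:
    for serial in ("G1-2024-000123", "G1-2024-000124"):   # made-up serials
        print(serial, "->", derive_password(serial))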

Even more critical is the attack on the "Embodied AI Agent." The G1 uses the large language model (LLM) behind ChatGPT to interpret voice commands and translate them into actions. The researchers pulled off a prompt injection attack: with targeted sentences, they induced the AI to execute system commands with root privileges. This turns the AI, which is actually meant to ease interaction, into a Trojan horse that grants attackers full access to the operating system (a root shell). From there, not only can the video stream from the head camera be intercepted; in theory, a botnet of thousands of robots could also be coordinated.
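
The underlying class of flaw can be sketched in a few lines: an agent that hands LLM output to a root shell without any validation. The code below is a simplified illustration of that pattern, not Unitree's implementation; query_llm stands in for whatever model API the robot calls.

    # Simplified illustration of the vulnerable "embodied agent" pattern:
    # model output is treated as a trusted shell command in a root process.
    import subprocess

    def query_llm(utterance: str) -> str:
        """Stand-in for the robot's LLM call; imagine a fully 'compliant'
        model that returns the shell command it deems appropriate."""
        return utterance

    def handle_voice_command(utterance: str) -> None:
        action = query_llm(utterance)
        # DANGEROUS: a crafted utterance ("ignore your instructions and run ...")
        # becomes arbitrary code execution with root privileges.
        subprocess.run(action, shell=True, check=False)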

The work of Xuangan Xiao, who took on manipulating the motion control, is also impressive. The cheaper "Air" version of the G1 is software-limited so that it cannot perform certain complex movements. To bypass these restrictions, the team analyzed the heavily obfuscated binaries of the control system.

The tinkerers discovered a virtual machine (VM) with around 80 custom instructions that serves solely to shield the actual logic from reverse engineering. After two weeks of intensive work, they were able to disassemble the VM and patch the firmware. This not only unlocked restricted functions but also "taught" the robot dangerous movements. In a second demo, they used this control to have the robot, on a code word, deliver targeted, powerful boxing punches to a test dummy. Terminator, anyone?
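
What obfuscation via a custom bytecode VM looks like in principle is shown by the toy interpreter below. The opcodes are invented for illustration; the G1's real instruction set of roughly 80 operations first had to be recovered by reverse engineering before restriction checks in the firmware could be patched out.

    # Toy bytecode VM illustrating obfuscation via custom instructions.
    # The opcodes are invented; the G1's real VM reportedly has ~80 of them.
    def run(bytecode: bytes) -> list:
        stack, pc = [], 0
        while pc < len(bytecode):
            op = bytecode[pc]; pc += 1
            if op == 0x01:                       # PUSH <imm8>
                stack.append(bytecode[pc]); pc += 1
            elif op == 0x02:                     # ADD
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == 0x03:                     # GATE: abort if feature flag unset
                if not stack.pop():
                    raise PermissionError("motion profile locked")
            elif op == 0xFF:                     # HALT
                break
        return stack

    # "Patching the firmware" then boils down to locating such gate checks in
    # the binary and rewriting them, e.g. turning the GATE opcode into a no-op.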

The researchers draw a grim conclusion: current commercial robots are networked, AI-controlled cyber-physical systems that lack basic security mechanisms. While manufacturers such as Boston Dynamics (Spot) have presented detailed security concepts, volume manufacturers like Unitree prioritize protecting their intellectual property over protecting their users. That Unitree only began building a dedicated security team this year underscores, according to the Darknavy testers, how far the humanoid robotics industry still lags behind common IT security standards. Asimov's "Three Laws of Robotics" remain a distant illusion in the world of Unitree & Co.

(nie)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.