Researchers warn of visual attacks in Augmented Reality
Virtual signs, wrong turns: An AR demo shows how dangerous subtle manipulation in Augmented Reality can be.
An experiment by Duke University shows how easily people can be deceived by manipulated visual cues in augmented reality. The project addresses a potential security problem: as AR information becomes part of everyday life, even a small change could be enough to influence perception and behavior.
Miniature city as a testbed for deception
At the MobiHoc 2025 conference in Houston, Yanming Xiu and Maria Gorlatova presented an interactive miniature city, viewed through the passthrough mode of the Meta Quest 3. In mixed-reality mode, the headset captures the real environment with its cameras and blends it in real time with digital overlays shown on its display.
In the demo, the researchers altered street signs and building labels: a hospital suddenly bore the inscription “Hotel,” and a stop sign appeared at an intersection where it didn't belong. Participants navigated a remote-controlled toy car through this distorted environment and promptly fell for the false cues. In one trial, two out of three test subjects went off course without noticing the error.
The researchers call this “Visual Information Manipulation,” or “VIM” for short: the deliberate alteration or addition of visual AR content. Such deception can be harmless, like a moment of confusion in a game, but it can also be dangerous, for example when manipulated AR navigation cues steer drivers into risky situations or when false information is displayed in a medical context.
True AR glasses like Meta's Orion and Snap's Spectacles are not yet widely available. However, Snap has already announced a compact consumer version for 2026, and Meta is likely preparing its prototype for the mass market as well. But even current AR devices, such as those used in industry, draw content from various data sources. If these are compromised, attackers can inject false overlays. Areas of application where visual cues must be followed under time pressure are particularly sensitive.
Ways to protect against visual manipulation
To detect such manipulations, AI-powered detectors already exist that evaluate the real and the virtual field of view simultaneously. Xiu and Gorlatova have presented one such system, called “VIM Sense,” which combines image and text recognition to automatically report contradictions between real and virtual content. In a test run, it detected almost 89 percent of the manipulations in an AR dataset.
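The article does not spell out the detection pipeline, but the basic cross-check can be sketched in a few lines. The following Python sketch is an illustration of the idea only, not VIM Sense itself: it assumes pytesseract for OCR on the passthrough frame, and the Overlay class, the expected label, and the mismatch rule are hypothetical.

```python
# Illustrative sketch only, not the researchers' VIM Sense system:
# OCR the real passthrough frame, then flag overlays whose rendered
# text contradicts what the camera actually sees.
from dataclasses import dataclass

from PIL import Image
import pytesseract  # pip install pytesseract (needs the tesseract binary)


@dataclass
class Overlay:
    label: str     # text rendered by the AR layer, e.g. "Hotel"
    expected: str  # text the scene model expects, e.g. "Hospital" (hypothetical)


def detect_contradictions(frame: Image.Image, overlays: list[Overlay]) -> list[Overlay]:
    """Return overlays whose virtual text conflicts with the real scene."""
    real_text = pytesseract.image_to_string(frame).lower()
    suspicious = []
    for ov in overlays:
        # Flag the overlay if the camera sees the expected label
        # while the rendered label says something else entirely.
        if ov.expected.lower() in real_text and ov.label.lower() not in real_text:
            suspicious.append(ov)
    return suspicious


if __name__ == "__main__":
    frame = Image.open("passthrough_frame.png")  # hypothetical captured frame
    hits = detect_contradictions(frame, [Overlay(label="Hotel", expected="Hospital")])
    for ov in hits:
        print(f"Possible manipulation: overlay says '{ov.label}', scene suggests '{ov.expected}'")
```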
In addition to technical monitoring, design and regulatory questions are also coming to the fore. Transparent AR objects or visible origin labels could ensure that the source of an overlay remains recognizable. A kind of “reality button” on the device that immediately hides all overlays could also serve as an emergency measure against deception.
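What such a kill switch and origin labels might look like in an AR runtime can also be sketched briefly. The VirtualOverlay and ARRenderer classes and the nav.example.com source label below are illustrative assumptions, not a real headset API:

```python
# Illustrative sketch of a "reality button" and visible origin labels;
# the classes below are hypothetical, not a real AR headset API.
from dataclasses import dataclass, field


@dataclass
class VirtualOverlay:
    text: str
    source: str  # visible origin label, e.g. a hypothetical "nav.example.com"


@dataclass
class ARRenderer:
    overlays: list[VirtualOverlay] = field(default_factory=list)
    reality_mode: bool = False  # True = show only the unaltered passthrough feed

    def press_reality_button(self) -> None:
        # Emergency function: suppress every overlay at once.
        self.reality_mode = True

    def visible_overlays(self) -> list[VirtualOverlay]:
        return [] if self.reality_mode else self.overlays


renderer = ARRenderer([VirtualOverlay("Turn left", source="nav.example.com")])
print([(o.text, o.source) for o in renderer.visible_overlays()])  # [('Turn left', 'nav.example.com')]
renderer.press_reality_button()
print(renderer.visible_overlays())  # [] - only the real camera feed remains
```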
Augmented reality has not yet entered the daily lives of most people. However, the short miniature-city demo already shows that such attacks are not just theoretical. The Duke University team plans further experiments on other devices, such as the Vision Pro. Apple's headset is more powerful than a Quest 3, and its clearer, higher-resolution passthrough feed is likely to make the deception even more convincing. A comprehensive study is also planned. The researchers' goal is not only to explain the technology but also to raise awareness that AR environments can be manipulated.
(joe)