Study: How thermal cameras in autonomous vehicles can be tricked
Vulnerabilities in the image processing of thermal cameras can trick autonomous vehicles and drones into seeing obstacles where there aren't any. But there is a remedy.
(Image: Dean Burton/Shutterstock.com)
A research team from the University of Florida has found that the thermal imaging cameras used for obstacle detection in autonomous vehicles and drones can easily be tricked by other heat sources. The thermal cameras then deliver misleading data, for example simulating phantom obstacles. Physical access to the cameras is not necessary for this.
Thermal imaging cameras are used in autonomous vehicles and drones whenever obstacles such as people, animals, machines, and other objects need to be detected in poor visibility. At night and in fog, rain, or smoke, they pick up temperature differences and, unlike conventional camera systems, do not depend on good visibility.
Vulnerabilities in Image Processing
In their study “The Heat is On: Understanding and Mitigating Vulnerabilities of Thermal Image Perception in Autonomous Systems” (PDF), presented at the 2026 Network and Distributed System Security Symposium, scientists from the University of Florida have uncovered three previously unknown vulnerabilities in the image processing of thermal cameras. These relate to image distortion, sensor calibration, and lens behavior. Problems due to these vulnerabilities can be triggered by naturally occurring or intentionally placed heat sources in the environment. The temperature perceived by the thermal camera can thus be altered, or the resulting data impaired, so that obstacles are no longer correctly recognized.
The camera's special optics and the proprietary signal processing algorithms are influenced in such a way that real obstacles go undetected or the system is presented with obstacles that do not exist. The data does not need to be manipulated through physical access to the system, since the issue lies within the image processing and the sensors themselves.
“Everything that we discovered is internal to the sensor, so the data are pretty much already manipulated when they are used by the drone or the car,” explains Sara Rampazzi, assistant professor at the Department of Computer & Information Science & Engineering (CISE) and lead author of the study. “We evaluate state-of-the-art algorithms and software running inside the cameras that are deployed by the manufacturers, and we're basically saying that they need to be safer.”
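The proprietary details of the affected processing chains are not public, but the general mechanism can be illustrated with a toy model: many thermal cameras rescale each frame's temperature range to image intensities (automatic gain control) before any detector sees the picture, so a single very hot object in view can compress the contrast of everything else. The following Python sketch is a simplified illustration under that assumption; the scene, the temperatures, and the agc_normalize function are hypothetical and not taken from the study.

```python
import numpy as np

def agc_normalize(frame_kelvin):
    # Simplified automatic gain control: map the frame's min..max
    # temperature range linearly to 8-bit intensities, as a typical
    # thermal camera does before an obstacle detector sees the image.
    lo, hi = frame_kelvin.min(), frame_kelvin.max()
    return ((frame_kelvin - lo) / (hi - lo) * 255).astype(np.uint8)

# Hypothetical scene: ~290 K background with a pedestrian at ~310 K.
scene = np.full((120, 160), 290.0)
scene[50:90, 70:85] = 310.0              # pedestrian

# Same scene plus a strong heat source (~600 K) placed in the field
# of view, e.g. a heater at the roadside.
attacked = scene.copy()
attacked[5:15, 5:15] = 600.0

clean = agc_normalize(scene)
hot = agc_normalize(attacked)

# Clean frame: the pedestrian spans the full 0..255 range.
# Attacked frame: the gain now stretches over 290..600 K, so the 20 K
# pedestrian contrast collapses to a few gray levels and may fall
# below a detector's threshold.
print("pedestrian contrast (clean):   ", int(clean[70, 77]) - int(clean[100, 40]))
print("pedestrian contrast (attacked):", int(hot[70, 77]) - int(hot[100, 40]))
```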
Countermeasures
To minimize the risk of targeted attacks on thermal cameras, the researchers developed “defensive signal processing techniques” that detect and suppress suspicious thermal image signatures. Their tools hook directly into the cameras' internal algorithms to prevent measured values from being manipulated.
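The researchers' tools themselves are not publicly available, but the idea of detecting and suppressing suspicious signatures can be sketched in the same toy model: flag implausibly hot pixels and normalize against robust percentiles of the remaining pixels, so an injected heat source cannot dominate the dynamic range. The robust_normalize function, its thresholds, and the test frame below are hypothetical illustrations, not the study's implementation.

```python
import numpy as np

def robust_normalize(frame_kelvin, plausible_max=400.0, p_lo=1.0, p_hi=99.0):
    # Illustrative defense, not the study's tool: flag implausibly hot
    # pixels as suspicious, then normalize against robust percentiles
    # of the remaining pixels so an injected heat source cannot
    # dominate the image's dynamic range.
    suspicious = frame_kelvin > plausible_max
    trusted = frame_kelvin[~suspicious] if suspicious.any() else frame_kelvin
    lo, hi = np.percentile(trusted, [p_lo, p_hi])
    clipped = np.clip(frame_kelvin, lo, hi)
    image = ((clipped - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)
    return image, suspicious

# Hypothetical frame: 290 K background, 310 K pedestrian, 600 K injected
# heat source (the same toy scene as in the attack sketch above).
frame = np.full((120, 160), 290.0)
frame[50:90, 70:85] = 310.0   # pedestrian
frame[5:15, 5:15] = 600.0     # injected heat source

image, mask = robust_normalize(frame)
print("suspicious pixels flagged:", int(mask.sum()))
print("pedestrian contrast kept: ", int(image[70, 77]) - int(image[100, 40]))
```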
Developing the tools required extensive research with real thermal datasets. For the necessary high computing power, the researchers used the HiPerGator supercomputer at the University of Florida. With it, they simulated various attack scenarios and analyzed the behavior of their model to identify potential vulnerabilities.
The research team has alerted some thermal camera manufacturers to the problem and urged them to adapt their algorithms. Whether they actually do so is something the researchers will likely never learn: Rampazzi does not expect any feedback from the companies, which generally do not disclose information about their proprietary algorithms.
(olb)