Autonomous vehicles: With uncertain sensor data, AI hits the brakes
When the sensors of autonomous vehicles are impaired, their AI systems make poor decisions. Researchers from Magdeburg say they have found a solution.
(Image: Jana DĂĽnnhaupt/Uni Magdeburg)
Autonomous vehicles face a fundamental problem: they must not only perceive their surroundings, but also recognize when their sensors can no longer deliver reliable results. Such situations arise primarily from poor weather or limited visibility: fog, rain, snowfall, or roadside vegetation interferes with the sensors of autonomous vehicles, causing their AI systems to make imprecise decisions.
Scientists at the Otto von Guericke University Magdeburg have therefore developed an AI-supported procedure that recognizes uncertain data from camera and lidar sensors and, in case of doubt, brings autonomous vehicles to a halt. According to Christoph Steup, the project leader, the methodology combines machine vision with machine self-assessment. The AI thus analyzes not only what it perceives, but also how reliable the underlying data is. If the quality of the sensor data drops below a certain threshold, the system reacts automatically.
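The announcement does not spell out how this self-assessment is implemented. A minimal sketch of the underlying idea in Python, with hypothetical names and an invented reliability threshold, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """One perception step: what was seen, plus a self-assessed reliability."""
    obstacle_detected: bool
    reliability: float  # self-assessed data quality in [0, 1] (hypothetical)

# Invented threshold below which the sensor data counts as untrustworthy.
RELIABILITY_THRESHOLD = 0.7

def react(perception: Perception) -> str:
    """Act on the perception *and* on how reliable its data basis is."""
    if perception.reliability < RELIABILITY_THRESHOLD:
        # The data basis itself is in doubt: react automatically,
        # regardless of what the (possibly wrong) perception says.
        return "controlled stop"
    return "brake" if perception.obstacle_detected else "continue"

# In fog, the self-assessment flags the data as unreliable and the
# vehicle stops, even though no obstacle was detected.
print(react(Perception(obstacle_detected=False, reliability=0.4)))
```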
Successful tests on autonomous shuttle
The research project AULA-KI, funded by the German federal government, aims to make precisely these thresholds of data trustworthiness technically detectable. The team at the University of Magdeburg has already partially implemented the developed concepts in software and tested them on an autonomous vehicle at the university's Galileo test field. The EasyMile EZ10 shuttle used is equipped with eight lidar sensors and two cameras and can transport up to six people.
During test operation, the AI-based procedure reliably detected interference with the autonomous shuttle's sensors caused by fog, rain, and snow. In moderate rain or snowfall, it was even able to partially compensate for the interference.
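The university does not say how this compensation works. One standard technique for rain- and snow-induced lidar noise is statistical outlier removal, sketched here purely as an illustration with invented parameters:

```python
import numpy as np

def remove_weather_noise(points: np.ndarray, k: int = 8, std_ratio: float = 1.5) -> np.ndarray:
    """Drop isolated lidar returns (typical for rain/snow) via statistical outlier removal.

    points: (N, 3) array of x/y/z coordinates.
    A point is kept if its mean distance to its k nearest neighbors is not
    unusually large compared to the rest of the cloud.
    """
    # Pairwise distances (fine for a sketch; real code would use a KD-tree).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbors (index 0 is the point itself).
    knn = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn < knn.mean() + std_ratio * knn.std()
    return points[keep]

# Dense cluster (e.g., a wall) plus a few isolated "snowflake" returns.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (200, 3)),   # real structure
                   rng.uniform(-5, 5, (10, 3))])   # scattered weather noise
print(len(remove_weather_noise(cloud)))  # most of the noise points are gone
```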
However, the sensor data could not always be fully corrected, especially in heavy rain or snowfall. In such cases, the goal was for the vehicle's AI systems to bring it to a controlled stop, which, according to Steup, worked quite reliably. "In case of doubt, the system preferred to be too cautious rather than too risky," says the project leader. The crucial factor for the safety of autonomous driving, however, is the upstream step: an autonomous vehicle must recognize that its data basis is disturbed before it reacts incorrectly.
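Steup's "too cautious rather than too risky" describes an asymmetric decision rule. A hedged sketch of such a rule, with invented thresholds and a hysteresis so the vehicle does not oscillate between stopping and driving:

```python
class FailSafeSupervisor:
    """Conservative supervisor: stops early, resumes late (hysteresis).

    Thresholds are illustrative, not taken from the Magdeburg project.
    """
    STOP_BELOW = 0.6    # stop as soon as reliability falls below this
    RESUME_ABOVE = 0.8  # only resume once reliability has clearly recovered

    def __init__(self) -> None:
        self.stopped = False

    def update(self, reliability: float) -> str:
        if self.stopped:
            # Standing still is the safe state: demand a higher bar to resume.
            if reliability >= self.RESUME_ABOVE:
                self.stopped = False
        elif reliability < self.STOP_BELOW:
            self.stopped = True
        return "controlled stop" if self.stopped else "drive"

supervisor = FailSafeSupervisor()
for r in (0.9, 0.5, 0.7, 0.85):  # data quality degrades, then recovers
    print(r, "->", supervisor.update(r))
# 0.9 -> drive, 0.5 -> controlled stop, 0.7 -> controlled stop, 0.85 -> drive
```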
Research could be socially relevant
Steup sees high social relevance in the research results. Autonomous vehicles will only be socially accepted if they remain reliable and safe even under difficult conditions. The procedure developed at the University of Magdeburg provides key building blocks for this.
Whether and when the system could be used in series-production vehicles is currently unclear. According to the press release from the University of Magdeburg, however, transferring the results into concrete applications is an explicit goal. The university has not yet announced any specific plans or products.
It is also an open question whether the technology developed in Magdeburg can protect itself against attacks resembling prompt injections. Only recently, scientists from the University of California, Santa Cruz, and Johns Hopkins University showed that the AI systems of autonomous vehicles can be deceived with manipulated signs: the AI models examined in the tests treated the text displayed on the signs not as mere information, but as commands to be executed. In up to 95 percent of cases, the tested AI models could be misled into making wrong decisions.
(rah)