PanoRadar: Radio waves give robots superhuman vision

The better robots can see, the better they can interpret their surroundings. Where conventional sensors fail, radio waves and AI can help.


Freddy Liu, Haowen Lai and Mingmin Zhao (from left to right) together with the PanoRadar they developed.

(Image: University of Pennsylvania)


A research team from the School of Engineering and Applied Science at the University of Pennsylvania (Penn Engineering) has developed PanoRadar, a radio-wave-based sensor that gives robots superhuman vision. It lets robots perceive their surroundings even under difficult conditions such as fog and smoke.

In poor weather conditions, light-based vision sensors such as cameras or lidar (light detection and ranging) fail. But it doesn't have to be that way. A look at nature shows that many creatures perceive their surroundings even under adverse conditions: bats navigate by the echoes of sound waves, and sharks detect prey via electric fields.

Robots usually use cameras and lidar to orient themselves in their environment. Both work well as long as there is sufficient light. If not, some robots use conventional radar to improve their perception of their surroundings. The disadvantage: radar images are quite coarse, with low resolution.


According to the research team, perceiving the environment via radio waves is a universal solution, as they describe in the study "Enabling Visual Recognition at Radio Frequency", published in the Proceedings of the 30th Annual International Conference on Mobile Computing and Networking. In it, the researchers report that they have succeeded in creating 3D views of environments in near real time using simple radio waves and artificial intelligence (AI).

"Our initial question was whether we could combine the best of both acquisition modalities," says Mingmin Zhao, Professor of Computer and Information Science. "The robustness of radio signals, which also work in fog and other difficult conditions, and the high resolution of visual sensors."

The sensor developed by the scientists works like a lighthouse: a vertical antenna array rotates in a circle, emitting long-wave radio signals and picking up their reflections, which are then interpreted. AI algorithms combine the measurements from all rotation angles, yielding more measuring points and a higher image resolution.
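Why rotating the antenna helps can be seen from a standard radar rule of thumb: angular resolution scales with wavelength divided by aperture size, so sweeping the antenna through an arc acts like a much larger (synthetic) antenna. The sketch below illustrates this with purely assumed numbers; the wavelength and aperture values are not taken from the study.

```python
import math

# Hedged back-of-the-envelope: angular resolution of a rotating
# ("lighthouse"-style) radar that combines echoes from many rotation
# angles. All numbers below are illustrative assumptions.

wavelength = 0.004          # 4 mm radio wavelength (assumed)
physical_aperture = 0.02    # 2 cm real antenna (assumed)

# A single static antenna resolves roughly wavelength / aperture radians.
single_res = wavelength / physical_aperture

# Rotating the antenna sweeps out a much larger synthetic aperture;
# assume echoes can be combined coherently over a 20 cm arc.
synthetic_aperture = 0.20
combined_res = wavelength / synthetic_aperture

print(f"single antenna:  {math.degrees(single_res):.1f} deg")
print(f"after combining: {math.degrees(combined_res):.2f} deg")
```

With these assumed values, combining measurements over the arc improves the angular resolution tenfold, which is the basic reason a small rotating sensor can approach lidar-like sharpness.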

"To achieve a resolution comparable to lidar with radio signals, we had to combine measurements from many different positions with sub-millimeter accuracy," explains Haowen Lai, the lead author of the study. "This becomes particularly difficult when the robot is moving, as even small movement errors can significantly affect the image quality."
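The sub-millimeter requirement follows from the physics of coherent combination: the radar measures the round-trip phase of each echo, and a position error of Δd shifts that phase by 4πΔd/λ. The sketch below, using an assumed millimeter-wave wavelength not taken from the study, shows how quickly small motion errors destroy phase coherence.

```python
import math

# Hedged illustration: phase error caused by robot position error when
# combining radar echoes coherently. The wavelength is an assumed
# mmWave value, not one reported in the study.

wavelength = 0.004  # 4 mm (assumed)

def phase_error_deg(delta_d: float) -> float:
    """Round-trip phase error in degrees for a position error delta_d (meters)."""
    return math.degrees(4 * math.pi * delta_d / wavelength)

for delta_d in (0.0001, 0.0005, 0.001):  # 0.1 mm, 0.5 mm, 1 mm
    print(f"{delta_d * 1000:.1f} mm error -> {phase_error_deg(delta_d):.0f} deg phase shift")
```

Under these assumptions, a 1 mm position error already flips the echo's phase by 180 degrees, turning constructive combination into destructive cancellation, which is why motion errors degrade the image so strongly.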

The researchers also developed a method to teach the system to interpret what it is currently seeing. To do this, they trained an AI model with lidar data from different environments and matched it against the patterns and geometries of the captured spaces. With the help of this model, PanoRadar can recognize its surroundings precisely.
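The training setup described above is standard supervised learning: the radar's measurements are the input, and lidar depth serves as the ground-truth target. The real system uses a deep network; the sketch below stands in with a one-parameter model and synthetic data, purely to illustrate the supervision idea, not the authors' actual pipeline.

```python
# Hedged sketch of lidar-supervised training: fit a model that maps
# radar readings to depth, using lidar depths as ground truth.
# The single "scale" parameter and all data below are synthetic
# illustrations, not part of the PanoRadar system.

radar = [1.0, 2.0, 3.0, 4.0]   # raw radar range readings (synthetic)
lidar = [1.1, 2.2, 3.3, 4.4]   # lidar "ground truth" depths (synthetic)

scale = 0.0                    # model parameter to learn
lr = 0.01                      # learning rate
for _ in range(2000):
    # gradient of the mean-squared error between prediction and lidar depth
    grad = sum(2 * (scale * r - l) * r for r, l in zip(radar, lidar)) / len(radar)
    scale -= lr * grad

print(f"learned scale: {scale:.3f}")  # converges near 1.1
```

The design point is that lidar only has to be available at training time: once the model has learned the mapping, the deployed robot relies on radar alone, including in fog and smoke where lidar would fail.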

The system can also see through fog and smoke and locate objects precisely, because radio waves are far less easily blocked by airborne particles than light is. In contrast to lidar, it can even detect glass walls, as the researchers found in tests.

Thanks to its high resolution, the system can also detect people precisely, which means PanoRadar could be used in autonomous vehicles in all weather conditions. It is not expensive either: according to the Penn Engineering scientists, it costs only a fraction of a typical lidar system. However, PanoRadar should preferably not be used on its own. Only in combination with other environment-sensing systems, each with its own strengths, can robots perceive their surroundings optimally.

(olb)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.