Europol sheds light on the weaknesses of biometric identification

According to Europol, biometric recognition systems offer "a high level of security" in principle. But it is important to know the many ways in which they can be attacked.

(Image: ktsdesign/Shutterstock.com)

Germany's new black-red coalition government has resolved to allow security authorities to "retrospectively compare biometric data with publicly accessible internet data", for instance with the help of artificial intelligence (AI). A study by Europol – coming from within law enforcement itself – is now dampening the enthusiasm of surveillance advocates. According to the analysis, biometric recognition systems are "generally robust", but there are numerous ways to trick them. It is therefore crucial to "know the weak points of such systems".

Many biometric recognition systems have long been considered cracked. Hackers from the Chaos Computer Club (CCC), for example, have been creating fingerprint dummies for many years and have also circumvented vein, iris and facial recognition systems. Europol's Operations and Analysis Center and Innovation Lab are now replicating these vulnerabilities and drawing conclusions for the work of investigators.

The authors of the published report focus on so-called presentation attacks against the capture device. These aim either to impersonate a legitimate user or to evade recognition. In essence, an imitated or falsified biometric trait is presented to the biometric capture system with the aim of disrupting or undermining the process.

Presentation attacks in which fingerprints are imitated can also be carried out without a person's consent, explains Europol. In such non-consensual approaches, fingerprints are obtained from smooth or non-porous surfaces, such as glass. Alternatively, digitally generated fingerprints, which are usually used to train biometric recognition systems, could be used to create counterfeits, for example using 3D printing. Fingerprints could also be deliberately altered to evade recognition. Papillary ridges are normally damaged by working conditions or accidents, but they can also be destroyed intentionally.

Given the abundance of digital photos on social networks and other public platforms, it is also easy, according to the authors, to get hold of images with which to impersonate another person during automated facial recognition. How successful such identity fraud is depends, for example, on the sophistication of the target device: less advanced smartphones can sometimes be fooled even by a simple paper printout.
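
To see why a bare matcher falls for this, it helps to look at what such a system actually decides: only whether two feature vectors lie close enough together. The following Python sketch – with purely hypothetical embeddings and an illustrative threshold, not any particular vendor's implementation – makes that explicit; nothing in the comparison asks whether the probe image came from a live face or from paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_embedding: np.ndarray, reference_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    # Pure matching: accept whenever the two embeddings are close enough.
    # Nothing here asks whether the probe came from a live face or a printout.
    return cosine_similarity(probe_embedding, reference_embedding) >= threshold

# Toy demonstration: an embedding derived from a "printout" that deviates only
# slightly from the enrolled one still clears the threshold.
rng = np.random.default_rng(0)
enrolled = rng.standard_normal(128)
printout = enrolled + 0.1 * rng.standard_normal(128)  # hypothetical small deviation
print(verify(printout, enrolled))                      # True
```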

As possible attacks in this area, the experts describe "print and screen replay" attacks, in which an image of the victim is presented to the camera on a printout or a screen, as well as the use of masks or even just make-up. Somewhat more sophisticated are "face morphs", in which two faces are merged "so that both people in the original photos can be identified with the same morphed image". This approach is typically used to supply photos for ID documents. Deepfakes, with their AI-generated images and videos, can also imitate identities and voices deceptively well, even in live situations.
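
How such a morph comes about can be sketched in a few lines. The following simplified Python example (with hypothetical file names) merely cross-fades two roughly aligned portraits with OpenCV; real morphing tools additionally detect facial landmarks and warp both faces onto a common geometry before blending, which is what allows both original persons to be verified against the result.

```python
import cv2

# Load two roughly aligned portrait photos (hypothetical file names).
face_a = cv2.imread("person_a.jpg")
face_b = cv2.imread("person_b.jpg")
face_b = cv2.resize(face_b, (face_a.shape[1], face_a.shape[0]))

# Naive morph: a 50/50 pixel blend. Real morphing tools also warp both faces
# onto a shared landmark geometry before blending.
morph = cv2.addWeighted(face_a, 0.5, face_b, 0.5, 0)
cv2.imwrite("morph.jpg", morph)
```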

According to Europol, methods for outsmarting iris recognition include paper printouts, screen displays, artificial eyeballs and replay attacks. Textured contact lenses are also often used for this purpose.


Conversely, according to the study, there are now also numerous standardized techniques for protecting biometric traits and the recognition systems built on them. Hardware-based procedures focus on capturing additional data to prevent successful attacks. Special software can be used to detect traces of presentation attacks. The inclusion of liveness detection is important for all modalities. However, security evaluation schemes only test against known attacks. It is therefore important to anticipate the development of new methods and to share knowledge of new presentation attacks between law enforcement agencies.
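
What such software-based detection can look like in its simplest form: recaptured printouts and screen replays often lose fine image texture, which even a crude sharpness measure can pick up. The following Python sketch uses the variance of the Laplacian for this; the threshold is purely illustrative, and real presentation attack detection combines many such cues and is trained on known attack samples.

```python
import cv2

def texture_score(image_path: str) -> float:
    """Variance of the Laplacian: a crude sharpness/texture measure.
    Recaptured printouts and screen replays often score lower than live captures."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def looks_like_presentation_attack(image_path: str, threshold: float = 100.0) -> bool:
    # The threshold is purely illustrative; real systems combine many cues
    # (texture, reflections, moiré patterns, depth, challenge-response tests).
    return texture_score(image_path) < threshold
```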

According to the recommendations, it is crucial to take a holistic view of the identification process – from the moment of personal enrollment through verification to encrypted data storage. It should be borne in mind that "weak" biometric data from one system can be misused to attack other systems: any compromised biometric data increases the threat to other automated recognition techniques, regardless of how secure those are themselves.
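
For the last point – encrypted data storage – a minimal sketch shows the idea: biometric templates are never written to the database in plain form, but only after encryption with a key that is kept separately. The example uses Python's cryptography package; the key handling and the template bytes are placeholders.

```python
from cryptography.fernet import Fernet

# Illustrative only: encrypt a serialized biometric template before storing it.
key = Fernet.generate_key()               # in practice held in an HSM or key vault, not next to the data
cipher = Fernet(key)

template = b"\x01\x02\x03\x04"            # placeholder for a real serialized template
stored_record = cipher.encrypt(template)  # this ciphertext is what lands in the database

# Only a holder of the key can recover the template.
assert cipher.decrypt(stored_record) == template
```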

(ds)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.