Study: Potentially dangerous AI-generated proteins are not always recognized
A study shows that AI can create dangerous proteins that bypass common security checks. Software patches should help.
(Image: Sergei Drozd/Shutterstock.com)
The ability to design new proteins using artificial intelligence is considered one of the most fascinating and at the same time riskiest developments in modern life sciences. The technology opens up entirely new possibilities for medicine, materials research and sustainable production – but it also raises questions about biosecurity. Improved detection software should provide a remedy.
A study by Wittmann et al. published in the journal "Science" shows that AI systems for protein design are indeed capable of producing variants of dangerous proteins that commercial biosafety screening systems sometimes fail to recognize.
These screening systems are a critical control mechanism for preventing misuse of the technology, for example for the production of bioweapons. The study is intended as a stress test of current security mechanisms, demonstrating their limits with regard to generative AI.
AI-generated proteins against commercial scanners
For the study, the researchers used open-source AI tools to create over 75,000 variants of known dangerous proteins. They submitted these to four different commercial biosafety screening systems (BSS) for testing. The result was clear: while the systems reliably flagged the original sequences of the proteins, their detection of the AI-generated variants, which retain a similar function with a different sequence, was inconsistent.
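To make the stress-test idea concrete, here is a minimal, hypothetical sketch of how such an evaluation could be scored. This is not the study's code: screen() is a stand-in for a vendor's BSS check, and the sequences are illustrative toy strings, not real protein data.

    # Hypothetical sketch, not the study's code: scoring a screener's
    # detection rate on original sequences vs. AI-generated variants.
    # screen() stands in for a vendor BSS call; sequences are toy strings.

    KNOWN_THREATS = {"MKTALWILLF", "MSTNPKPQRK"}  # placeholder "hazard" list

    def screen(sequence: str) -> bool:
        # Naive exact-match screening: this is the weakness the study
        # probes, since reworded variants no longer match the list.
        return sequence in KNOWN_THREATS

    def detection_rate(sequences: list[str]) -> float:
        # Fraction of submitted sequences the screener flags.
        return sum(screen(s) for s in sequences) / len(sequences)

    originals = ["MKTALWILLF", "MSTNPKPQRK"]  # known hazardous sequences
    variants = ["MKTALWVLLF", "MSTNPKAQRK"]   # same function, altered sequence

    print(f"originals flagged: {detection_rate(originals):.0%}")  # 100%
    print(f"variants flagged:  {detection_rate(variants):.0%}")   # 0% here

Real screeners compare sequences by similarity rather than exact matching, which is why the patches described below could raise the detection rate without ever guaranteeing completeness.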
In collaboration with the BSS providers, the authors then developed software updates that significantly improved detection. Even so, 100 percent detection of all potentially dangerous variants remained out of reach.
The reaction of other scientists was mixed. "The risk has increased significantly with the new AI-based technology," comments Prof. Dr. Gunnar Schröder from Forschungszentrum Jülich. The technology is now accessible to a much larger group of scientists than just a few years ago. Prof. Dr. Jens Meiler from Vanderbilt University criticizes the presentation of the study: "The study is problematic in this respect because it suggests that science has not yet dealt with the topic – but we have been doing this for two to three years."
He points to existing initiatives such as the guidelines on the responsible use of AI in biodesign. This assessment is shared by Prof. Dr. Clara Schoeder from the University of Leipzig, who also sees methodological weaknesses in the study: the dangerousness of the proteins was only predicted computationally ("in silico"), not validated in the laboratory. In addition, the targeted production of dangerous proteins still requires both a high level of expertise and malicious intent.
Technical arms race and ethical guard rails
The debate centers primarily on the appropriate countermeasures. With the software patches it helped develop, the study itself demonstrates a technical remedy, albeit one reminiscent of the fairy-tale race between the hare and the hedgehog invoked by Prof. Dr. Birte Platow (TU Dresden): a constant contest between offensive and defensive technologies.
Regulatory and ethical approaches are also needed. Prof. Dr. Dirk Lanzerath from the German Reference Centre for Ethics in the Life Sciences emphasizes the need for binding policies and the "ethics by design" principle, in which ethical considerations are integrated into the development process from the outset. Given the global risks, he considers an international exchange on standards indispensable.
At the same time, Clara Schoeder warns of the downsides of overly strict regulation: lengthy authorization processes could hinder legitimate research, such as the development of vaccines based on viral sequences. The scientific community therefore also relies on voluntary commitments and oversight by peers, as Birte Platow emphasizes.
(mack)