Study: AI model "Delphi-2M" predicts disease risks
Delphi-2M is designed to assess disease risks up to 20 years in advance. A study on this has now been published in Nature.
(Image: PopTika/Shutterstock)
An AI model developed at the European Molecular Biology Laboratory (EMBL) is reportedly able to predict disease risks for more than 1000 diseases – and not only for individuals, but also for entire populations. The system, called Delphi-2M (it has two million parameters), is based on a transformer architecture, the same kind used in large language models.
Previous systems were mostly limited to individual diseases. Delphi-2M, by contrast, is designed to recognize patterns across many diagnoses simultaneously and to predict so-called "health trajectories", i.e. individual disease progressions. According to the researchers, the modeling extends up to 20 years into the future. The results were published in the journal Nature.
(Image: Shmatko et al.)
Delphi-2M was trained on clinical data from 400,000 participants in the British UK Biobank, including factors such as body mass index and consumption habits. For validation, the team drew on a Danish registry covering 1.9 million people. Initial tests show that for the risk of heart attacks, certain tumors, and mortality, the system delivered predictions about as reliable as those of specialized models. On average, Delphi-2M achieves a C-index – the standard metric for evaluating prediction models – of around 0.85 over a five-year horizon.
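To make the 0.85 figure concrete: the concordance index (C-index) is the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who experiences the event earlier; 0.5 corresponds to random guessing, 1.0 to perfect ranking. The following is a minimal illustrative sketch of that calculation – not the study's evaluation code, and all data in it is made up:

```python
# Illustrative C-index computation on hypothetical data.
# A pair is comparable if the event times differ; it is concordant
# if the earlier event got the higher predicted risk.
from itertools import combinations

def c_index(event_times, risk_scores):
    """Fraction of comparable pairs ranked correctly by risk score."""
    concordant = 0.0
    comparable = 0
    pairs = combinations(zip(event_times, risk_scores), 2)
    for (t_i, r_i), (t_j, r_j) in pairs:
        if t_i == t_j:
            continue  # tied event times: not comparable in this simple variant
        comparable += 1
        if r_i == r_j:
            concordant += 0.5  # tied risk scores count half, by convention
        elif (t_i < t_j) == (r_i > r_j):
            concordant += 1  # earlier event matched with higher risk
    return concordant / comparable

# Made-up example: four patients, years until event vs. predicted risk.
times = [2, 5, 7, 10]
risks = [0.9, 0.6, 0.7, 0.1]
print(c_index(times, risks))  # 5 of 6 pairs concordant -> ~0.83
```

In real survival analysis the calculation additionally accounts for censoring (patients who leave the study before any event), which this sketch omits for brevity.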
The technology reaches its limits where clinical pictures are complex, irregular, or rare – for example, mental disorders or pregnancy complications – because less training data exists for such conditions. In addition, the training data is not yet representative of the population as a whole: the UK Biobank mainly contains data from older, British participants.
"We are still talking about the future here. The path to actual medical application is usually longer than you think. Despite all the potential, we must not get caught up in AI-based crystal ball gazing – even the best models recognize patterns; they do not predict the future. It must be clear to patients that such prognoses are not judgments of fate. However, they can provide clues for prevention or treatment decisions," says Prof. Robert Ranisch, Junior Professor of Medical Ethics with a focus on digitalization at the University of Potsdam. It is also important "that the use of such models does not restrict patients' scope for decision-making. Their autonomy in the present must not be subordinated to a treatment regime that is geared solely towards future health. Even where this does not happen, there would still be a certain pressure to act in accordance with predicted futures. The right not to know therefore remains crucial."
Coveted by insurance companies and employers
"At the same time, it is to be feared that such AI models will arouse false desires – among insurance companies or employers, for example, especially beyond Germany. In this case, it is less about whether the predictions are actually reliable and more about the illusion of exact predictability. This can lead to people being unfairly disadvantaged. We therefore need to think very carefully about where we want to use such models in the healthcare system," Ranisch points out. Ethics and law have "so far often been based on binary categories of healthy or sick", and in "digital and preventive medicine [...] shades of gray are often decisive". Ranisch also raises further questions about what it means when healthy people fit into a pattern of "soon-to-be sick people", or how health information should be protected "when a large amount of personal data suddenly becomes relevant for AI predictions".
"When it comes to the question of who should use the technology and how, a distinction must be made between two cases: its use to assess developments in the healthcare system as a whole, and its use to make statements about individuals," explains PD Dr. Markus Herrmann, Head of AI Ethics at the Institute for Medical and Data Ethics at Heidelberg University. The former would be comparatively unproblematic, while the latter requires considering that people also have a right "not to know", i.e. a right "not to lead their lives in worry or even fear of impending illness".
(mack)