"Behind every data point is a person"

A researcher calls for more interoperability, infrastructure expansion, and careful handling of patient data for the use of AI in healthcare.


(Image: ArtemisDiana/Shutterstock.com)


The expansion and networking of existing infrastructures are important for the use of AI in medicine, explained Dr. Jacqueline Lammert, head of the research group AI for Women's Health at the Technical University of Munich, at the 7th Digital Health Symposium of the Technology and Methods Platform for Networked Medical Research (TMF).

"We are building on existing infrastructures. We have excellent data integration centers in Germany, and we can expand them with our high-performance GPU clusters. And if we then use open-source standards, such as Kubernetes, and if we also use core datasets to document this data homogeneously, then we can ensure interoperability." Core datasets are standardized health data required to ensure seamless and lossless information exchange between different IT systems and organizations.

Building on these standards, secure European cloud infrastructures are then needed; Lammert wants them, among other things, for real-time data processing. However, it must not be forgotten that behind every data point is a person and that this data must be handled with care.

She sees open standards and open-source software as a fundamental prerequisite for digital sovereignty in European healthcare. Currently, there is too much dependence on a few hardware and cloud providers. "We order from Nvidia, simply because they have the monopoly on it," Lammert said in a subsequent discussion. Moving to the cloud is currently not easy, for example for a municipal hospital.

Lammert, who herself fine-tunes LLMs with medical data, was nevertheless critical of large language models (LLMs). In her opinion, they cannot simply be let loose. "We need to train staff and inform them about the risks in particular, because we know that errors happen and hallucinations can also occur. We cannot prevent them, but we can control them." Innovation, she said, is not simply about "making something paperless in the digitization process, but really about transformation."

Another problem area, according to Lammert, lies in data quality, as she explained in her presentation: "Over 80 percent of all data is in unstructured formats." Her team has already shown with LLM-supported methods that diagnoses, therapies, and biomarker profiles can be precisely derived from text data.
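The article does not describe the team's actual pipeline, but the general pattern of LLM-supported structuring of clinical text can be sketched as follows. This is a generic, hedged example: `llm_complete` is a placeholder for whatever model call is available, and the JSON fields are invented for illustration.

```python
import json

def extract_structured_findings(report_text: str, llm_complete) -> dict:
    """Illustrative sketch: ask an LLM to map a free-text clinical report
    to a fixed JSON schema. `llm_complete` stands in for any text-in,
    text-out model call; it is not a specific library API."""
    prompt = (
        "Extract the following fields from the clinical report below and "
        "answer with JSON only: diagnosis, therapies (list), "
        "biomarkers (list of {name, status}). Use null if a field is absent.\n\n"
        f"Report:\n{report_text}"
    )
    raw = llm_complete(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Malformed or hallucinated output is caught rather than trusted,
        # echoing Lammert's point that errors cannot be prevented,
        # but they can be controlled.
        return {"error": "unparseable model output", "raw": raw}
```

Validating the model output against a fixed schema before it enters any downstream system is one concrete way to keep a human, not the model, in charge of the final record.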

One example of this is the joint project GoTwin ("Gynecologic Oncology – Targeting Women's Individual Needs"), which aims to develop personalized therapies for patients with ovarian cancer. For this purpose, digital twins of the patients are created, combining imaging data, laboratory data, genetic profiles, and therapy courses.

"We want to create virtual images of these women. And not just from the doctor's letters, from tabular data and imaging data, but we want to represent these therapy courses really well. This means modeling long-term courses to set up the prediction models there, to make better therapy predictions."

Trust in AI arises, according to Lammert, "by actively involving people. And not just when it comes to validating an answer ... because people have not just the last word, but also the first."

(mack)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.