AI Offensive: Hospitals Demand Data Access, Legal Certainty, and Funding

Doctors and nurses are expected to benefit from AI, but clinics need stable technical, legal, and financial foundations.

Hospitals see great future opportunities in Artificial Intelligence (AI) but warn of structural hurdles. In a position paper, the German Hospital Federation (DKG) calls on the federal and state governments to create targeted legal, technical, and financial foundations for the meaningful and safe use of AI applications in healthcare. "Successful implementation of AI requires clear legal frameworks, a broad, interoperable data basis, AI-capable infrastructures, effective networking, and targeted support projects for clinics," says Prof. Dr. Henriette Neumeyer, Deputy Chairwoman of the DKG.

According to the DKG, AI systems could help improve diagnoses, personalize treatment processes, and relieve doctors and nursing staff of routine tasks. Algorithms are already in use today in areas such as radiology and pathology. The real problem, however, lies not in the technology but in its implementation.

The DKG criticizes, for example, the Hospital Future Act (KHZG), which promotes digital applications but does not provide funds for building AI-capable infrastructures. There are also gaps in the Health Data Use Act (Gesundheitsdatennutzungsgesetz, GDNG): it needs to be clarified that pseudonymized patient data may also be used to train AI models. To this end, the DKG proposes a broad consent model, a one-time, transparent consent by patients to the use of their data for research purposes.

According to the DKG, "high-quality, diverse, and interoperable real-world data" from research and care are needed to develop trustworthy models. Here, hospitals refer to existing initiatives such as the federally funded Medical Informatics Initiative (MII) or the Network University Medicine (NUM), which, however, urgently need to be expanded and linked with other institutions and service providers.

Furthermore, the DKG demands clarity on liability for incorrect or hard-to-interpret AI decisions. The new EU AI Regulation points the way but still leaves many questions open, for example where the responsibility of manufacturers ends and that of users begins. It is also important to develop mechanisms against automation bias so that AI is not trusted blindly.

The therapeutic sovereignty of doctors, nurses, and members of other health professions in hospitals must also be protected. "The use of AI applications must not put users under pressure to justify themselves, especially when they deviate from the AI's suggestions," the position paper (PDF) states.

Another point in the paper is building AI competence. Hospitals should be able to systematically train their employees to operate applications safely and to recognize risks, and this must be reflected in the financing. The DKG also proposes establishing "AI hubs" as support structures at the state or federal level to assist clinics with strategy, development, and networking.

Interest in application-oriented AI training is also growing in the outpatient sector. The National Association of Statutory Health Insurance Physicians (KBV) is launching its AI Roadshow to introduce doctors and psychotherapists to the basics and legal aspects of AI use. "Artificial intelligence applications offer great opportunities to sustainably improve patient care," said KBV board member Dr. Sibylle Steiner. The series of events is intended to provide practical impetus for the responsible use of AI, from data protection and the EU AI Regulation to liability issues.

The industry has particularly high expectations for generative AI and personalized medicine. Applications that automatically create medical reports or evaluate genetic data could increase the quality and efficiency of care, according to the DKG – provided that ethical standards and data protection are maintained. AI is a tool that can help alleviate the shortage of skilled workers and improve care. However, this requires clear political priorities.

Major hospitals demonstrate how AI can be practically applied in daily clinical practice. For example, the University Hospital Essen, with its Institute for Artificial Intelligence in Medicine (IKIM), operates one of the largest FHIR (Fast Healthcare Interoperability Resources) implementations in Europe. The institute analyzes billions of clinical resources and uses AI to automatically structure medical documents and make them searchable.
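To illustrate what "making medical documents searchable" can look like on a FHIR server, here is a minimal sketch of a standard FHIR search for DocumentReference resources. The server URL, patient ID, and search text are hypothetical placeholders; this does not reflect IKIM's actual setup, and support for the `_content` full-text parameter varies between FHIR servers.

```python
import requests

# Hypothetical FHIR endpoint for illustration only (not IKIM's actual server).
FHIR_BASE = "https://fhir.example-hospital.org/fhir"

def search_documents(patient_id: str, text: str, count: int = 10):
    """Search DocumentReference resources for a patient via standard FHIR search."""
    resp = requests.get(
        f"{FHIR_BASE}/DocumentReference",
        params={
            "patient": patient_id,  # restrict to one patient's documents
            "_content": text,       # full-text search (server support varies)
            "_count": count,        # page size of the returned Bundle
        },
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # FHIR "searchset" Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for doc in search_documents("12345", "discharge summary"):
        print(doc.get("id"), doc.get("description"))
```

In such systems, AI typically adds value on top of this plain search, for example by extracting structured findings from free-text reports so that they become queryable as discrete FHIR resources.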

AI has long been part of everyday life at the University Hospital Hamburg-Eppendorf (UKE) as well. Its subsidiary IDM gGmbH has developed, among other things, Orpheus, an AI-based speech recognition system for the medical field, and Argo, a model for the automatic generation of medical reports. Both are used in clinical operations and are intended to be open to other institutions and interested parties.

(mack)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.