Doctors reject covert AI monitoring app for suicide prevention and the like

What is AI allowed to do in the doctor-patient relationship? This was explored by participants in the Future Discourse of the Lower Saxony Medical Association.


The conditions under which AI should and should not be used were discussed at an event organized by the Lower Saxony Medical Association and Hannover Medical School.

(Image: heise online, gezeichnet von Tanja Föhr)

This article was originally published in German and has been automatically translated.

It is already impossible to imagine medicine without digital support. But where should the journey take us, particularly in the field of generative artificial intelligence, and what undesirable developments need to be avoided? Doctors and other interested parties addressed these questions at an event organized by the Lower Saxony Medical Association and Hannover Medical School (Medizinische Hochschule Hannover, MHH) on future scenarios for artificial intelligence in the doctor-patient relationship.

The future discourse "My doctor, AI and me", supported by the Lower Saxony Ministry of Science and Culture, also addressed the question of responsibility. The overall focus was on how doctors and patients deal with AI. Self-determination and patient trust also played an important role.

Lately, the Lower Saxony Medical Association has dealt with topics such as cyberchondria, AI, the digitalization boom during the coronavirus pandemic, and telemedicine. "The patient-doctor relationship must always remain individual," explained Dr. Martina Wenker, President of the Lower Saxony Medical Association. Doctors are particularly concerned about the topic of AI. One aspect that is particularly significant to Wenker comes from the opinion of the Central Ethics Committee of the German Medical Association on upholding ethical principles in medicine. It states:

"Because medical responsibility is always committed to the individual patient, it is subject to strict standards of care as well as the requirement to adapt treatment to the current state of knowledge of medical science and practice."

Accordingly, the doctor is responsible for ensuring that AI serves to improve care.

"In ethics, there is the so-called control dilemma. Once a technology has become widespread in the field, it is usually difficult to regulate it," explained Dr. phil. Frank Ursin from the Institute for Ethics, History and Philosophy of Medicine at Hannover Medical School. Nevertheless, not everything is predictable. The aim of the event was to provide results on questions relating to artificial intelligence in decision-making and what role AI should play. "Is it a real player or just a passive tool?" asked Rusin.

It also needs to be clarified which problems AI can actually solve and which new problems it brings with it. One of the hopes is that AI will help with "the lack of personnel", said Ursin. Fears include the loss of individuality in care, but also the potential for discrimination. AI must also be trustworthy and explainable. However, generative AI is a black box whose constantly evolving algorithms are sometimes incomprehensible.

"There is actually no area in medicine where we don't see AI approaches today," explained Martin Schultz, Professor of Technology Management at Kiel University. Schultz has identified "robotics, therapy monitoring and support, risk assessment, self-management, and diagnosis" as fields of application for AI in healthcare. Lifestyle data, socio-demographic data, image data and patient records are available for personalized medicine, which can be merged into a large data pool and evaluated using machine learning. The big question is then: what do I actually do with this data in the care process? It is also necessary to find out how colleagues from different areas feel about the introduction of AI-supported methods.

It is also important to know what decision-making options are available and what role the explainability of AI plays. In a workshop, a central part of the event, the participants, mainly doctors, were confronted with various ethical questions and, above all, with the question of what role they play in the use of the disruptive technology of generative AI.

There were various fictitious but conceivable future scenarios, which primarily concerned the question of responsibility: when is the patient responsible, when the doctor, and when the manufacturer? In addition, the question was raised whether the doctor uses AI merely as an instrument or whether it could, in certain cases, act as the doctor's equal.

One case study dealt with the doctor's responsibility in the case of a so-called closed-loop system for diabetes, which a patient wants to use on her own initiative and in a self-determined manner. An insulin pump automatically injects insulin to keep the blood sugar level constant. But what role does the doctor play if the patient's lifestyle changes radically and the data automatically transmitted to the doctor shows that she is no longer taking sufficient care of her health? The doctor points out to the patient that the AI system cannot compensate for her lifestyle. From the workshop participants' perspective, the patient should still be allowed to lead a self-determined life.

In an extension of the scenario, the patient begins to interact with a chatbot and ask it questions. This could lead to a dependency on the chatbot, but also to other conflict situations, for example with the doctor. It may also be that the patient no longer wants to use the closed-loop system and no longer wants to be monitored.

In these and similar scenarios, various other questions also arose, such as the availability of the doctor outside regular consultation hours, or whether the manufacturer should provide a support hotline that medical experts can also access, for example via a connected telemedicine center.

In another fictitious but not unrealistic scenario, the question of what doctors would think of an app that monitors the patient without their knowledge proved particularly interesting. As soon as a suicide risk is detected, the app sounds the alarm. All participants answered intuitively: "That's not possible", and referred to the patient's right to self-determination.

Excerpt from the workshop material for the future discourse "My doctor, AI and me". Suicidal Max doesn't know that an app is monitoring him. Doctors at the workshop don't like this.

(Image: heise online/ Zeichnung von Tanja Föhr)

The question was also raised as to whether the doctor should always be available. It would also be difficult if this warning were passed on to other healthcare professionals, and it was unclear whether the doctor would then have to call the police.

How would doctors decide if the AI had a different opinion? In another case, an AI system had already made a misdiagnosis that led to the patient going blind. Because of such possible errors, the workshop participants considered it particularly important that the doctor checks the plausibility of the decision and, if in doubt, questions the AI rather than simply accepting its decision unchallenged. Most doctors would ask an experienced colleague for advice. Others would discuss the disagreement between the AI and the doctor openly with the patient and look for a solution together.

The question of whether an AI could be elevated to the level of a doctor and thus be regarded as a second opinion was also discussed. This is possible in radiology, for example. However, all AI methods should be subject to stricter legal supervision than already established methods, and it must be clear which tasks AI can and cannot take on. It must also be clarified where certifications and controls, as well as service and advice for AI systems, are necessary. In all medical activities, however, the autonomy of the patient must remain paramount.

Big tech companies such as Microsoft in particular advertise that reducing bureaucracy will give doctors more time for their patients and allow them to show more empathy. However, it is more likely that more cases will simply be processed instead; on this point, all the doctors were certain. In all the scenarios, the participants emphasized that doctors would like to have more time, and therefore more empathy, for their patients, but that the completely overburdened healthcare system does not allow this.

(mack)