Dr. AI: Confidence increases, concerns remain
Many people expect intelligent chatbots to improve medical diagnoses, while around a third are concerned about their use. That is the finding of a Bitkom survey.
The use of artificial intelligence (AI) in diagnostic decision-making is currently the subject of much discussion; in radiology and pathology, for example, it has been in use for some time. According to a Bitkom survey, 71% of respondents are in favor of doctors receiving support from AI "whenever possible". In addition, 51% could imagine asking an AI for a second opinion in the future, and 6% of the more than 1,000 respondents have already asked symptom-checker apps or chatbots such as ChatGPT about their ailments.
A large majority of Germans (85%) see AI as a great opportunity for medicine, and 69% are in favor of dedicated funding for it. Almost half (47%) would even trust AI to make a better diagnosis than humans in certain cases, although the survey results do not make clear which cases these might be.
"Algorithms can analyze huge amounts of medical data, recognize patterns and thus diagnose diseases at an early stage that are sometimes difficult for humans to recognize, especially in the case of rare diseases where experience and routine are lacking," says Bitkom Vice President Christina Raab. According to the survey, 40% are also in favor of the idea of using health data to train AI. However, 79% of respondents also called for strict regulation of the use of AI in medicine.
At the same time, 35% of respondents expressed fears about the use of AI in medicine, and a large majority (79%) called for strict regulation. "What is important is an opportunity-oriented regulatory framework and the inclusion of AI in medical and nursing training," says Raab.
Questions of responsibility much discussed
Responsibility and trust in medical AI systems were also the topic of a recent event organized by the Lower Saxony Medical Association and Hannover Medical School. Participants there discussed the role of AI in the doctor-patient relationship and emphasized the importance of patient autonomy.
In practice, AI systems often turn out to be a "black box" whose decision-making is not always comprehensible. This problem was also addressed by Prof. Eva Winkler, Chair of the Central Ethics Commission, who warned against placing too much trust in technology without human review, a tendency she described as computer paternalism.
(mack)