OpenAI promotes ChatGPT as a health assistant

OpenAI positions ChatGPT as a health tool, with use cases such as calculating nutritional values from a photo and live coaching during walking workouts.

[Image: Woman's hand on a smartphone (fizkes/Shutterstock.com)]


Despite data protection concerns, every second person trusts chatbots with health questions, according to a recent study in Germany. Under the motto “Take control of your health with confidence,” OpenAI is now showcasing use cases for various users in a promotional video: breaking down medical information, preparing for or following up on a doctor's appointment, living with chronic illness, a quick workout in between, or long-term support during cancer therapy for patients and their loved ones. In all of these scenarios, the chatbot is meant to help users manage their health and general well-being.


A central role is played by a question raised in the video: “What can I easily do to feel better?” The protagonists answer it by turning to ChatGPT for their health concerns as a matter of course.

A doctor would prescribe medication, one of the male protagonists says, but he himself can “do so much more” to support his treatment, and he promptly grabs his smartphone to have ChatGPT estimate the calorie content and inflammatory potential of his restaurant meal from a photo.

This is how OpenAI envisions the world after Dr. Google: A mother seeks balance between family and self-care. The chatbot creates a short workout and accompanies her on a walk: “All without losing sight of reality.” Another user has clothing checked for harmful substances and says, “Knowing that I have my care in my hands gives me confidence.” A user analyzes health data with the chatbot, understands findings better, and plans doctor's appointments more effectively. A family prepares conversations about their child's cancer with its help – to make effective use of time with doctors.


In fact, OpenAI and ChatGPT have faced repeated criticism in recent months. Among other cases, a lawsuit is currently pending from parents who hold OpenAI partially responsible for the suicide of their 16-year-old son.

The fact that ChatGPT can harm users' (mental) health has prompted a series of changes to OpenAI's models in recent months. For sensitive topics, ChatGPT automatically routes conversations to a model with stricter safety functions, and the safety filters had to be adjusted several times before ChatGPT now responds almost always as OpenAI intends.

Chatbots are known to have given users questionable health advice in the past. One man, for example, followed ChatGPT's dietary advice and ended up in the hospital with psychosis caused by bromide poisoning. Other users had delusional ideas reinforced solely through their conversations with the chatbot. Even simple typos can lead chatbots to give incorrect advice.

(dmk)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.