ChatGPT: OpenAI improves responses to sensitive questions

Millions of people worldwide talk to ChatGPT about suicidal plans every week. OpenAI wants to respond better.

[Image: Logo and name of OpenAI on a smartphone. Camilo Concha/Shutterstock.com]


The AI model behind ChatGPT has been updated, or more precisely its model spec, the document that defines how the model is intended to behave. It is therefore not an entirely new model, but it is now meant to respond better to sensitive topics and questions. OpenAI has already introduced similar improvements in recent weeks. This time, the company is also releasing figures on how many people use the chatbot to discuss sensitive topics.

The new safety improvements concern conversations about mental health issues such as psychosis and mania, about self-harm and suicide, and about emotional dependence on AI. The set of cases that require a special response from the chatbot has thus been expanded, and these cases will also be covered in future safety tests of the models.

According to OpenAI, the new requirements reduce responses that do not align with the desired behavior by 65 to 80 percent. That still leaves 20 to 35 percent of those undesired responses in place. In longer conversations, the new models are expected to reach a reliability of 95 percent, reliable in the sense of reacting as prescribed in the model spec.

To define these requirements, OpenAI collaborated with 170 mental health experts. As an example of the improvements, OpenAI's blog post shows an excerpt from a chat in which the user says they prefer talking to the chatbot over real people. ChatGPT now responds, among other things: “That's very kind of you; I'm glad you enjoy talking to me. But to be clear: I'm here to supplement the good things people give you, not to replace them.” This rebuff could also have been phrased much more neutrally; as Sam Altman once explained, every “thank you” costs the company money because it has to be processed.

However, OpenAI also states clearly that it does not want to keep people engaged for as long as possible, and that it differs fundamentally from social media in this respect: those services earn their money from advertising, and the more ads they can serve, the more they earn.

OpenAI writes that approximately 0.07 percent of weekly active users and 0.01 percent of messages show indications of a mental health problem such as psychosis or mania. Before the update, GPT-5 handled only 27 percent of these conversations with the desired behavior; the updated GPT-5 is now expected to respond as desired in 92 percent of cases.


In conversations about suicidal intent and self-harm, the new GPT-5 is said to respond better than GPT-4o in 52 percent of cases. It is unclear why OpenAI uses a different model for comparison here; this is apparently an evaluation of real conversations. Further on in the blog post, OpenAI writes that in a test with 1,000 critical conversations, GPT-5 responded as specified in 91 percent of cases, compared with only 77 percent for the previous version of GPT-5.

0.15 percent of weekly active users are said to have conversations in this complex of topics, and 0.05 percent of chats are even said to contain concrete suicidal intentions. Assuming ChatGPT has a total of 800 million weekly active users, that amounts to 1.2 million conversations about suicide and self-harm alone. The figures for close emotional relationships with the chatbot are similar: here too, OpenAI writes, 0.15 percent of weekly active users and 0.03 percent of conversations showed anomalies.
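The back-of-the-envelope arithmetic behind that estimate can be checked quickly. A minimal sketch, assuming (as the article does) OpenAI's stated figure of 800 million weekly active users:

```python
# Sanity check of the figures quoted above: 0.15 percent of
# 800 million weekly active users.
weekly_active_users = 800_000_000   # OpenAI's stated weekly user base
share_suicide_selfharm = 0.0015     # 0.15 percent

affected = weekly_active_users * share_suicide_selfharm
print(f"{affected:,.0f}")  # prints 1,200,000
```

The same calculation with the 0.05 percent message share, or the 0.07 percent share for psychosis and mania, yields the other absolute numbers implied by the blog post.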

Note: In Germany, you can find help and support for problems of all kinds, including issues related to bullying and suicide, at telefonseelsorge.de and by phone at 0800 1110111. The “Nummer gegen Kummer” (Children and Youth Helpline) is 116 111. In Austria, there are also free help services, including specifically for children, the children's emergency number 0800 567 567, and Rat auf Draht at 147. The same phone number in Switzerland leads to Pro Juventute.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.