OpenAI reads along with ChatGPT in case of doubt – and has a line to the authorities

ChatGPT conversations that indicate a user could harm others end up in front of human reviewers and, potentially, law enforcement.


Following a tragic incident in which a teenager took his own life, and for which his parents now hold ChatGPT partly responsible, OpenAI has responded in a blog post. In it, the company announces new safety measures and explains what should not have happened in the first place. The post also reveals, however, that in case of doubt OpenAI apparently reads along.

Specifically, it says: “When we discover users who intend to harm others, we forward their conversations to specialized pipelines where they are reviewed by a small team trained in our Acceptable Use Policy and authorized to take action, including blocking accounts. If human reviewers determine that a case poses an imminent danger to others, we may refer it to law enforcement.”

heise online has asked OpenAI whether this means that all conversations are scanned, regardless of whether users pay for a subscription (which comes with the assurance that their chats will not be used for AI training) or use the free version; the answer is still pending. It is also unclear which law enforcement authorities would be involved: those where the user lives, or those at OpenAI's headquarters? The former would only be possible if the user shared their location.

People who talk about harming themselves in chat stay off law enforcement's radar. OpenAI writes in the blog post: “We currently do not report cases of self-harm to law enforcement to respect people's privacy, as interactions with ChatGPT are particularly private.” However, this sentence also suggests that OpenAI is at least aware of such cases. Normally, ChatGPT is supposed to steer these conversations in a helpful direction, for example by pointing to support services.


OpenAI itself concedes, however, that its safety measures can still be improved. Long conversations, for example, apparently make it more likely that ChatGPT loses track of what it is supposed to do: “ChatGPT may initially correctly refer to a suicide hotline when someone first expresses suicidal intentions, but after many messages over a long period of time, there could eventually be a response that goes against our safety precautions.”

In addition to further improvements to its systems, OpenAI is also planning parental controls.

Note: In Germany, help and support for problems of all kinds, including bullying and suicidal thoughts, is available at telefonseelsorge.de and by calling 0800 1110111. The children's and youth helpline “Nummer gegen Kummer” can be reached at 116 111. Austria also offers free help services, including the children's helpline at 0800 567 567 and, especially for children and teenagers, Rat auf Draht at 147. In Switzerland, the same number, 147, reaches Pro Juventute.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.