OpenAI: Suicide assistance is misuse of ChatGPT

After a 16-year-old committed suicide, his parents are suing OpenAI for complicity. However, the company believes its guidelines were violated.



The problem, as is well known, always lies with others. That also seems to be the stance in the case of a 16-year-old from the USA who died by suicide. The teenager had previously held extensive conversations with ChatGPT; among other things, the chatbot offered to help him write a suicide note. His parents have sued OpenAI over the chats, alleging complicity in the suicide and a lack of adequate safety measures in the chatbot. OpenAI, however, now claims that the 16-year-old misused the chatbot.

OpenAI therefore does not see itself as responsible, the company writes in a blog post addressing the case directly. OpenAI also notes that it must disclose information about the case for its defense, which apparently makes it uncomfortable – the company seems aware that this can hurt the bereaved. But the court must receive all the information, which, according to OpenAI, the complaint does not provide.

As TechCrunch, among others, reports, the teenager is said to have communicated intensively with ChatGPT for more than nine months. The chatbot reportedly urged him more than 100 times to seek help. The parents' lawsuit also states that the boy was able to circumvent the safety precautions, so that ChatGPT provided him with "technical specifications for everything from overdose to drowning to carbon monoxide poisoning." ChatGPT is said to have written about a "beautiful suicide."


OpenAI now seizes on this: because the 16-year-old circumvented the safety measures, he violated the terms of use, the company argues, and it is therefore no longer responsible. OpenAI also points out that the ChatGPT FAQ states that users should not rely on what the chatbot says.

In the USA, there are further lawsuits against OpenAI because teenagers harmed themselves after long conversations with ChatGPT. One problem is that AI chatbots are designed to affirm their users and tend to validate almost any behavior. This is partly due to how the underlying AI models are built: they are trained with positive reinforcement, which generally leads to better answers, but also to more problematic ones.

OpenAI has already announced a series of improvements to ChatGPT. The chatbot is intended to react more sensitively to requests indicating mental health issues. This means that OpenAI cannot completely deny co-responsibility after all.

Other providers are also working on the safety of their chatbots, especially for young people and people in distress. Meta AI, for example, is said to have been rather too permissive with sexual content. In the USA, authorities are therefore already investigating how harmful AI chatbots can be.

Note: In Germany, you can find help and support for issues of all kinds, including issues related to bullying and suicide, at telefonseelsorge.de and by phone at 0800 1110111. The "Nummer gegen Kummer" (children's and youth helpline) can be reached at 116 111. In Austria, there are also free support services, including the child emergency hotline 0800 567 567 and "Rat auf Draht" at 147, especially for children. The same phone number in Switzerland leads to Pro Juventute.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.