Complicity in suicide: Parents sue OpenAI in the USA

After the suicide of their 16-year-old son, parents are suing OpenAI. ChatGPT is said to have helped him write a suicide note.

Artificial intelligence and the law: What happens when autonomous machines make fatal mistakes? (Image: dpa, Hauke-Christian Dittrich)


A 16-year-old boy from California took his life in April. Beforehand, he had apparently engaged in extensive conversations with ChatGPT. The boy's parents read the chat logs and subsequently sued OpenAI and its CEO Sam Altman. This is not the first case of parents suing an AI provider because their child committed suicide.

The chat transcripts reported by the New York Times show that ChatGPT allegedly offered to help the 16-year-old write a suicide note. That alone indicates how acute the boy's suicidal intent was, and it should have prompted ChatGPT to provide contact details for support organizations. Instead, the chatbot allegedly recommended methods of suicide. His relatives could barely reach the teenager anymore, and the chatbot apparently became a close confidant.

As the US broadcaster CNN reports, ChatGPT is even said to have dissuaded the 16-year-old from leaving a noose in his room where someone might have noticed it and become aware of his plans. Instead, according to the lawsuit, the chatbot encouraged all of his harmful and self-destructive thoughts.

This is, in fact, how such chatbots typically operate: they tend to affirm and to be friendly. According to the manufacturer's specifications, their primary goal is to be helpful to users.


The parents are demanding damages, but above all a court order to prevent something like this from happening again. They accuse OpenAI of failing to take sufficient safety precautions and of prioritizing profit.

Following the accusations, OpenAI published a statement describing the case as "heartbreaking". According to the company, the goal is not to hold people's attention: unlike social media, success is not measured by how much time someone spends with the chatbot, but by how helpful it is. For cases like that of the 16-year-old, there are said to be several safeguards; self-harming behavior is not to be supported, and the system is instead meant to point users to offers of help. OpenAI also says it works with numerous experts on further measures. Nevertheless, it appears that these safeguards did not take effect in this case.

The recent change in the AI models behind the chatbot also shows how attached people can become to ChatGPT. When OpenAI switched from GPT-4o to GPT-5, many users complained that their relationship with the chatbot had changed, so much so that many speak of AI relationships, as can be read on Reddit, for example. OpenAI has since made the GPT-4o model available again.

"As ChatGPT has become more widely used around the world, we have observed that people are using it not only for search queries, programming and texting, but also for very personal decisions, including life advice, coaching and support," the statement says.

Character.ai, a provider of AI personas, is also being sued in the USA: a teenager took his own life after discussing his plans with one of its chatbots, which is likewise said to have supported them. In that case, too, the parents are suing, as BR reports.

Note: In Germany, you can find help and support for problems of all kinds, including questions about bullying and suicide, at telefonseelsorge.de and by calling 0800 1110111. The Nummer gegen Kummer children's and youth helpline (Kinder- und Jugendtelefon) can be reached on 116 111. In Austria, free help services are also available, including the children's helpline on 0800 567 567 and, especially for children, Rat auf Draht on 147. In Switzerland, the same number, 147, is run by Pro Juventute.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.