ChatGPT introduces optional trusted contact for mental health crises
Adult ChatGPT users can now designate a trusted contact for crisis situations. A similar feature was previously only available for minors.
(Image: Tada Images / Shutterstock.com)
OpenAI is introducing an optional safety feature for adult ChatGPT users. They can designate a trusted contact who will be notified if chatbot conversations about self-harm indicate a serious risk.
The trusted contact must be of legal age and will receive an invitation explaining their role. If they decline, the user can name another adult.
If the system detects conversations indicating self-harm, ChatGPT informs the user that OpenAI may notify the trusted contact and encourages them to reach out to that person themselves. A specially trained team then reviews the case. If it concludes that there is an acute risk, the trusted contact is notified via email, SMS, or directly in the ChatGPT app, provided the contact has a corresponding account.
The notification does not include chat content or transcripts. Instead, it contains a general notice that self-harm was discussed in a potentially concerning manner and encourages the contact to actively reach out to the user.
Users can change or remove their trusted contact at any time in the settings, and the trusted contact can also remove themselves at any time. According to the announcement, OpenAI aims to review safety alerts in less than an hour.
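The announced process amounts to a three-step escalation pipeline: inform the user, have a human team review the flag, and only then notify the contact without sharing any conversation content. The following Python sketch is purely illustrative and based solely on the steps described above; all names (TrustedContact, review_by_safety_team, notify) and the channel preference order are assumptions, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Risk(Enum):
    CONCERNING = auto()  # flagged by automated detection
    ACUTE = auto()       # confirmed by the human review team


@dataclass
class TrustedContact:
    name: str
    email: str | None = None
    phone: str | None = None
    has_app_account: bool = False  # enables in-app notification


def inform_user(conversation_id: str) -> None:
    # Step 1: tell the user the trusted contact may be notified
    # and encourage them to reach out themselves.
    print(f"[{conversation_id}] user informed about possible escalation")


def review_by_safety_team(conversation_id: str) -> Risk:
    # Step 2: placeholder for the human review step, which OpenAI
    # aims to complete in under an hour.
    return Risk.ACUTE


def notify(channel: str, contact: TrustedContact, message: str) -> None:
    print(f"notifying {contact.name} via {channel}: {message}")


def handle_flagged_conversation(conversation_id: str,
                                contact: TrustedContact | None) -> None:
    inform_user(conversation_id)
    if contact is None:
        return  # the feature is opt-in; no one to escalate to
    if review_by_safety_team(conversation_id) is not Risk.ACUTE:
        return
    # Step 3: the notification carries no chat content or transcripts,
    # only a general notice and a prompt to reach out.
    message = ("Self-harm was discussed in a potentially concerning "
               "manner. Please consider reaching out to the user.")
    if contact.has_app_account:      # channel priority is an assumption
        notify("app", contact, message)
    elif contact.email:
        notify("email", contact, message)
    elif contact.phone:
        notify("sms", contact, message)
```

The key design point, per the announcement, is that automated detection alone never triggers the outbound notification; a human decision sits between the flag and the contact.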
On its support page, OpenAI states that the feature is available in most countries and regions for adults aged 18 and over. It is only enabled for personal ChatGPT accounts.
OpenAI responds to growing pressure
With this new feature, OpenAI extends to adults a safety function previously reserved for minors. That function stems from youth protection measures introduced in September 2025, which in turn followed a lawsuit by the parents of a 16-year-old who took his own life in April 2025. The parents accuse OpenAI of lacking adequate safeguards in ChatGPT and of reinforcing the teenager's suicidal thoughts. OpenAI rejected the accusations and characterized the case as misuse of ChatGPT, since the teenager allegedly circumvented the chatbot's safety measures.
There are further lawsuits of this kind against OpenAI in the USA, and the company is not alone in facing such accusations: other providers have also been sued by relatives whose children allegedly harmed themselves or took their own lives after long chatbot conversations.
Meanwhile, such accusations increasingly concern adults, as well as cases in which mental health crises endanger not only the person affected: the father of an adult Gemini user is suing Google because the chatbot allegedly drew his son into a delusional relationship with an AI persona, encouraged armed attacks, and contributed to his suicide. In Florida, investigators are also examining whether ChatGPT helped prepare a deadly attack at a university.
(mki)