Rampage in Canada: OpenAI had indications of possible danger from the perpetrator

In Canada, a trans woman killed eight people. Months prior, she made alarming requests to ChatGPT, which were discussed at OpenAI.

OpenAI logo and name on a smartphone, with enormously enlarged red pixels in the background
(Image: Camilo Concha/Shutterstock.com)

After the rampage in the Canadian community of Tumbler Ridge, OpenAI acknowledged that it had detected conspicuous behavior by the mentally ill perpetrator in chats with ChatGPT months before the attack. The trans woman had written about gun violence over several days in June 2025, and these chats were flagged by OpenAI's automated systems, the Wall Street Journal (WSJ) reports, citing people familiar with the matter.

Employees then discussed reporting the incident to the authorities, the report further states. Ultimately, however, management decided against it. OpenAI did delete the account but saw no reason to involve the authorities, a spokeswoman told the WSJ. According to her, the chat logs did not indicate any "credible and imminent threat to the physical safety of others."

On February 10, the trans woman killed a total of eight people in Tumbler Ridge in the western Canadian province of British Columbia, most of them schoolchildren aged 11 to 13. The 18-year-old then killed herself. More than 20 people were also injured in the rampage, making it one of the bloodiest attacks in Canadian history. The affected community of Tumbler Ridge has only around 2,400 inhabitants and is located on the eastern slope of the Rocky Mountains, more than 1,000 kilometers north of Vancouver.


According to the Wall Street Journal's report, OpenAI only contacted the Canadian police after the rampage became known. The company is supporting the investigation, the spokeswoman said. Its AI models are trained to prevent users from causing harm in the real world. In this specific case, the chat logs were automatically flagged by a review system and reported internally. When handling such information, the company weighs the risk of violence against privacy concerns, the newspaper quotes her as saying. OpenAI also pointed out how burdensome "necessary police involvement" could potentially be for individuals and families.

(mho)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.