Cyberattack? OpenAI investigates potential leak of 20 million users' data
Cybercriminals claim to have stolen private data from millions of OpenAI accounts. Researchers are skeptical; the ChatGPT maker is investigating the case.
Artificial intelligence is based on algorithms; the machine is supposed to make informed decisions. (Image: dpa, Felix Kästle)
Has OpenAI suffered a massive cybersecurity failure? Criminals are boasting in a darknet forum that they have stolen and passed on the login details of millions of ChatGPT users. Threat actors often make exaggerated claims in such underground marketplaces to attract attention or lure buyers. The potential scale of the advertised data leak, however, would be huge, and alarm bells are ringing among IT security experts. "We take these allegations seriously," an OpenAI spokesperson told Decrypt magazine. The case is currently being investigated.
According to the security portal GBHackers, a darknet user operating under the pseudonym emirking posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts". He spoke of a "gold mine" and offered potential buyers sample data containing email addresses and passwords. According to the report, the complete data set was on sale "for just a few dollars". OpenAI is now likely facing a mass review of accounts.
"We have not seen any evidence that this is connected to a compromise of OpenAI systems to date", the ChatGPT manufacturer said in an initial response. Security experts are also unsure whether the personal data of chatbot users has been compromised.
Attacker had access to internal Slack messages
Mikael Thalen of the Daily Dot reported finding invalid email addresses in the alleged sample: at least two addresses did not work. So far, he wrote, there is no evidence "that this alleged OpenAI intrusion is legitimate". The user's only other post in the forum advertised malware for stealing login data; the thread has since been deleted.
According to Decrypt, OpenAI's track record on IT security is not particularly good. If the claims are legitimate, this would be the third major security incident for the AI company since the release of ChatGPT. Last year, an attacker gained access to the company's internal Slack messaging system and, according to the New York Times, got hold of details about the design of OpenAI's AI technologies. Before that, in 2023, a simple bug reportedly enabled cybercriminals to access the private data of the company's paying customers.
The potential damage from the data leak under investigation would be enormous: millions of users worldwide rely on OpenAI's platform and tools such as ChatGPT and GPT-based integrations, and their accounts can contain confidential content. Hijacked accounts could expose sensitive company projects or critical communications.
(emw)