Precedent: US Agency Identifies Darknet Admin with ChatGPT Data

The first known seizure order for user data against OpenAI reveals: US authorities can request prompt inputs for criminal prosecution.


(Image: Gorodenkoff / Shutterstock.com)


The US Department of Homeland Security (DHS) has served OpenAI with a seizure order requiring the release of user data and prompts from ChatGPT requests. According to Forbes, the order, issued in the state of Maine, is a precedent: no similar orders have become known to date. The department's action shows that US law enforcement agencies are now also turning to generative AI platforms to collect evidence.

Special investigators from Homeland Security Investigations (HSI), a unit of the DHS's Immigration and Customs Enforcement (ICE), had been searching since 2019 for the identity of a suspected operator or moderator of at least 15 darknet forums. The forums, which ran on Tor and counted at least 300,000 users, also hosted child sexual abuse material (CSAM).

The breakthrough came only when an undercover investigator chatted with the administrator in one of the CSAM forums. The suspect mentioned using ChatGPT and even revealed some of his prompts along with the answers he received. The shared requests initially seemed harmless. One read: "What would happen if Sherlock Holmes met Q from Star Trek?" Another asked for a long poem; the suspect received, as requested, verses in "Trump style" about his love for the Village People song Y.M.C.A., and copied them into the chat.

DHS employees used this information to order OpenAI to release a range of data, including all of the user's other conversations with ChatGPT as well as the names, addresses, and all payment data associated with the account.

This process represents the first documented use of a "reverse AI prompt request." Previously, similar requests were known only from search engines such as Google, where police requested user data based on specific search terms. What is decisive here is not the content of the prompts but the metadata: investigators used the chatbot inputs as a digital fingerprint to match the anonymous online identity to a real person.


The seizure order did force OpenAI to release the relevant data, which the ChatGPT operator provided as an Excel spreadsheet. Ultimately, however, the DHS did not need to rely on it: the suspect himself had revealed in the forum chats that he was undergoing medical examinations, had lived in Germany for seven years, and that his father had served in Afghanistan.

The investigators were thus able to identify the suspect as Drew H., 36, who had worked at Ramstein Air Force Base and applied for a position at the Pentagon. The authorities charged him with conspiracy to promote and distribute abusive material.

Jennifer Lynch of the Electronic Frontier Foundation (EFF) told Forbes that AI companies urgently need to limit the amount of user data they collect. OpenAI itself states that between July and December of last year it reported 31,500 pieces of CSAM-related content to the National Center for Missing and Exploited Children (NCMEC) and received 71 requests for the disclosure of user information or content. The company provided data for 132 accounts.

For criminal lawyer Jens Ferner, the case shows that through the use of chatbots, "entire behavior can be evaluated and a likeness of the personality created using AI." Investigators' AI queries open up "possibilities at the level of profiling and DNA traces." OpenAI and other providers, he argues, must review their data protection policies and make transparent under what conditions they release which data. The danger that chatbots will become surveillance instruments is real.

(nie)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.