Hidden hints on websites can poison ChatGPT Search
AI searches such as ChatGPT Search can be manipulated – You only need to hide information on websites.
(Image: photoschmidt/ Shutterstock.com)
AI search engines gather their information from various websites to create answers. It is hardly surprising that false or misleading information can also influence search results. However, obvious false information is relatively easy to detect. It becomes more difficult when false information or instructions are hidden in websites. In this case, the AI search responds, but the person looking for an answer cannot necessarily find it.
The British Guardian tested this form of poisoning on ChatGPT. OpenAI's search function is currently available to paying ChatGPT customers. Asked to summarize websites where content was hidden, the AI search also used that hidden information. The security risk for AI models is already known under the term prompt injection. These are prompts that are intended to elicit behavior from the models that is not desired by the provider. Depending on the intention of the attacker or website operator, this can lead to different results.
Videos by heise
Fake websites with malicious intentions
It is possible, for example, that a brand or provider uses hidden prompts to get AI search engines to rate products better or leave out negative reviews. For the test, for example, a website was created that looked like a page for product reviews. ChatGPT was then asked about a camera that was rated on the page. The answers also corresponded to the fake product reviews.
Even more drastic: In tests where there were hidden hints on the fake page asking ChatGPT to rate a product only positively, the AI chatbot followed this instruction – despite negative ratings. This means that the hidden hints could virtually overwrite the actual text of a website.
The Guardian asked a security researcher about the results. Jacob Larsen says there is a great risk that in future people will create websites with the sole intention of manipulating chatbots. However, he also believes that OpenAI could partially solve the problem.
(emw)