OpenAI: Five covert influence operations using AI disrupted

Actors from Russia and other countries have attempted to use OpenAI's AI tools to exert covert influence online. OpenAI has disrupted these operations.

ChatGPT app on a smartphone

(Image: Tada Images/Shutterstock.com)

This article was originally published in German and has been automatically translated.

OpenAI says it has dismantled five covert influence operations in the past three months in which actors from Russia, China, Iran, and Israel are said to have used the company's AI technology, the US company has announced. According to the report, OpenAI observed that those responsible used AI to generate mainly text, and in some cases images, in quantities that would not have been possible for humans alone. The actors mixed the generated content with other material and then distributed it on various sites across the internet. However, the posts received little attention. The operations were attempts to exert political influence by steering debates on the war in Ukraine, the war in Gaza, and the elections in India.

OpenAI attributes two of the detected influence operations to actors in Russia; according to the company, one operation each was carried out from China, Iran, and Israel. According to the summary, OpenAI's AI models were used, among other things, to automatically generate "political comments" for Telegram and multilingual posts for various sites. In one case, the AI technology was also used for programming tasks. Those responsible, who remain unidentified, also used the AI technology to conceal their identity; OpenAI does not provide further details on this. None of the five campaigns was successful.

With this disclosure, the AI company says it wants to ensure greater transparency around the misuse of generated content. In a detailed report, OpenAI explains how the operations were discovered and lists examples of AI-generated content. The company also assures that it is working on measures to make such campaigns more difficult. It has already observed that its AI models "repeatedly" refused requests from the actors. At the same time, the company says its own AI technology makes it possible to complete investigations into such operations much more quickly than before. OpenAI also pledges to take action against such operations in the future.

(mho)