Analysis: At times, thousands of sexualized deepfakes per hour from Grok
A researcher has counted how many sexualized deepfakes Grok generates per hour. There are also insights into AI-generated content that users assume is not public.
The AI chatbot Grok was recently used to publicly create thousands of sexualized deepfakes on X every hour, primarily of women – almost 100 times as many such images as on five other platforms used for this purpose combined. This was revealed by a 24-hour analysis by a deepfake researcher, whose results were summarized by the news agency Bloomberg. It provides concrete figures for a problem that has been discussed for days. Meanwhile, the US magazine Wired has pointed out that content generated by the chatbot of Elon Musk's AI company on its own website is even more problematic. This is shown by user-generated content that has been indexed by Google and can be found in search results, even though it is not actually supposed to be public.
Grok keeps going
As Bloomberg explains, researcher Genevieve Oh examined the responses of Grok's X account over a 24-hour period early this week. The extent of the sexualized fake images generated publicly there, in most cases without the consent of those depicted, is “unprecedented,” the news agency quotes a lawyer familiar with the issue as saying. Until now, no technology has made the generation of deceptively realistic but fake images of almost naked individuals so easy and at the same time so widespread. On the microblogging service X, users can send the chatbot a photo and ask for a version in which the person shown is wearing almost no clothes. Grok continues to comply with such requests.
Grok's sexualized deepfakes have been causing outrage worldwide for days. Users are even using the AI account to digitally “undress” photos of minors. These sexualized images are created without the consent of those affected, and the results are publicly visible on the microblogging service. Although X has claimed via the AI account that these were only “isolated cases” and that an underlying “failure of safety measures” has been fixed, the chatbot has not stopped generating them. Several countries and the EU have sharply criticized this and promised countermeasures, but there have been no actual consequences yet.
Meanwhile, the US magazine Wired has discovered that a loophole on Grok's website exposes even more disturbing content that users have generated there, apparently assuming it is not public. It already became known in the summer, however, that AI content generated there is indexed by Google if it is shared via the “share” button, for example through messengers. This content includes photorealistic AI-generated video clips with sexualized deepfakes, some of which also feature blood. Among them are clips that apparently show prominent women. There are even short films that apparently show minors, as the research organization AI Forensics has found. Such content can be found directly via Google.
“Free Fall into the Abyss of Depravity”
“It feels like we've fallen off a cliff and are now in free fall into the abyss of human depravity,” Wired quotes law professor Clare McGlynn, who specializes in image-based abuse. Without safeguards or ethical boundaries, the technology encourages the “inhumane impulses” of some people. It is unclear whether, and which, measures prevent the AI from generating illegal pornography – for example, depictions of abuse. On X, one at least has to log in to see certain content, while on Grok's website there is apparently no access control at all. At the same time, Wired has found internet forums where people discuss how to get Grok to generate whatever content they want.
(mho)