Analyses: Two to three million sexualized deepfakes generated by Grok

Criticism of AI-generated deepfakes from Grok was significant, but the full extent was previously unclear. Now, two organizations counted for the first time.

Grok app on a smartphone (Image: Ascannio/Shutterstock.com)


The AI chatbot Grok generated between 1.8 and 3 million sexualized deepfakes in less than two weeks, primarily of women – but also of men and children. This is the result of two independent analyses, now published, by The New York Times and the non-profit Center for Countering Digital Hate (CCDH). Both examined samples of the more than four million images Grok publicly generated over the New Year period. The US newspaper states that even a conservative interpretation of its results puts the number of sexualized images of women at 1.8 million, while the CCDH estimates the number of sexualized depictions across all genders at over three million. More than 23,000 reportedly show children.

The CCDH has explained its methodology in more detail, stating that the sample was analyzed with AI assistance. Its presentation of the results includes examples of the types of sexualized depictions found. The organization also names Selena Gomez, Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice, Nicki Minaj, Christina Hendricks, Millie Bobby Brown, and Kamala Harris, among others, as recognizable in the photorealistic images. The researchers additionally found several examples of underage actresses depicted in extremely small bikinis. In one case, a student's selfie was digitally "undressed," and the image remained viewable days later – as did numerous other flagged images.


The two analyses show how rapidly sexualized deepfakes proliferated on X within just a few days. They triggered international criticism of Grok, the company behind it, xAI, the microblogging service X, and its owner Elon Musk. The billionaire initially denied the problem, then restricted image generation to paid accounts, and finally promised that the feature would be deactivated entirely. How rigorously this will be enforced, however, remains unclear. The CCDH analysis suggests that, at peak times, 190 photorealistic sexualized deepfakes were created per minute.

(mho)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.