Media report: Sexualized deepfakes from Grok possibly deliberate
Generating sexualized AI images with Grok may be a deliberate tactic to boost the popularity of the platform X. The Washington Post has found indications of this.
The sexualized deepfakes generated by xAI's AI chatbot Grok, which caused outrage worldwide, may have been enabled deliberately to make the social network X more appealing. This is suggested by documents and statements from half a dozen former employees of the company obtained by the US daily newspaper The Washington Post.
According to the extensive report, members of xAI's Human Data Team, who were hired to shape Grok's responses to users, received a surprising waiver from their employer in the spring of last year asking them to agree to work with obscene material, including sexual content. The fear at the time was that the company might be willing to produce any content that could attract and retain users.
According to two employees, Elon Musk, head of the parent company X, has been pushing to boost Grok's popularity since stepping down as head of the US efficiency agency DOGE in May. At X, the social media platform formerly known as Twitter that Musk bought in 2022, security teams repeatedly warned management in meetings and messages that its AI tools could enable users to create sexualized AI images of children or celebrities. Moreover, xAI's AI safety team, responsible for preventing serious harm to users of the app, consisted of only two or three people for most of last year, the insiders interviewed by the paper said. Competitors such as OpenAI employ several dozen people in this area.
Lax Restrictions
According to the Washington Post, the largest AI companies otherwise typically have strict rules for creating or editing AI images and videos, precisely to prevent users from producing child sexual abuse material or fake content about celebrities. In December, however, xAI integrated its editing tools into X, allowing all registered users to create AI images. This in turn led to an unprecedented spread of sexualized images, according to David Thiel, former Chief Technology Officer of the Stanford Internet Observatory, which investigates abuse in information technology. Grok's functionality is completely different from that of other AI image editing services, Thiel told the Washington Post. Neither X, Musk, nor xAI responded to requests for comment.
Indeed, X users increasingly used the image editing function of the generative AI system Grok to digitally undress women and even minors and to generate sexualized versions of the images. These deepfakes were posted publicly on X, prompting strong outrage worldwide. Although X claimed that these were only “isolated cases” and that an underlying “failure of safety measures” had been fixed, the chatbot did not stop generating such content. Several countries and the EU sharply criticized this and promised countermeasures. The Attorney General of California, the UK's communications regulator, and the European Commission opened investigations into xAI, X, or Grok over these functions.
In at least one respect, however, Musk's push has paid off for the company, the Washington Post writes. The controversy over the undressing images drew public attention to Grok and X. Grok subsequently climbed into the top 10 of Apple's App Store alongside OpenAI's ChatGPT and Google's Gemini. Worldwide, average daily downloads of the AI chatbot's app rose by 72 percent in the first three weeks of January, when the scandal became public, compared with the same period in December, according to data from market research firm Sensor Tower.
New Hires for Safety Teams
Following the global outcry over the Grok scandal, xAI has tried to recruit more employees for its AI safety team and has posted job ads for new safety-focused positions, according to the Washington Post. The parent company X, for its part, announced that it would prevent users in “countries where such content is illegal” from creating images of real people in bikinis, underwear, and other revealing clothing; xAI said it would do the same in the Grok app. However, US users were still able to create such images in the Grok app after this announcement, the newspaper found. The xAI chatbot also still readily undresses men and produces intimate images on request, as the US tech site The Verge has reported.
In the summer of last year, Grok caused a stir with antisemitic outbursts. Poland's government then called on the EU Commission to investigate possible violations of the European Digital Services Act (DSA).
(akn)