TikTok's AI avatars speak Hitler's words: Technical error

TikTok's AI avatars could be used without guardrails. And this is not the only recent AI fail.

Letters made of wooden blocks show the words F(AI)L.

(Image: Shutterstock/FrankHH)

This article was originally published in German and has been automatically translated.

Quotes from Hitler, or videos giving dangerous tips such as drinking bleach: these and other undesirable posts originate from TikTok's AI avatars. Apparently, the social media platform shipped a version that came without guardrails. It is said to have been an internal version of the AI avatars, which is no longer available. Nevertheless, the videos created with it remain.

Specifically, this concerns TikTok's "Symphony Digital Avatars" feature, which was released only last week. It is intended for companies to use in advertising: they can have the avatar of a paid actor say something, or create an AI avatar themselves to convey brand messages, in ten different languages. There are, of course, supposed to be limits; the avatars should not be able to say just anything.

CNN initially reported that a version that does say anything was accidentally published. Users had posted videos with extremely questionable content. Creating them should have required a TikTok account with access to the advertising formats, but this check apparently also failed, and people with private accounts had access. Shortly after the first videos with questionable content appeared, TikTok replaced the faulty version with one that has the regular guardrails.

CNN also noticed that the videos were not marked with a watermark, which is supposed to be applied automatically to all AI-generated content on TikTok. TikTok deletes videos with undesirable or even criminal content, for example those in which excerpts from Hitler's "Mein Kampf" are read aloud; they violate the terms of use. Users who have produced such content with the AI tool are also in breach of the guidelines. A TikTok spokesperson told The Verge that it was a "technical error" and that only a few people had been able to use the function, which was intended for internal testing.

It's not just TikTok that has to deal with AI fails. While the guardrails were obviously missing here, problems caused by the existing guardrails of AI models also crop up time and again. Google's Gemini, for example, depicted regular German soldiers from the Second World War as people of African-American and Asian origin. Adobe's Firefly created similar images.

Screenshots of AI models failing at simple tasks also circulate regularly. Recently, a study was published that examined how various large language models (LLMs) handle a task that most primary school children can solve. Over the past few days, people have been posting answers from different language models on social networks to simplified versions of a logical reasoning question. The original is a river-crossing puzzle in which a farmer wants to get a cabbage, a sheep and a wolf to the other side of a river, but does not have room for all of them in his boat. The simplified versions that people have been asking go something like this: a farmer has two chickens and wants to take them to the other side of the river. His boat has room for him and two animals. How does he get both chickens across? The answer is trivial, he simply takes both chickens in a single trip, yet language models usually fail at the task.
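The simplified puzzle that trips up the language models has a one-trip solution, which even a naive brute-force search finds immediately. The following sketch (function name and setup are illustrative, not taken from the study) searches for the minimum number of crossings:

```python
from collections import deque
from itertools import combinations

def min_crossings(animals=("chicken1", "chicken2"), capacity=2):
    """Breadth-first search for the fewest boat trips needed to ferry
    all animals across a river. The farmer rows on every trip;
    `capacity` counts animals only (the farmer always fits)."""
    start = (frozenset(animals), "left")  # (animals on left bank, farmer's bank)
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (left, farmer), trips = queue.popleft()
        if not left and farmer == "right":
            return trips  # everything is across
        bank = left if farmer == "left" else frozenset(animals) - left
        # The farmer may take 0..capacity animals from his current bank.
        loads = [frozenset(c) for n in range(capacity + 1)
                 for c in combinations(bank, n)]
        for load in loads:
            new_left = left - load if farmer == "left" else left | load
            state = (new_left, "right" if farmer == "left" else "left")
            if state not in seen:
                seen.add(state)
                queue.append((state, trips + 1))
    return None  # unreachable for these inputs

print(min_crossings())  # → 1: both chickens fit in a single crossing
```

With the classic three-item version (and no eating constraints modeled), the same search needs three trips; the point of the social media posts is that the two-chicken variant requires no search at all.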

Providers of LLMs are convinced of their models' reasoning abilities, and the language models do largely perform well in the relevant benchmarks, i.e. tests of how well they grasp logical relationships. Practical problems like these, however, suggest that they are "stochastic parrots" after all, as some leading AI researchers once put it: according to this view, LLMs merely reproduce probable sequences of words without understanding them.

ChatGPT is also currently said to refuse to reproduce the Scottish national anthem, which is about a victory over English troops. The AI chatbot does not say exactly why it censors the text.

(emw)