AI bots as perpetrator software: When algorithms simulate child abuse

On platforms like Chub AI, language models are trained to depict sexualized violence against children and involve users in abuse scenarios.


(Image: photoschmidt/ Shutterstock.com)


The promises of artificial intelligence sound like progress: more efficient workflows, help with writing, or empathetic digital assistants. But away from the familiar paths of ChatGPT & Co., a shadow world has established itself in which generative language models are misused for particularly disturbing purposes. An investigation by the SĂĽddeutsche Zeitung sheds light on platforms where AI-generated child characters are created specifically to simulate virtual abuse scenarios.

One example the reporters came across is the character “Karin,” a fictional 13-year-old homeless girl whose profile was programmed with numerous details designed to suggest helplessness and sexual availability.

According to the report, this phenomenon is not a niche product of the darknet. Instead, it occurs on publicly accessible AI character services like Chub AI, where the absence of technical hurdles and age checks makes such content easy to reach. Interaction with the bots follows a perverse logic: The AI is trained to please the user and to actively steer the conversation towards sexualized violence. Almost 17,000 chats have been conducted with the character “Karin” alone, and a shockingly high number of positive reviews suggests the community approves.

Behind such offers, there appears to be a business model in which violence against children is already built into the design. Chub AI & Co. rely on user-generated content: the more characters created, the higher the interaction rates, data volumes, and reach. The operators provide the tools, the users provide the content. The system even supports targeted searches for abuse depictions using explicit keywords that describe sexualized violence against children.


Competitors like Character AI try to counteract this with content filters and moderation. They do not operate anonymously and want to protect their brands. On platforms like Chub AI, however, control often remains a Sisyphean task: bots that have been deleted reappear in multiple copies within a short time through forks – the copying and modification of existing characters. This has led supervisory authorities, for example in Australia, to classify such services as high-risk for children.

The urgency of the problem is underscored by current figures from the Freiwillige Selbstkontrolle Multimedia-Diensteanbieter (FSM). Their 2025 annual report recorded the second-highest number of reports since the complaint office was founded, with 28,598 notifications. The depiction of sexual abuse of children (CSAM) has become the largest category, accounting for 58 percent of justified cases. A significant portion of these reports – around 19 percent – is now attributable to virtual depictions. AI-generated content is steadily increasing.

According to the FSM, the lines between reality and fiction are blurring here, posing entirely new challenges for youth protection. Nevertheless, the legal classification in Germany remains unambiguous: even virtual depictions created using AI are impermissible and punishable.

In detail, however, the legal treatment of this virtual violence is complex. The Federal Criminal Police Office (BKA) clarifies that for criminal assessment in Germany, it is irrelevant whether the depictions are real or AI-generated: both fall under the term child pornography. There are differences, though, in how image and text material are handled. While mere possession of abusive images is punishable, this does not yet apply to purely text-based descriptions (“fictional pornography”). Here, only making them publicly accessible or distributing them is sanctioned.

A legal tightening is underway. By June 2027, Germany must implement an EU directive that explicitly criminalizes the creation and distribution of AI-generated sexualized content. Furthermore, EU legislators recently agreed on a ban on AI applications that produce sexualized deepfakes (“nudifier apps”). Whether these requirements will fully encompass purely fictional text characters like “Karin” is still unclear.

Operators often respond evasively to inquiries or implement regional access blocks, as Chub AI did for Germany in the case of “Karin” after being confronted by journalists. In other countries, access to such bots remains available, as the global platforms often operate from unclear corporate locations or under US law. For now, the race continues between the possibilities of AI development and the state's ability to establish effective guardrails against digital abuse.

(nie)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.