Abuse images via AI: Minister of Justice fears criminal liability loophole
AI-generated depictions of child abuse are on the rise. The mere production of such images could remain unpunished, complains a state justice minister.
(Image: Pikul Noorod/Shutterstock.com)
The Internet Watch Foundation (IWF) has been warning for months about the flood of child sexual abuse images that can now be created easily with artificial intelligence (AI) systems. In September alone, the British organization found more than 11,000 such AI-generated images in a single darknet forum and classified almost 3,000 of them as illegal. In Germany, the distribution, acquisition and possession of "child or youth pornographic content" are generally prohibited under Sections 184b and 184c of the Criminal Code (StGB), and to a broader extent than in many other countries. Nevertheless, German legal policymakers have identified a potential gap in criminal liability.
So far, there has been hardly any case law on AI-generated depictions of abuse, Herbert Mertin (FDP), Minister of Justice of Rhineland-Palatinate, explained to SWR radio. This creates legal uncertainty: "One problem could be that the mere production, without the intention of distributing it, may remain unpunished" when AI is involved. If perpetrators produced such material with real children, it would clearly be punishable. The Conference of Justice Ministers has therefore asked Federal Justice Minister Marco Buschmann (FDP) to set up a commission of experts to examine the new technological developments and their legal implications.
Fictitious depictions of abuse relevant to criminal law
In principle, fictitious depictions of abuse and corresponding texts are also criminally relevant in Germany. Besides realistic drawings, this includes modified representations in comic formats such as manga and hentai. In many other countries, such as Japan, such virtual images are not covered by the law at all, as the German government regularly explains in its reports on the implementation of the principle of "deletion instead of blocking". The same applies to so-called posing images. Nevertheless, the Federal Criminal Police Office (BKA) and internet complaint offices can often achieve deletion "by contacting service providers directly". The Federal Ministry of Justice assumes that child and youth pornography generated using AI is, as a rule, also punishable. If illegal content is distributed via online platforms in Germany and abroad, the Digital Services Act (DSA) applies.
According to SWR, thousands of artificially generated images of children and young people are being distributed on Instagram under certain hashtags. Many show them in skimpy underwear, swimwear or sexualized poses. Some accounts that share such images link to trading, crowdfunding or community platforms, including a Japanese website. The latter is also known to the BKA and, according to user comments, serves as a networking platform for people with paedophilic disorders. Several users linked from there to other websites containing real images of child abuse. The BKA does not yet record cases of AI-generated pornography separately, but points to a general increase in depictions of abuse in 2022. The artificial images can hardly be distinguished from real ones, and real photos are sometimes used as the basis for AI images.
Researchers recently discovered 1679 links in the LAION-5B dataset to abuse images they classified as illegal. Stability AI, the developer of the AI image generator Stable Diffusion, works with the open source training set. The organization behind LAION subsequently announced that the datasets had been taken off the web. Stability AI stated that training had focused on a part of the dataset curated for safety. According to reports, however, an older version of the model, which can easily be used to generate abuse material, is circulating on the darknet. Senior public prosecutor Markus Hartmann of the Central and Contact Point for Cybercrime (ZAC) in North Rhine-Westphalia is concerned that investigators are mistakenly assessing AI-generated images as evidence of new, real abuse and are therefore reaching the limits of their resources. Sexual health experts warn that "artificial" images could also lead to distorted perceptions among paedophiles.
(nen)