Analysis: The GEMA ruling shakes the AI industry's self-image
A court ruling in Munich could have far-reaching consequences for the AI industry. The underlying dilemma appears almost unsolvable, argues Malte Kirchner.
(Image: Tada Images / Shutterstock.com; editing: heise medien)
Anyone who explains to a layperson how AI models work often meets with incredulous astonishment: that the trick behind AI is mere statistics, that it has no soul and no real sense or understanding, strikes many as incomprehensible. How can this thing then speak to you in coherent sentences? Or even say meaningful, even inspiring things?
Above all, one question sounds distinctly uncomfortable for us as a species: are we perhaps so largely predictable that a computer can simply replicate our language, our knowledge, and our works in a matter of seconds?
No Coincidence
The Munich Regional Court has now rejected the notion of statistical coincidence in a ruling. At least for nine well-known songs, it concluded that the AI could not have arrived at nearly identical lyrics by chance, as OpenAI, the publisher of ChatGPT and the defendant, had claimed. The lyrics must evidently have been "memorized," the court found, siding with the plaintiff, the German collecting society GEMA.
For the moment, one should refrain from overly dramatic conclusions: the ruling is not yet legally binding, and it is only one of several proceedings currently underway against AI providers in various parts of the world. Landmark rulings, and perhaps new legislation, will likely follow in a few years. But one thing is certain: the AI industry will not march triumphantly through the courts as the winner.
Ruling Shakes the Industry's Self-Image
And this interim result is indeed significant for the AI industry. The Munich ruling shakes its self-image that AI is fundamentally something good for humanity – something the public, represented here by the courts, cannot object to because it advances society. This idea is deeply ingrained at OpenAI: founded as a non-profit organization, it set out to develop artificial intelligence for the benefit of humanity.
With this in mind, humanity's knowledge was ingested without hesitation in order to make it available to everyone. Paying for it did not initially occur to the creators, especially since AI still costs far more in training and computing power than it earns. Only massive venture capital and investment from the IT industry have made it viable and turned it into what it is today.
Copyrights Again and Again
But as in previous digital frenzies triggered by new technology, it is copyright that has brought the digital high-flyers back down to earth. Time and again in recent years and decades, technology has made what other people create more freely accessible. This digital liberation from the limits of the analog world is undoubtedly progress. But in the end, people who create something want to, and must, continue to make a living from it. Joy over new possibilities and wider reach is not enough for them. Since AI providers now create something from their works, it seems legitimate at first glance that creators want compensation from the providers. It is equally legitimate, however, to ask what rulings like the Munich one do to AI.
Some experts already suspect that deleting entire models could be on the table if AI providers fail to somehow filter out or excise the contested material. It need not come to this worst case, however. It might be enough for AI to become markedly more long-winded, more restrictive, and less capable in its results, calling into question the purpose and meaning of an entire industry.
Then simply license the content, one might want to tell the industry. To some extent, this is already happening today. But in an industry burning massive amounts of money in the vague hope of being the first to invent Artificial General Intelligence, i.e., true AI, licensing at some point looks more like a delaying tactic than a sustainable solution.
Time for Fundamental Thinking About the Future
Of course, Europe could once again take a special path. If rulings here differ from those in the rest of the world, the result could be technical isolation. But if US courts and courts outside the EU do not rule fundamentally differently, OpenAI and co. will have to do some fundamental thinking about their future.
Technical solutions offer only partial relief, even at first glance. There is, for example, differential privacy, a technique meant to ensure that individual training data can no longer be extracted from the model. It works in principle, but often leads to poorer model results. With "Constitutional AI," a model regulates itself; here, too, the ifs and buts are literally programmed in. And "machine unlearning" would be a way to delete copyrighted material. But if it doesn't stop at nine songs by Herbert Grönemeyer and Helene Fischer, the question will eventually arise: at what price?
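To make the differential privacy trade-off concrete, here is a minimal sketch of a DP-SGD-style training step for a toy linear model. This is an illustration of the general technique, not anything from the ruling or any provider's actual training pipeline; the function name, clipping threshold, and noise multiplier are illustrative choices.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD-style step for linear regression (illustrative sketch).

    Each per-example gradient is clipped to `clip_norm`, then Gaussian noise
    scaled by `noise_mult * clip_norm` is added to the averaged gradient,
    bounding how much any single training example can influence the model.
    """
    rng = rng or np.random.default_rng(0)
    n = len(X)
    summed = np.zeros_like(w)
    for xi, yi in zip(X, y):
        g = 2 * (xi @ w - yi) * xi                     # per-example squared-error gradient
        norm = np.linalg.norm(g)
        summed += g * min(1.0, clip_norm / (norm + 1e-12))  # clip per-example influence
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    g_private = (summed + noise) / n                   # noisy average gradient
    return w - lr * g_private

# Toy usage: fit y = 2x under privacy noise; the result is noisier than plain SGD.
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=256)
w = np.zeros(1)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print("learned weight:", w)  # close to 2, but perturbed by the added noise
```

The injected noise is exactly the trade-off mentioned above: the stronger the privacy guarantee, the more the gradient signal is drowned out, which is why model quality tends to suffer.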
Way Out Uncertain
For the moment, the copyright holders, represented by GEMA, leave the courtroom with the good feeling that their concerns matter in the AI world at all. And AI providers like OpenAI, Google, Anthropic, xAI, and DeepSeek are reminded that important fundamental questions cannot simply be settled by disruptive, head-on approaches.
In the end, there is a fundamental dilemma between being morally in the right (the copyright holders) and facing economic overload (the AI industry). Tellingly, the AI we consulted could offer no way out of how the two might be reconciled. That cannot be a coincidence.
(mki)