"Really bad for the AI industry": AI company loses in first US legal dispute

Dozens of court cases over AI trained on unlicensed material are pending in the USA. Now the AI industry has suffered a setback.

The letters "A" and "I" on a digital scale, which is mounted on a

(Image: Alexander Supertramp/Shutterstock.com)


In the US, a media group has for the first time prevailed in court against an AI company that used its content to train its algorithms. Should other courts follow this assessment, it would be "really bad for the generative AI industry", an expert in IT law told Wired. The now-decided legal dispute between Thomson Reuters and the long-since defunct AI start-up Ross Intelligence was not about generative AI at all and was filed back in 2020. As in many other court cases, however, it hinged on the question of whether training an AI with unlicensed material counts as fair use. That has now been decisively denied.

The case concerns Westlaw, a subsidiary of Thomson Reuters. Among other things, it maintains databases of so-called headnotes, short summaries of legal texts such as court decisions. Ross Intelligence wanted to build a competing AI-based search engine for precisely these short texts, Judge Stephanos Bibas of Delaware summarizes. The start-up tried to license Westlaw's content for the training, but the company refused. Ross Intelligence then hired another firm, which ultimately created the training data from exactly that Westlaw content.


When the AI company was taken to court in 2020, it defended its actions on the basis of the fair use doctrine, which permits the unauthorized use of copyrighted material under certain conditions and is also invoked as a defense in other lawsuits against AI companies. In this specific case, however, the argument was not accepted, with the judge even reversing an earlier preliminary ruling to the contrary. The decisive factor was that a competing product had been built. The ruling states verbatim that "none of Ross's possible defenses are valid – I reject them". "Copying our content was not 'fair use'," a satisfied Thomson Reuters commented.

The decision does not bode well for the many pending court cases against AI companies. If other courts reject the fair use defense in the same way, a crucial legal basis for training models on immense amounts of content would fall away. IT law expert James Grimmelmann of Cornell University points out to Wired that Bibas dismissed as "irrelevant" most of the court decisions that AI companies have relied on to date. Ross Intelligence itself had already been shut down in 2021 as a result of the legal dispute.

(mho)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.