TikTok's algorithm could trigger liability for dangerous videos

TikTok recommended a dangerous "challenge" to a 10-year-old, leading to her death. A US court ruled liability protections don’t shield TikTok.


TikTok can, in principle, be held liable for decisions made by its algorithms. So says the US Court of Appeals for the Third Circuit, drawing on a US Supreme Court decision on censorship handed down on July 1. The gist: if an algorithm compiles third-party content in such a way that the compilation becomes a statement in its own right, that statement is attributable to the operator of the algorithm, even if the content itself does not originate from the operator. Another US federal appeals court (Ninth Circuit) has found a different route to holding operators of online services liable for third-party content.


The case before the Third Circuit has a sad background: the death of a ten-year-old child. Although TikTok stipulates a minimum age of thirteen, the child used the Chinese video app. Its algorithm recommended a video on the "For You" page containing a life-threatening "Blackout Challenge": users are asked to film themselves choking themselves until they lose consciousness. The child followed the challenge and did not survive. The child's mother and estate now want to sue TikTok and its parent company ByteDance in a US federal district court. The district court dismissed the suit, but the federal court of appeals interpreted the law differently and sent the case back to the lower court.

The sticking point is once again the famous Section 230, part of the US federal Telecommunications Act of 1996, which grants immunity for content that operators of interactive computer services do not provide themselves but that is posted by third parties (with exceptions not relevant here). The textbook example is a web host that should not be held accountable for whatever nonsense its customers post on the websites it hosts for them.

Section 230(c)(1)

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

US federal law imposes no obligation to distribute third-party content. In fact, this very problem prompted Section 230: a forum operator routinely removed postings unsuitable for minors, and a judge took that moderation as grounds to hold the operator liable for every posting it had not deleted. Legislators responded with Section 230 to keep hosting services available and affordable, and to avoid forcing operators to play censorship police.

However, the boundary between disseminating third-party content and making statements attributable to the service operator itself is blurry. Operators naturally have to answer for their own statements. The US states of Texas and Florida want to force large online services by law to distribute content they do not want to distribute. Deleting postings would be just as illegal as reducing their reach. Operators would even be barred from taking measures of their own to protect children. Rewarding or favoring certain postings would likewise be prohibited.

In the view of the US Supreme Court, these state laws against censorship would likely constitute censorship themselves. Accordingly, operators of online services have the right to decide what they display and how, even if the posts themselves originate from third parties. These selection decisions are themselves an expression of opinion, even if only very few posts are blocked: the operator thereby expresses which content it rejects, the Supreme Court explained on July 1. And the First Amendment to the US Constitution enshrines the right to express opinions, which state laws may not interfere with.

The US Court of Appeals for the Third Circuit has now drawn on this reasoning: if an operator uses algorithms that themselves make statements ("expressive algorithms"), the operator can be held liable for those decisions. Section 230 only shields against liability for third-party statements. The situation is different for algorithms that make selection decisions based on user input or previous user behavior; the classic example is a search function where users enter search terms of their own choosing. For the resulting output, in the court's view, Section 230 does provide protection.

Since the plaintiff alleges that TikTok's algorithms are of the former kind, the federal district court may not simply dismiss the case, says the appeals court. It therefore sends the case back, and the district court must determine whether the suggestion of the dangerous video on the "For You" page resulted from the child's prior input or from an algorithm exercising TikTok's own judgment, for which TikTok could be liable. Only then can the district court decide whether Section 230 actually bars the suit. The appeals court concedes that many other US courts have interpreted Section 230 much more broadly, in favor of liability protection for online services.


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.