Section 230: AI triggers new dispute over US freedom of speech standard

Google launches AI-generated search answers, risking the loss of Section 230 liability protection under the Communications Decency Act.

A magnifying glass in front of the Google search page on a screen

(Image: Shutterstock)

This article was originally published in German and has been automatically translated.

Last week, Google announced that it would expand its core search with features based on artificial intelligence (AI). With "AI Overview", AI-generated answers are to be added to the standard list of results where appropriate. Experts point out that this is a risky bet for the US company. Like many other companies, Google has for years been shielded by Section 230 of the Communications Decency Act (CDA) from liability if users encounter links to bad, harmful, or illegal information in its results. Lawyers are now warning that this shield is likely to fall away once Google's AI model Gemini answers search queries directly.

"As we all know, generative AIs hallucinate," James Grimmelmann, professor of digital and information law at Cornell Law School, told the Washington Post, referring to fabricated information. "So if Google uses a generative AI to summarize what websites say, and the AI gets something wrong, Google is now the source of the harmful information" - and no longer merely its distributor. The search engine giant has traditionally avoided legal responsibility by attributing answers to specific sources. The problem is likely to worry other operators as well. Microsoft's search engine Bing, for example, also provides AI-generated answers via Copilot, which has drawn the attention of the EU Commission. Meta is in the process of replacing the search bar in Facebook, Instagram, and WhatsApp with its own AI chatbot.

Adam Thierer of the free-market think tank R Street fears that innovation could be stifled if the US Congress does not extend Section 230 to cover AI tools. "As AI is increasingly integrated into consumer-oriented products," he predicted to the newspaper, the uncertainty over liability will grow in parallel for developers and investors. This would hit small companies and open-source developers particularly hard, as they would suffer most from frivolous legal claims.

John Bergmayer of the civil rights organization Public Knowledge, on the other hand, emphasized that AI answers could spell doom for many publishers and creators who depend on search traffic to survive. That in turn would be bad for generative AI, which itself needs credible information. A liability regime that encourages search engines to keep directing users to third-party websites could therefore be beneficial.

Section 230 of the CDA is widely regarded as the foundational standard for freedom of expression on the internet. It generally protects online platforms from being sued over harmful content that users publish on their sites. The section also gives them far-reaching latitude to filter and delete content at their own discretion without incurring liability. In recent years, the US government has made several attempts to rewrite the clause, particularly under former President Donald Trump. Now Cathy McMorris Rodgers and Frank Pallone Jr., leading Republican and Democratic politicians respectively, have introduced a bill in the House Energy and Commerce Committee that would repeal Section 230 within 18 months. During that time, Congress is supposed to work out a new liability framework and thereby also curb Big Tech's power.

The provision helped pave the way for social media and the modern internet, argue the initiative's authors, whose bill will be heard by the lead committee on Wednesday. In their view, however, the clause has since "outlived its usefulness". The Electronic Frontier Foundation (EFF) counters that the project rests on "a series of false assumptions and fundamental misunderstandings". Section 230 protects "individual bloggers, anyone who forwards an email, and social media users who have ever re-shared another person's content online", not just large internet companies. Moreover, the current law gives websites and apps large and small a strong incentive to "kick out their worst-behaving users, remove objectionable content and cooperate with law enforcement in cases of illegal behavior". The trade association NetChoice, whose members include Google, Meta, and X, likewise warns that abolishing Section 230 would "decimate small technology companies" and undermine freedom of expression on the internet.

(vbr)