China wants to strictly regulate anthropomorphic AI

In the future, AI systems that simulate human behavior are to be subject to stricter rules and will have to issue warnings to users.


(Image: MikeDotta / Shutterstock.com)


China's cybersecurity authority, the Cyberspace Administration of China (CAC), released a new draft regulation for AI systems for public comment on December 27, 2025. According to the news agency Reuters, the rules target consumer-facing AI systems that mimic human behavior and engage users in emotional interactions. Affected are products and services that simulate human personality traits or thought patterns and have more than one million registered users in total or more than 100,000 monthly active users. Whether communication takes place via text, images, voice, or other media is irrelevant.

The proposed regulation is intended to oblige providers to ensure transparency, traceability, data security, and the protection of personal user data throughout the entire product lifecycle.

In contrast to many other AI regulations, the proposed framework also addresses psychological risks. Providers are to put mechanisms in place that can detect users' moods and potential emotional dependencies and intervene if necessary. In extreme situations, such as when users threaten suicide or self-harm, human contact persons should be able to take over. Minors and the elderly will also be required to provide emergency contacts before using the services.

According to the proposal, products or services must also regularly and prominently inform users that they are interacting with an AI. If user behavior shows patterns indicating dependency, additional pop-ups must repeat these warnings. If usage time exceeds two hours, users must take a mandatory break.


Article 10 of the draft obliges providers to train their AI on datasets that comply with "the fundamental values of socialism" and China's traditional values. Providers must permanently ensure the traceability of training data and prevent the systems from generating content that, for example, endangers national security or disrupts social order.

(ulw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.