China: LLMs must identify themselves as machines

If an LLM can appear human, it will soon have to comply with "interim measures" in China: transparency, safety, and social morality.

Flag of the People's Republic of China (Image: rongyiquan/Shutterstock.com)

The People's Republic of China is reining in large language models. Because things are supposed to move quickly, the rules come as "interim measures," now open for public consultation until January 25, 2026. The gist: Artificial Intelligence must identify itself as such and pay attention to data protection, user well-being, and, of course, "socialist core values."

The goal is to support the "healthy development" of AI services and their "application ecosystem," specifically for the dissemination of Chinese culture and the companionship of seniors. The latter is apparently intended to combat loneliness. Socialist core values must always be adhered to, and "social morality and ethics" must also be observed. Operation is only permitted once safety and reliability have been fully proven.

The new measures are subordinate to any existing laws and regulations and cover "anthropomorphic interactive services." These are all publicly offered AI systems that communicate with humans in China through text, images, videos, and/or audio, simulating human personality patterns, thought patterns, or communication styles. The People's Republic will secretly monitor such services to prevent "misuse and loss of control." Additionally, the industry is expected to self-regulate.

Article 7 of the consulted draft contains a long list of prohibitions. The generation, dissemination, or promotion of content related to gambling, obscenity, violence, insult, defamation, incitement to criminal offenses, or the violation of legitimate rights or interests (!) of third parties is prohibited. Similarly, nothing may be generated or disseminated that endangers national interests, national honor, or national security, undermines China's unity, involves illegal religious practices, or spreads rumors that disrupt the economic or social order.

The systems must also not make false promises that strongly influence user behavior or harm their social relationships. Furthermore, harming users' physical health through implication, incitement to, or glorification of self-harm or suicide is prohibited. In parallel, mental health and personal dignity are to be protected by banning emotional manipulation and verbal abuse. This matches the prohibition on designing digital offerings with goals such as addiction, social isolation, or psychological control over users.

Finally, the authority also prohibits the collection of confidential information and the inducement of users to make "unreasonable decisions" through methods such as algorithmic manipulation, misleading information, or setting emotional traps. Mimicking family members is also taboo.

Articles 8, 9, and 21 onwards contain extensive requirements for the design, development, operation, and shutdown of AI services. The scope ranges from data protection and fraud prevention to ethics and IT security. Once in operation, performance is to be "continuously optimized."

Article 10 of the draft specifies requirements for the data used in AI training: it, too, must comply with socialist values and embody "excellent traditional Chinese culture." At the same time, the corpus should be diverse and checked daily. The requirement that training data be legal and traceable is not forgotten.

According to Articles 11 to 13, providers are to assess their users to determine whether they are seniors or minors, and whether they are dependent on the service or have emotional difficulties. Appropriate measures must then follow. Minors may only be shown a restricted mode with time limits, special data protection, and access for guardians, who must be named during registration. Those wrongly classified as minors have a right of appeal.

Seniors, too, must name an emergency contact person during registration; additionally, operators must provide social and psychological assistance in case of need. For all users, operators must prepare pre-defined information that will be displayed if a danger to the user's life, health, or property is detected. If a user explicitly indicates suicide, self-harm, or "other extreme acts," a human must immediately take over the conversation and the user's emergency contacts must be informed.

Usage data may not be shared or used for AI training without the user's consent. Users must also be able to delete their usage data.

Operators must make it clear that users are interacting with an AI and not a real person. Those identified as "highly dependent" must be reminded of this. After two hours of uninterrupted use, users are to be encouraged to take a break.

Providers of "emotional companions," such as AI girlfriends, must make special efforts. They must offer simple exit options and may not try to dissuade users from ending a session. If a partial function is removed, the operator must announce it in advance. Public information is also required in the event of outages.

Finally, all systems covered by the regulation must provide convenient channels for submitting tips and complaints, and statistics on how these are handled must be published. Submitters must be promptly informed of the outcome of their input.

(ds)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.