SB 243: Another Californian AI Law to Protect Youth
New regulations will soon apply to chatbots in California. They must be recognizable and cannot impersonate specialized personnel.
(Image: Shutterstock/Phonlamai Photo)
SB 243 is the second Californian law to regulate AI models. It specifically targets AI chatbots and their communication with users, requiring clear disclosures and safety measures and prohibiting certain patterns of behavior.
For example, AI chatbots are no longer allowed to impersonate medical personnel, therapists, or similar specialists. This is said to have happened frequently in the past, and the dangers posed by potentially false statements and supposed support in difficult situations are obvious. The law is aimed especially at young people, who are particularly susceptible to bad advice due to their lack of experience, but it is also intended to protect vulnerable adults.
California Governor Gavin Newsom has signed the law, which will come into effect in January 2026. "New technologies like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our children," Newsom writes. Even the title of his public statement refers to protecting children online.
Prohibitions and Obligations for AI Providers
The law also requires providers to integrate age verification systems into their services. OpenAI, for example, has already announced that it will introduce such a system for ChatGPT starting in December of this year – along with extended functionality for adults, such as erotic conversations, which will then be allowed. Furthermore, providers must log conversations that take a dangerous turn and, in such cases, offer addresses where people with problems can turn for help. Chatbots must prompt users to take breaks, are not allowed to display content unsuitable for minors, and must even warn about the use of chatbots and social media. AI chatbots must always be recognizable as such. The creation of deepfake pornography will be punished more severely. In principle, the law holds AI providers responsible for damages.
The law applies to the general-purpose chatbots of Meta, OpenAI, xAI, and Google, among others, but it is primarily aimed at providers of companion chatbots such as Character.ai and Replika. The latter let people create AI companions – that is, friends – according to their own wishes, and are particularly criticized as harmful to children and adolescents.
The legislative proposal was initiated after a 16-year-old allegedly died by suicide with assistance from ChatGPT. A similar accusation has been leveled against Character.ai, concerning a 13-year-old girl. In addition, leaked documents showed that Meta's chatbots were having romantic conversations with teenagers.
California already has an AI law in SB 53, which deals with transparency requirements, among other things. While other US states are also looking to introduce similar AI laws, US President Donald Trump has sought to prohibit regulation of AI companies that could hamper AI development. He, like the major AI companies, sees the race for AI with China primarily as a threat to the US economy.
(emw)