Godmother of AI worries about Californian AI law

Dangerous for AI, for open source and for society: Stanford professor warns against SB-1047, California's AI bill.

(Image: A scale with the letters "AI" and a drawing of a human brain. Shutterstock/Sansoen Saengsakaorat)

This article was originally published in German and has been automatically translated.

A law on the regulation of AI is currently being drafted in California. It resembles the European AI Act in parts, but differs in how it assigns responsibility and in its obligation to implement a kill switch. According to Stanford professor Fei-Fei Li, both of these could massively hinder the development of AI in California and the USA. She is also concerned that the law does not address the problems that actually exist, such as how society wants to deal with AI.

Li explains her concerns in a guest article in Fortune magazine. There, the "Godmother of AI" writes that SB-1047, as the bill is abbreviated, will stifle developers and prevent innovation, because providers of AI models would also be held responsible in the event of misuse. According to Li, it is impossible for a developer to rule out every conceivable misuse scenario.

Her second concern relates to the obligation to include a kill switch in models that exceed a certain size. Under the bill, the kill switch is intended to ensure safety by allowing systems to be switched off completely. According to Li, this possibility will make developers more hesitant and less willing to contribute if they know that the programs they are working on can be deleted. This affects the open source community in particular.

Because the open source community would then be less active, less information would also be available to the scientific community. And without that access, it would not be possible to train the "next generation of AI leaders".

In addition to describing what she believes the regulation threatens, Li also writes about what she thinks is missing: "This law does not address the potential dangers of AI, such as bias and deepfakes." Li offers her expertise to the senator responsible, Scott Wiener.

The California Senate Bill bears the number 1047 and the long title "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". Li is not the first critic of the bill; several people from the AI industry have already voiced concerns, though Li is now particularly outspoken. As with the European AI Act, the objection has mostly been that regulation inhibits innovation, which in the eyes of some is simply inherent to regulation. Compliance costs have also been criticized. The bill, again similar to the European version, provides for obligations that include, for example, a risk assessment and reports on security incidents.

Critics are concerned that all of this could slow down developments in the AI sector.

(emw)