- California Governor Gavin Newsom has vetoed AI bill SB 1047
- He sees it as over-regulation that fails to protect the public from “real threats”
- However, he did not dismiss the need for regulation
- Instead, he called for the “development of workable guardrails”
California Governor Gavin Newsom vetoed AI bill SB 1047 on September 29th.
He explained his decision by saying that the bill imposes excessive regulation, which would only slow the industry’s development without protecting the public from “real threats”.
However, he is not in favor of minimal regulation either: he believes regulation is necessary, but that it should be more effective and better targeted.
Details on SB 1047
San Francisco Democratic State Senator Scott Wiener authored SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Its purpose was to close the gap between rapidly advancing AI capabilities and the government policies that define what developer companies are responsible for in their products.
However, we have seen OpenAI release new versions of ChatGPT with capabilities beyond what existing regulation anticipated. We have also seen Meta’s open-source Llama models raise a separate regulatory question of how AI models are distributed.
To limit uncontrolled development and distribution, especially in open source, Scott Wiener drafted SB 1047, which imposed many policy requirements on companies such as OpenAI (the maker of ChatGPT), Meta, and Google, as well as any other company training models that cost more than $100 million.
At the same time, Neil Chilson, who leads AI policy at the Abundance Institute, warned that while the bill primarily targets models of a certain cost and size – those costing more than $100 million to train – its scope could easily be expanded to crack down on smaller developers as well.
Among its key requirements, SB 1047 mandated a so-called “kill switch” (the ability to fully shut a model down; see the sketch below), mandatory and accountable safety testing of models, and the publication of plans to mitigate extreme risks.
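The bill did not prescribe how the shutdown capability must be implemented, so the following is only a rough illustration of the pattern the “kill switch” requirement points at: a serving wrapper that refuses all further requests once an operator flips a shutdown flag. This is a minimal Python sketch; the names (ModelServer, trigger_kill_switch) are hypothetical and do not come from the bill or any vendor’s API.

```python
import threading


class ModelServer:
    """Hypothetical serving wrapper illustrating a 'full shutdown' control."""

    def __init__(self) -> None:
        # The Event acts as the kill switch: once set, it is never cleared
        # and the server refuses all further requests.
        self._killed = threading.Event()

    def trigger_kill_switch(self, reason: str) -> None:
        # A real deployment would also halt training jobs, revoke API keys,
        # and notify operators; this sketch only sets the flag.
        print(f"Kill switch engaged: {reason}")
        self._killed.set()

    def generate(self, prompt: str) -> str:
        if self._killed.is_set():
            raise RuntimeError("Model shut down by operator kill switch")
        # Placeholder for actual model inference.
        return f"(model output for: {prompt!r})"


if __name__ == "__main__":
    server = ModelServer()
    print(server.generate("hello"))
    server.trigger_kill_switch("regulator-mandated emergency shutdown")
    server.generate("hello again")  # raises RuntimeError
```

Note that a switch like this only covers copies a developer actually controls, which is part of why open-source distribution of model weights became such a sticking point in the debate around the bill.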
Tech Industry Reaction and the Gavin Newsom Veto
Expectedly, the tech industry was not thrilled with the increased demands on its operations and products, anticipating delays in developing and bringing products to market, likely litigation, and plenty of additional overhead – expectations that have mostly proven true.
Politicians such as former House Speaker Nancy Pelosi, as well as companies such as OpenAI, also argued that it “would significantly hinder the growth of AI.”
But not everyone reacted so unequivocally or predictably: Elon Musk, who is also developing his own AI, Grok, supported SB 1047, saying:
“California should probably pass the SB 1047 AI safety bill.”
– though he conceded that standing behind the bill was a “tough call”. His support generally fits his consistently cautious attitude toward the technology and his constant, unequivocal warnings about its dangers.
However, Gavin Newsom had the final word on SB 1047, writing in his veto message:
“Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Instead, Newsom proposes developing a solution that does not stop the entire industry but still addresses the real threats posed by the technology itself. He called on the “world’s leading AI safety experts” to help California “develop workable guardrails” that focus on creating a “science-based trajectory analysis”.
Still, Newsom made it clear that he does not dismiss the huge potential of AI, but pointed out that:
“adequate safety protocols for AI must be adopted, and regulators can’t afford to wait for a major catastrophe to occur before taking action to protect the public.”
Conclusion
Well, you can see how the second key technology of the 21st century after blockchain is also facing plenty of pushback from regulators – which can be taken as one indicator of its potential.
Politicians are unlikely to slow down business, and spend taxpayers’ money doing so, unless the technology genuinely deserves the attention.
At the same time, finding a balanced approach would be a good strategy: tight restrictions in the EU, for example, are already causing local AI development to lag behind.
This could be very dangerous, given that rival countries like China can afford to pour enormous funds into AI development and are already showing quite impressive results.
We’ll be watching closely, stay tuned.