In a move that has reignited the debate over regulating artificial intelligence, Governor Gavin Newsom vetoed Senate Bill 1047, an ambitious attempt to establish safety protocols for the most advanced AI models. The bill, introduced by Senator Scott Wiener, would have required developers of these AI systems to submit safety plans to the state attorney general and to maintain the ability to disable their models if necessary. The intention was clear: protect the public from the unpredictable dangers that unchecked AI innovation could unleash.
However, Newsom, in his veto message, voiced concerns that the legislation did not effectively address the varying scales of risk posed by different AI models. He warned that passing such a bill could create a “false sense of security” while stifling innovation in an industry rapidly evolving in ways that the bill could not fully account for. “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom stated.
His veto has brought a crucial question to the forefront: how can California lead in AI regulation while balancing safety concerns and innovation?
Supporters Push for Guardrails, Opponents Fear Overreach
The AI safety bill passed through the California Legislature with strong backing from a wide range of supporters, including the Center for AI Safety, SpaceX CEO Elon Musk, and AI researchers. Many argued that the risks associated with rapidly advancing AI technology are too great to ignore. Proponents cited potential future harms such as job displacement, privacy breaches, and even threats to public safety as reasons to put guardrails in place now.
For supporters, Newsom’s veto represents a missed opportunity for California to assert itself as a global leader in AI regulation. Senator Wiener expressed his disappointment, saying, “This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”
On the other hand, the bill faced stiff opposition from some of the most prominent players in Silicon Valley, including Meta and OpenAI, the developer of ChatGPT. These companies, along with certain Democratic lawmakers, argued that the bill would impose burdensome requirements that could hinder innovation. They warned that the legislative overreach could stifle the very creativity that has made California a global hub for AI development.
Newsom's Veto of the AI Bill: A Call for More Comprehensive Regulation
Rather than sign SB 1047, Newsom has indicated that a more nuanced approach is necessary. During a recent fireside chat at Dreamforce, a major tech conference held in San Francisco, Newsom acknowledged the challenges of regulating such a fast-moving technology. “We’ve been working over the course of the last couple years to come up with some rational regulation that supports risk-taking, but not recklessness,” he said. His administration has begun assembling a group of AI leaders to help craft workable protections tailored to different types of AI systems, from those used in low-risk environments to models that handle sensitive data and critical decision-making.
By vetoing the bill, Newsom seems to be advocating for a more comprehensive and flexible strategy to regulate AI. His vision suggests that California should aim to address the real threats posed by AI without stifling the potential for beneficial innovation. However, achieving this balance will require the input of both industry experts and lawmakers in future legislative sessions.
Governor Newsom, in closing his veto message, left the door open for future collaboration: “I look forward to working with the Legislature next session to develop a framework that truly protects the public while fostering technological progress.”
For now, the tech industry will proceed without the constraints of SB 1047, but the debate over AI’s future in California—and indeed the world—has only just begun.