California Governor Gavin Newsom has vetoed a bill that would have established the nation's first comprehensive statewide regulations on artificial intelligence, dealing a major blow to those seeking tighter oversight of AI.
The legislation, SB 1047, sought to establish safety standards for large AI models but met stiff resistance from Silicon Valley companies, startups and even some Democratic lawmakers.
Newsom had raised concerns that the bill could harm California's tech industry. At Salesforce's Dreamforce conference earlier this month, he said California had to lead on regulating AI because the federal government had failed to act, but cautioned that such proposals "could have a chilling effect" on the industry.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement.
“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom continued.
Instead of signing the bill, Newsom said the state will work with experts to develop safety regulations for large AI models. He argued that this approach will better protect the public without impeding the growth of California's formidable tech industry.
Democratic state Senator Scott Wiener, the bill's author, said the veto "clearly is a major setback for anyone who believes in accountability of the largest corporations making critical decisions that directly impact our public safety, our public health and well-being and the future of life on Earth."
"Voluntary commitments from industry are not enforceable and rarely work out well for the public," he added. Wiener said he will continue working on legislation to improve AI safety, arguing that the debate over the bill has drawn greater attention to the issue.
The bill would have required AI developers to test their models, disclose their safety measures and protect industry insiders who act as whistleblowers. It also aimed to address fears that AI could be used to wreak havoc, whether by attacking critical infrastructure or by fueling the development of harmful new technology.
Advocates, including tech billionaire Elon Musk, said the rules would provide much-needed transparency and oversight. Critics, however, called the bill an innovation killer that would scare AI firms away from setting up shop in California. US Representative Nancy Pelosi, for example, spoke out against the bill, arguing that it could dampen open-source software development and the sharing of AI models.
Despite this bill's failure, other AI legislation remains on the table in California. Lawmakers have discussed regulating the use of AI in employment practices and in creating deepfakes. Newsom also recently signed a pair of bills protecting Hollywood workers from unauthorized use of AI, and he has enacted some of the nation's most stringent laws governing election-related deepfakes.
At the same time, Newsom is working to ensure California remains a leader in AI. He noted that 32 of the world's 50 leading AI companies are based in the state, and said it plans to deploy more AI to tackle state issues such as traffic and homelessness.
In this fast-moving field, the debate over AI safety and regulation is far from over.