Newsom’s Folly: lessons from the veto of SB 1047
Gavin Newsom failed to protect us from AI catastrophe. Where do we go from here?
Gavin Newsom may not realize it, but he has just dealt a blow to the future of the human race.
Gavin Newsom in 2023, back when it briefly seemed like he might not cave to AI lobbyists. Source: Bloomberg.
On Sunday, September 29, the California governor vetoed SB 1047, a bill that represented the US’ best hopes for near-term regulation of dangerous Artificial Intelligence systems. SB 1047 would have required AI companies to test their largest models for dangerous capabilities, set basic provisions to prevent catastrophic harm from these models, protected whistleblowers, and established a state board in the heart of Silicon Valley to oversee frontier AI model development. In a sane world, these measures would be uncontroversial: if you are building something that could kill people, you should need to meet some basic safeguards.
If SB 1047 had become law, it would have established that AI regulation in the US is possible, and that we do not need to race ahead without guardrails. It would have been a first step, paving the way for future, more ambitious legislation.
The bill was supported by leading AI safety experts — including Yoshua Bengio and Geoffrey Hinton, two of the “godfathers of AI” — as well as OpenAI whistleblowers and an overwhelming majority of Californians. Scott Wiener, SB 1047’s lead sponsor, was extremely receptive to critical feedback on the bill, which went through several rounds of revisions addressing critics’ concerns. Hopes were high that Newsom might have the courage to do the right thing, but he failed.
Despite the bill’s light-touch nature, and multiple rounds of edits to placate critics, the pro-AI lobby came out in full force against SB 1047. As PauseAI noted in a previous newsletter:
Two of the loudest critics of this bill have been Y Combinator and Andreessen Horowitz, both influential firms with deep ties to the AI industry. Marc Andreessen (of Andreessen Horowitz) sits on the board of Facebook, and Y Combinator has invested in OpenAI.
These firms have spread brazen misinformation about SB-1047 — including that the bill would send model developers to jail for failing to anticipate misuse (unambiguously false) and that the bill will stifle innovation and restrict startups (also false, as the bill’s provisions only apply to training runs above $100 million). In a response letter, Sen. Scott Wiener, SB-1047’s lead sponsor, refuted each of these claims.
The few AI scientists who oppose the bill — such as Meta Chief Scientist Yann LeCun or “godmother of AI” Fei-Fei Li — often have financial incentives to do so. Meta is one of the few AI companies large enough to be affected by the bill’s provisions, and Li’s billion-dollar startup received investment backing from Andreessen Horowitz.
Some of these AI lobbyists have deep ties to Newsom’s office, as tech journalist Shakeel Hashim points out:
Andreessen Horowitz, despite its far-right leanings, hired Newsom confidant Jason Kinney as a lobbyist [...] Then there's Ron Conway, a Democratic mega-donor close to Pelosi and Newsom, who owns stakes in OpenAI, Anthropic, and Mistral. Conway reportedly lobbied hard to kill the bill, seemingly threatening to ruin Wiener’s career over it.
This time at least, big tech won.
We might be consoled if Newsom’s reasons for vetoing the bill were on solid ground. Unfortunately, they are not. One of Newsom’s excuses is that the bill only covers the largest models:
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.”
This is transparently nonsensical. It is the equivalent of saying “This bill only bans the largest bombs. Small bombs might end up being just as dangerous in the future. So let’s not ban large bombs.” If Newsom were genuinely worried about emergent risks from small future AI models not covered by SB 1047, he would have signed SB 1047 as a first step, and then lobbied for future legislative efforts to regulate smaller-scale specialized models.
The true reason for Newsom’s veto seems to be an unwillingness to upset his big tech allies. His justifications are so shaky that they seem to have been spun together as a post hoc justification for a politically unpopular move.
But wait! Newsom claims to be supportive of future AI regulation and safety protocols, so long as they are “informed by an empirical trajectory analysis of AI systems and capabilities.”
Well then, let’s take a look at such an “empirical trajectory analysis,” shall we?
OpenAI’s newest model already scores “medium” in certain categories of risk, including chemical, biological, radiological, and nuclear (CBRN) risk.
We already have evidence of power-seeking behavior from advanced AI (which safety experts have been warning us about for years).
Several of the world’s leading experts believe that catastrophically dangerous AI could be only a few years away.
Yoshua Bengio, “godfather of AI” and recipient of the prestigious Turing Award, states that loss of control to rogue AI systems could occur in as little as a few years unless appropriate precautions are taken.
A recent survey of over 2,700 top AI researchers estimated a 10% chance of human-level AI by 2027.
A recent UN report found that several experts “expect the deployment of agentic systems in 2025” to lead to “some of the most surprising or significant impacts on AI-related risks.”
Surveys of thousands of AI experts have found a mean estimate of a 1 in 6 chance that superhuman AI could lead to human extinction. The head of AI Safety at the US AI Safety Institute (which Newsom references favorably in his veto statement) once estimated a 20% chance of human extinction from AI.
What more empirical evidence do we need to act? Are we going to wait until AI systems actually start killing people before we do something?
In a very real sense, Newsom’s veto reflects the insanity of the AI landscape. Some of the brightest minds on Earth are actively trying to build something more intelligent than human beings, which we don’t know how to control, and which thousands of experts believe could cause the extinction of humanity. And yet, AI lobbyists and the politicians in their pocket oppose even the lightest-touch guardrails at every turn. If we are lucky enough to have descendants, they will be embarrassed and appalled at how impotently our leaders reacted to this threat.
But the fight is far from over, and hope is not lost. There are lessons we can learn from Newsom’s veto:
Lesson 1: We need international coordination.
Newsom’s veto, in addition to being driven by powerful lobby interests, may have also been influenced by a desire to keep the lead on AI at all costs. He notes in his veto statement that California is home to 32 of the world’s 50 leading AI companies, and he has previously expressed concern about California losing its innovative edge in AI. There is a fear that if California over-regulates AI, America will lose its lead to other parts of the world – and this concern is mirrored on a national scale, with many politicians committed to maintaining America’s dominance in the field.
International coordination offers us a chance to escape this madness. As PauseAI has said all along, we cannot expect any individual company or country to slow down voluntarily. We need binding international agreements to stop the frenzy of the AI arms race. The optimal strategy is to cooperate, and to make sure that everyone else cooperates too.
Lesson 2: We need overwhelming grassroots action.
The fact that even a light-touch bill like SB 1047 failed tells us everything we need to know: we can’t trust politicians like Gavin Newsom to do the right thing on their own. The AI industry is too powerful, and their pockets are too deep. We must instead rely on tactics used for generations by our predecessors, who organized for climate policy, nuclear disarmament, and a host of other issues. We can engage in widespread grassroots activism, organize massive nonviolent protests, and apply unprecedented public pressure. We must become too big to ignore.
We are still in the early days. In a few years, as AI systems become more powerful, AI regulation will emerge as a dominant issue. The tech lobbyists will become louder and more obstinate in their anti-regulatory frenzy, so the calls for action from activist movements like PauseAI must become deafening. We are only getting started.
If we mobilize, humanity will win.
It is tragic that misinformation and greed won out over a better future and informed public awareness of SB 1047.
Perhaps the most galling thing about this whole affair (as in so very many similar affairs) is that one person, ONE person, has such power over the lives and livelihoods of so MANY. And that one person is, in turn, the puppet of a few other individuals. Something needs to change.