Three Silicon Valley Congress members — Rep. Zoe Lofgren, Rep. Ro Khanna and Pelosi — also spoke out in an effort to kill the bill. Even Wiener’s longtime ally, San Francisco Mayor London Breed, wrote in an open letter that “more work needs to be done to bring together industry, government, and community stakeholders before moving forward with a legislative solution that doesn’t add unnecessary bureaucracy.”
Speaking on KQED’s Forum on Thursday, Pelosi criticized the bill again.
“California is the home, the birthplace of AI, in our view. It has the knowledge, the technological knowledge, it has the entrepreneurship, and it has the responsibility to do the right thing — not to pass a bill that does not do the job because it is as well-intentioned as it is ill-informed.”
She also pushed back against the idea that she was speaking out against Wiener’s bill because he might face Pelosi’s daughter in a run for her seat once she leaves office.
“They don’t know what they’re talking about,” she said of Politico, which first reported the story. “I don’t want California going down a bad path on something this serious, and it has nothing to do with elections.”
SB 1047 has also attracted high-profile supporters, including Elon Musk, who on Monday posted on social media site X, “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”
Wiener acknowledged his unlikely ally. “Elon Musk is not a fan of me, and I’m not a fan of Elon Musk,” Wiener said. “But even people who have very strong disagreements can still find common ground. And, in this area, Elon and I have common ground. He has long, long been an advocate for AI safety. And so this position is very consistent with his long history.”
Wiener said he amended the measure to reflect counsel from leaders in the AI space, including safety groups, academics, startups and developers like Amazon-backed Anthropic. The bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. Also, the original text, which would have established a division within the California Department of Technology “to ensure continuous oversight and enforcement,” is gone.
Wiener, who heads the Senate Budget Committee, told KQED the changes were made in large part to improve the bill’s chances with lawmakers and Gov. Gavin Newsom, given that California is facing a massive budget deficit. “My experience with Gov. Newsom is he gives bills a fair shake and he listens to arguments. He speaks to people who support and people who oppose, and he makes an informed choice. And I’m confident he will do that here.”
SB 1047 would only affect companies building AI systems that cost more than $100 million to train. However, critics argue the mere threat of legal action from the state attorney general will stifle innovation by discouraging big tech companies from sharing open-source software with smaller ones.
Last month, Anthropic warned in an open letter addressed to Wiener that it could not support the bill unless it was amended to “respect the evolving nature of risk reduction practices while minimizing rigid, ambiguous, or burdensome rules.” Anthropic was the first major generative AI developer to publicly signal a willingness to work with Wiener on SB 1047.
“SB 1047 establishes clear, predictable, common sense legal standards for the developers of the biggest, most powerful AI systems to efficiently build in safety across the AI ecosystem startups build on,” wrote Nathan Calvin, Senior Policy Counsel at the Center for AI Safety Action Fund, the lobbying arm of the Center for AI Safety, which is one of SB 1047’s co-sponsors.
Critics of the bill, however, sounded another alarm, including Andrew Ng, the Stanford professor and former Google executive who detailed his concerns in a post viewed by more than 1 million people on X. “SB 1047 will stifle open source AI and hinder AI innovation,” Ng wrote. “It makes a fundamental mistake of trying to regulate AI technology rather than address harmful applications. Worse, by making it harder for developers to release open AI models, it will hamper researchers’ ability to study cutting-edge AI and spot problems, and therefore will make AI less safe.”
When asked why California lawmakers have pursued dozens of AI bills focused on discrete issues, compared to the European Union and the state of Colorado, which opted for comprehensive legislation, Wiener said, “California’s system is different than these other jurisdictions. We don’t tend to pick a subject matter and do ten different issues combined. We introduce individual bills. We work very hard to harmonize them.”
He acknowledged the pros and cons of this approach.