
California's Most Contested AI Bill Is Up for a Vote


Senator Scott Wiener introduced SB 1047, a bill that would mandate safety testing for developers of the largest AI models. (Russell Yip/San Francisco Chronicle via Getty Images)

Remember when, in March of 2023, more than a thousand technology leaders and researchers warned that generative AI could pose “profound risks to society and humanity”? Two months later, some of the industry’s biggest backers promised international leaders they’d be game to build necessary safeguards. A similar pledge was made last month at the White House.

Now, some prominent figures in the AI industry, including many who have raised safety concerns, are dismissing those worries as science fiction as they pitch an open battle against SB 1047, a bill that would mandate safety testing for developers of the largest AI models. That measure is on the state Assembly floor for a vote after passing through the Assembly Appropriations Committee along party lines.

Proponents of AI have warned that SB 1047 could stifle the growth of the technology in California. And they’ve rallied three Silicon Valley Congress members — Rep. Zoe Lofgren, Rep. Ro Khanna and Speaker Emerita Nancy Pelosi — to aid their efforts to kill the bill.

“The view of many of us in Congress is that SB 1047 is well-intentioned but ill informed,” Pelosi wrote in an open letter published last week.

“We’re simply requiring these labs to perform the safety testing that they have repeatedly and publicly committed to perform,” said state Sen. Scott Wiener (D-San Francisco), who introduced the bill and is expected to run to fill Pelosi’s seat once she retires, possibly against her daughter Christine Pelosi.


Wiener said he amended the measure to reflect counsel from leaders in the AI space, including safety groups, academics, startups and developers like Amazon-backed Anthropic. A key amendment is that the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. Also, the original text, which would have established a division within the California Department of Technology “to ensure continuous oversight and enforcement,” is gone.

Wiener, who heads the Senate Budget Committee, told KQED the changes were made in large part to improve the bill’s chances with lawmakers and Gov. Gavin Newsom, given that California is facing a massive budget deficit.

But, he added, “I’m not interested in passing a symbolic bill. I would not agree to amendments that would make it weak.”

Last month, Anthropic warned in an open letter addressed to Wiener that it could not support the bill unless it was amended to “respect the evolving nature of risk reduction practices while minimizing rigid, ambiguous, or burdensome rules.” Anthropic is the first major generative AI developer to publicly signal a willingness to work with Wiener on SB 1047.


Microsoft told KQED it has not taken a position on the bill. Robyn Hines, senior director of state government affairs, wrote that the company would prefer federal legislation but “will continue to work with Senator Wiener and others on legislation that will help harness AI’s full potential and advance sensible safety and security protections.”

In recent months, Wiener’s measure has received public criticism from respected industry voices like Andrew Ng, the Stanford professor and former Google executive who detailed his concerns in a post viewed by more than 1 million people on X, the social media app formerly known as Twitter.

“I’m glad that we’re exploring paths to make the bill less bad, because a bad bill is better than a very bad bill, but I feel that fundamentally, by regulating hypothetical risks of the technology rather than concrete risks of actual applications, I don’t think this bill will make AI safer,” Ng told KQED.

SB 1047 would only affect companies building the next generation of AI systems that cost more than $100 million to train. However, critics like Ng, who is the managing general partner of AI Fund, argue the mere threat of legal action from the state attorney general will discourage big tech companies from sharing open-source software with smaller ones.

“Here in Silicon Valley, do we want startups spending money to write code and build products for people? Or do we want those startups hiring lawyers and hiring auditors and hiring consultants to clarify legal ambiguity to guard against purely hypothetical risks?” Ng said.

There is a substantial faction of the AI community, though, that is concerned about hypothetical risks. In an open letter, four prominent tech policy thinkers praised SB 1047. “Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation,” they wrote.


“We have a lot of really eminent scientists who are worried and are taking these risks seriously,” said Nathan Calvin, senior policy counsel at the Center for AI Safety Action Fund, the lobbying arm of the Center for AI Safety, which is one of SB 1047’s co-sponsors.

People don’t need to believe the “AI-terminator-wakes-up-and-kills-everyone scenarios are plausible in order to support this bill,” Calvin said, adding that generative AI is capable of relatively banal catastrophic impacts, like enabling hackers and foreign states to take down critical infrastructure or to design and deploy bio-weapons.

Supporters of Wiener’s bill say opponents are asking Californians to trust Silicon Valley to self-regulate despite a demonstrated history of failing to do so, especially in regard to data privacy and monitoring hate speech on social media.

“Innovation and safety are not mutually exclusive,” Wiener said. “We can do both, and the public wants us to do both. The public wants the benefits of AI and wants to reduce the risks of AI, and that is exactly what SB 1047 does.”
