California Lawmakers Take On AI Regulation With a Host of Bills

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC. The committee held an oversight hearing to examine AI, focusing on rules for artificial intelligence.  (Win McNamee/Getty Images)

It’s been eight months since Sam Altman, CEO of OpenAI, the outfit that gave us ChatGPT, urged U.S. senators to please pass new laws to force accountability from the big players, like OpenAI investor Microsoft, as well as Amazon, Google and Meta. “The number of companies is going to be small, just because of the resources required, and so I think there needs to be incredible scrutiny on us and our competitors,” Altman said in May of 2023.

Yeah, no. That’s not what has happened.

“I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law,” said Democratic State Senator Scott Wiener of San Francisco, one of a growing number of California lawmakers rolling out legislation that could provide a model for other states, if not the federal government. Wiener argues his Senate Bill 1047 is the most ambitious proposal in the country so far, and, having just been named chair of the Senate Budget Committee, he is arguably the lawmaker best positioned at the state Capitol to pass ambitious legislation and fund it.

SB 1047 would require companies building the largest and most powerful AI models — not the wee startups — to test for safety before releasing those models to the public. What does that mean? Here’s some language from the legislation as currently written:

“If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.”

AI companies would have to tell the state about testing protocols and guardrails, and if the tech causes “critical harm,” California’s attorney general can sue. Wiener says his legislation draws heavily on the Biden administration’s 2023 executive order on AI.

Catch up fast: By software industry alliance BSA’s count, there are more than 400 AI-related bills pending across 44 states, but California’s size and sophistication make the roughly 30 bills pending in Sacramento most likely to be seen as legal landmarks, should they pass. Also, many of the largest companies working on generative AI models are based in the San Francisco Bay Area. OpenAI is based in San Francisco; so are Anthropic, Databricks and Scale AI. Meta is based in Menlo Park. Google is based in Mountain View. Seattle-based Microsoft and Amazon have offices in the San Francisco Bay Area. According to the think tank Brookings, more than 60% of generative AI jobs posted in the year ending in July 2023 were clustered in just 10 metro areas in the U.S., led far and away by the Bay Area.

The context: The FTC and other regulators are exploring how to use existing laws to rein in AI developers and nefarious individuals and organizations using AI to break the law, but many experts say that’s not going to be enough. Lina Khan, who heads the Federal Trade Commission, raised this question during an FTC summit on AI last month: “Will a handful of dominant firms concentrate control over these key tools, locking us into a future of their choosing?”

The big picture: By now, you’ve probably gotten the memo: Large AI models are everywhere and doing everything — developing new antibiotics and helping humans communicate with whales, but also turbocharging election-season fraud and automating hiring discrimination. In 2023, many of the world’s leading experts signed the Statement on AI Risk, which reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

What we are watching: There are at least 29 bills pending in Sacramento alone in the 2023–2024 legislative year focused on some aspect of artificial intelligence, according to Axios. More are expected to roll out in the near future, which is why the following list is a partial one.

The opposing view: “While I think that these types of regulatory guidelines are good, I’m not sure how effective they will be,” said Hany Farid, a UC Berkeley School of Information professor specializing in digital forensics, misinformation, and human perception.

The bottom line: Farid added, “I don’t think it makes sense for individual states to try to regulate in this space, but if any state is going to do it, it should be California. The upside of state regulation is that it puts more pressure on the federal government to act so that we don’t end up with a chaotic state-by-state regulation of tech.”

“We can’t have a patchwork of state laws,” agrees Grace Gedye, an AI Policy Analyst at Consumer Reports. But, she added, “We definitely can’t hold our breath [for Congress to act] because we could be waiting 10 or 20 years.”
