Among the urgent topics likely to confront experts are the steady rise of AI-generated fakery and the tricky problem of knowing when an AI system is so broadly capable or dangerous that it needs guardrails.
“We’re going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors,” Raimondo said in an interview. “Because if we keep a lid on the risks, it’s incredible to think about what we could achieve.”
Hosted in a city that has become a hub of the current wave of generative AI technology, the San Francisco meetings are designed as a technical collaboration on safety measures ahead of a broader AI summit set for February in Paris. The gathering will occur about two weeks after a presidential election between Vice President Kamala Harris — who helped craft the U.S. stance on AI risks — and former President Donald Trump, who has vowed to undo Biden’s signature AI policy.
Raimondo and Secretary of State Antony Blinken announced that their agencies would co-host the convening, which taps into a network of newly formed national AI safety institutes in the U.S. and U.K., as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union.
The biggest AI powerhouse missing from the list of participants is China, which isn’t part of the network, though Raimondo said, “We’re still trying to figure out exactly who else might come in terms of scientists.”
“I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism,” she said. “Every country in the world ought to be able to agree that those are bad things, and we ought to be able to work together to prevent them.”
Many governments have pledged to safeguard AI technology, but they’ve taken different approaches, with the EU the first to enact a sweeping AI law that sets the strongest restrictions on the riskiest applications.