Ahead of 2024 Election, This New California Institute Wants to Fight AI Disinformation

A 2018 image from a deepfake video featuring former President Barack Obama shows elements of facial mapping used in creating deepfakes. Launched this week, the California Institute for Technology and Democracy will provide lawmakers with recommendations on countering the impacts of AI, deepfakes and disinformation on the 2024 election. (AP Photo)

One year ahead of the 2024 presidential election, California Common Cause, a nonprofit, nonpartisan good-government advocacy group, has launched the California Institute for Technology and Democracy (CITED) to counter the impacts of AI, deepfakes and disinformation.

“It makes sense that the first such effort is in California, a state that is home to the largest technology companies in the world, but also a state that has a track record of leading the nation in technology policy regulation,” said Ishan Mehta, media and democracy program director at the national branch of Common Cause in Washington D.C., during a Tuesday news conference.

CITED, which claims to be the first organization of its kind at the state level, wants to serve as an information hub, recommending policies to state and congressional lawmakers and highlighting what online tools could be used to spread disinformation, especially during election seasons.

“Much of the recent policy focus has been on the concerns of the use of AI tools for national security and law enforcement purposes, and I think rightly so. But I think now it’s also time for us to focus on how these same tools can be misused to improperly influence and manipulate our democratic processes. Whether real or not, they can pose a threat to the integrity of elections,” said Angélica Salceda, director of the ACLU of Northern California’s Democracy and Civic Engagement Program.

In the run-up to the 2020 election, disinformation ran riot on social media platforms. Facebook ads targeting Latino and Asian American voters described Joe Biden as a communist. Doctored images showed dogs urinating on Donald Trump campaign posters.

But over the last year, major platforms have gutted their content moderation teams, a shift many civil society advocates decry, especially ahead of what’s expected to be a contentious presidential election.

“We can all imagine a scenario where AI is used to target limited English-speaking voters and spread false information about polling locations or voting opportunities. Even without the use of AI, we’ve seen some campaigns use these tactics,” Salceda said. “Now imagine these same tactics super-charged.”

The disinformation landscape includes altered videos and generative AI, which has streamlined the creation of deepfakes, such as widely circulated examples featuring the actor Morgan Freeman and Florida Gov. Ron DeSantis.

“What we will see in the next year ranges from the silly stuff that’s not that silly — maybe Joe Biden falling down the stairs of Air Force One — to deeply pernicious, perhaps audio of an elections official ‘caught on tape’ saying that vote-by-mail ballots aren’t secure,” said Jonathan Mehta Stein, executive director of California Common Cause and a CITED board member.

Last month, Meta’s oversight board announced it would review whether the Menlo Park-based social media giant chose poorly when it left up an altered video that suggested Biden is a “sick pedophile.” The video appeared to show the president repeatedly touching the chest of his adult granddaughter and kissing her on the cheek.

In response, the company wrote in a blog post: “Meta determined that the content did not violate our policies on Hate Speech, Bullying and Harassment, or Manipulated Media, as laid out in our Facebook Community Standards, and left the content up.”

“Personally, when I find myself listening to a campaign video or a TV ad, I wonder whether any aspect of those videos, including voiceovers, is AI-generated,” Salceda said. “I don’t have a trained eye or ear to know the difference right now, and I wouldn’t be surprised if my experience was reflective of the average voter.”

CITED Director Drew Liebert, who served as chief of staff for former Senate Majority Leader Bob Hertzberg, noted the new institute doesn’t intend to exclude Silicon Valley from the policy discussion around AI. “We also very much intend to work, as best we can, with the tech platforms, to see what we can potentially do collaboratively,” Liebert said.
