Ericka Cruz Guevarra: And now some are calling on the California State Legislature to regulate this technology. Today, we’re bringing you an episode from our colleagues at KQED’s Political Breakdown podcast. Scott Shafer speaks to Jonathan Mehta Stein, head of California Common Cause, which is leading efforts to regulate AI. Stay with us.
Scott Shafer: And welcome back to Political Breakdown. I’m Scott Shafer, and as we enter this critical election year, we’re going to be paying a lot of attention here on the podcast to threats to our democracy, and specifically threats to election integrity.
Scott Shafer: And to help us understand the threats posed by things like artificial intelligence, or AI, and what California and the nation urgently need to do, we’ve invited someone who’s been thinking a lot about this and whose organization is proposing some solutions. Jonathan Mehta Stein is executive director of California Common Cause. They just launched a new project called the California Initiative for Technology and Democracy, CITED for short. Jonathan, welcome to Political Breakdown.
Jonathan Mehta Stein: Thanks for having me.
Scott Shafer: Well, let me begin with that initiative. Tell us a little bit about it. First of all, is it a nonprofit organization and what is it going to be doing?
Jonathan Mehta Stein: Right now, it’s a project of California Common Cause, which has worked on voting rights, redistricting, money in politics, a full suite of democracy issues, for many, many years. And we realized that working on a full suite of democracy issues in this digital era would be incomplete if we weren’t tackling the threats posed to our democracy by AI, disinformation, deepfakes, and so forth. So we are beginning a critical election year, and it will probably be the first AI election.
Jonathan Mehta Stein: What I mean by that is that generative AI, deepfakes, fake audio, fake video, fake images, fake text, will be inundating our information ecosystem. We’re already seeing some of this. We’ve seen disinformation in the past; certainly that’s not new. But what we’re seeing now is disinformation turbocharged by new technological tools that allow anybody, foreign states, non-state actors, online trolls, campaigns themselves, to put out incredibly convincing content meant to deceive voters or destabilize elections.
Jonathan Mehta Stein: And we’re already seeing it. The Slovakian presidential election was impacted by deepfakes. The Bangladeshi presidential election was impacted by deepfakes. We’re beginning to see it in some elections here in the United States. There’s a DeSantis campaign ad in the Republican primary with a fake photo, a deepfake photo, of Trump hugging Fauci. There are other examples from within the United States, and the American public is just not ready.
Scott Shafer: Going back to those foreign elections in those two countries you mentioned, do you have a sense of who was behind that and why? Because as you were describing that, I’m thinking, oh, that’s just like a little pilot project. It’s sort of, test it out somewhere where no one’s paying attention before we bring it to the United States.
Jonathan Mehta Stein: So to give you an example, in the Slovakian presidential election, shortly before Election Day, fake audio emerged of one of the two presidential candidates saying, actually, one very serious thing and one what we might think of as a silly thing. The very serious thing was that they were attempting to rig the election. The silly thing was that they were planning on raising taxes on beer. But both clearly, you know.
Scott Shafer: Important to people.
Jonathan Mehta Stein: That’s right. Yeah. And both clearly were meant to destabilize and to influence voters. The intent, I think, is that it’s a dry run in many respects for bigger elections occurring around the world this year, and then ultimately in November, the presidential contest in America.
Scott Shafer: So, you mentioned earlier the deepfakes, this turbocharging of misinformation, disinformation. Give us one or two concrete examples of how that might look.
Jonathan Mehta Stein: Right. So a deepfake targeting a candidate is relatively straightforward. We just mentioned examples of the DeSantis campaign or this instance from Slovakia. You might think of, for example, a deepfake of Joe Biden falling down the stairs of Air Force One to make him look silly. That’s, I guess, playful; that’s a bad word for it. But there’s much more dangerous stuff out there.
Jonathan Mehta Stein: So imagine a robocall in Joe Biden’s voice going to millions of voters on the eve of an election, telling them their voting locations have changed. Or, if we move out of the realm of candidates and into things that can destabilize trust in elections, imagine a fake video of an elections official, quote unquote, caught on tape saying that their voting machines can be hacked, or that vote-by-mail ballots are not secure.
Jonathan Mehta Stein: If you are a conspiracy theorist trying to attack the legitimacy of American elections, you can create confirmatory evidence, false but confirmatory evidence, that everyone else will believe, with a few clicks of a button. These things are really easy to produce in the modern era. The barriers to entry are really low and the costs are near zero.
Scott Shafer: And so I want to come back to some of those specifics, but the idea of this initiative is to do what? Work with lawmakers in Sacramento to come up with some guardrails, some regulations?
Jonathan Mehta Stein: Because this technology is so new, there is no well-developed policy field teeming with solutions and experts that can help policymakers move in this area. So CITED, the California Initiative for Technology and Democracy, is an attempt to bring together tech leaders, finance, VC, law, public policy, communications, campaign folks, experts from a variety of fields, to build an interdisciplinary hub of expertise that can advise lawmakers and regulators as they attempt to move.
Jonathan Mehta Stein: Why are we doing this in California? Because Congress isn’t able to take meaningful action to protect our democracy in this moment. And so it falls to California to lead the country. We’ve done this before. Look at data privacy, where our bill is now being recreated in other states. Look at automobile emissions, where our choices drove nationwide change. We can lead on this issue in California. We just need Sacramento to build out its expertise.
Scott Shafer: So you’re talking about creating this think tank kind of thing to help advise lawmakers. But, you know, you can easily imagine different stakeholders, just take tech, having different agendas. I mean, how are you going to get people to agree on what the regulations should be, or even what the issues are?
Jonathan Mehta Stein: That’s true in every major policy field, right? You’re going to have a variety of stakeholders who are all going to have really strongly held views. We have to bring everybody into the process. This has to be a joint effort, including the tech companies, including the legislature, including civil society, including national experts who frankly don’t have all that much going on in Congress right now and are really happy to help California figure out the way. This is going to take a group and, importantly, an interdisciplinary effort in order to get this done right.
Scott Shafer: You know, yes, we are in the age of AI now. But, you know, the Obama campaign in 2008 was very, insightful and using cutting edge at the time, technology to reach voters, using text messages and that sort of thing. Obviously, Donald Trump, used it in 2016. So are you just saying this is part of an evolution or are we in a whole new era?
Jonathan Mehta Stein: I would say we’re in a whole new era. The fact of the matter is that those tech tools were really, really effective at helping people reach voters. And that’s actually a useful and positive function for AI going forward. I think there are absolutely positive uses of AI in this space, including helping elections officials find new efficiencies in election administration, helping under-resourced campaigns reach voters more effectively, and helping GOTV efforts target voters more effectively.
Jonathan Mehta Stein: So we don’t want to disrupt any of that. But the ability to deceive voters and to destabilize our information ecosystems is a quantum leap from anything we have seen in past elections. And the real fear, Scott, is that people will begin to not know what images, audio, or text they can trust, and they’ll retrench into tribalism. They’ll say, I’m going to start believing everything that confirms my biases, and I’m going to reject as fake anything that challenges them.
Scott Shafer: Yeah. And of course, we’re living now in this media ecosystem where, you know, maybe in the past media organizations would debunk things right away; we can’t really count on that in this moment that we’re in. You say in this white paper that you recently published, Jonathan, that this problem you’re describing, with AI, et cetera, is particularly extreme at the state level as opposed to the national level. Why do you say that?
Jonathan Mehta Stein: Right. So the white paper is called Democracy on Edge in the Digital Age: Protecting Democracy in California in the Era of AI-Powered Disinformation and Unregulated Social Media. The reason why it’s particularly extreme in California, or at the state level, is because at the federal level, we have a number of major institutions in civil society, nonprofits and think tanks and so forth, that have invested themselves in building this expertise over time.
Jonathan Mehta Stein: And at the state level, you have a emaciated, policymaking, a regulatory infrastructure that can assist policymakers. Now, that may sound curious in California, where we may.
Scott Shafer: It that’s like an interesting.
Jonathan Mehta Stein: Word. We have this enormous amount of Tex-Mex expertise in California, right, that the the tech companies that have driven so much innovation and so much productivity are located here. The tools that are posing a threat to our democracy in March were created here. And yet policymakers in Sacramento most often have to go to the tech industries, trade industry associations and its lobbyists when it has questions about regulating the tech industry.
Jonathan Mehta Stein: And too often the answer is that self-regulation will solve the problem. So what we need in California, and frankly, we need in every state but California has the opportunity to lead, is beginning to build this interdisciplinary expertise that can provide unbiased expertise to lawmakers as they try to take positive action.
Scott Shafer: Does that word unbiased trouble you at all? Because we all have biases, right? And I just wonder, I mean, I don’t know who’s choosing the people that are going to be part of this. Maybe it’s you.
Jonathan Mehta Stein: It’s a great point. What we mean by unbiased is: informed by the tech industry’s business models and needs, but independent of industry and not beholden to any private stakeholders.
Scott Shafer: But some of them will be from the industry.
Jonathan Mehta Stein: Absolutely. Our advisory councils include former and current tech executives who can advise us on how to get regulation right, but they’re balanced by law school deans and civil rights experts and campaign professionals and a whole host of other folks that can create that interdisciplinary aspect we’re looking for.
Scott Shafer: What do you think we’ve learned from the regulation or lack of regulation or self-regulation of social media? You know, things like Facebook and Twitter that can be applied or need to be applied right now.
Jonathan Mehta Stein: It’s such a great question. We are accustomed in this country to the idea that if you as an industry create products that pose a danger to us in some way, as productive or helpful as they may be, you face oversight. The airline industry, the pharmaceutical industry, food producers, makers of home electronics: they are accustomed to regulation, inspection, testing.
Jonathan Mehta Stein: They have to make sure that their products are good for people, or at least won’t harm them before they go to market. With tech, there is no similar expectation from the industry, from government or from the public. It is time, I think it is clear. It is time for the era of totally unregulated tech to come to an end and for the industry, for government and for civil society to work together to figure out the best way to use these tools.
Jonathan Mehta Stein: There is data upon data at this point showing, for example, teen mental health is being disastrously affected by social media platforms. No one is looking at those impacts and how to mitigate them. Before products are released. We have to look. We have to reexamine our assumptions in this area.
Scott Shafer: You often hear, I’ve heard Governor Newsom say, well, we have to have regulations, but we don’t want to stifle innovation. So how do you thread that needle?
Jonathan Mehta Stein: There’s a lot of needles to thread in this particular case. We have to balance the limitations of section 230 at the federal level. We have to balance.
Scott Shafer: What is that.
Jonathan Mehta Stein: Section 230 is a federal law, a choice made in the ’90s by Congress, that says tech platforms or social media platforms cannot be held accountable for what is posted on their platforms. So if you run a blogging site and I put instructions on how to make a bomb on your blogging site, I would be the one held accountable, not you. You are just a platform.
Jonathan Mehta Stein: What that means is that, even in state law, you can’t hold the tech industry or social media platforms accountable for disinformation or hate or whatever the case may be. You can encourage them to moderate that stuff or to fact-check that stuff, but you can’t hold them accountable. So that takes a whole set of tools off the table. The First Amendment requires us to respect free speech. And then there’s innovation. We really don’t want to, I mean, Governor Newsom is right about that.
Jonathan Mehta Stein: We don’t want to stifle innovation. And there are ways, as I mentioned earlier, that I can be used to actually make elections more effective or to make GOtv more effective. So we have to figure out how to walk through this very complicated obstacle course. And the white paper released last week is actually providing that road map.
Scott Shafer: We love to say California is, as Newsom says, where you see the coming attractions for the rest of the country, right? And nobody knows anything until we do it. But I’m wondering, are there things, and you do mention in the white paper, I think, Texas, Michigan, maybe, there are things happening in other states. And then there’s the EU, the European Union, which is really much more aggressive on these kinds of things, not just on tech, but all kinds of consumer-related things. What are you learning? What is there to learn from them?
Jonathan Mehta Stein: EU the EU is really where everyone should be looking for the most thoughtful, policymaking in this area. They are years ahead of the United States. I can speak to what’s going on in other states, but, by and large, what the states are looking at right now is, prohibition on political deepfakes. Usually targeting candidates because politicians are the ones passing these bills, and they’re most sensitive to deepfakes that target them.
Jonathan Mehta Stein: And so, what we’re seeing coming out of the states is you can ban or you can label political deepfakes that target candidates within a certain amount of time before an election. The EU is doing something or attempting to do things that are considerably more sophisticated and considerably more robust. And one idea that they really like is, and will make part of the AI act is requiring generative AI platforms to embed within their AI tools, provenance markers, which is sometimes called watermarking.
Jonathan Mehta Stein: The idea that if they create, let’s say, a synthetic, that’s a nicer way of saying fake video, the user who reads it or views it should be able to click on, an embedded link or something of that kind that shows them what AI tool created. This will confirm, first of all, that it’s synthetic.
Scott Shafer: Wouldn’t it be better to just automatically have that pop up? You’re asking people to take an extra step.
Jonathan Mehta Stein: Absolutely, yes. So there are a lot of ways you can handle this. One way we’re interested in is creating the best possible watermarking, so you have this information, where a video was created, who created it, what tools were used in creating it, available to the viewer immediately. But there are concerns about visible watermarks, putting something on the surface of an image or a video, because they can be Photoshopped off. And then, even more worryingly, they can be Photoshopped onto real content, casting suspicion on a real video or real image.
Jonathan Mehta Stein: So what we’re interested in is imperceptible embedded metadata in generative AI content. And then, using that watermark, we would require social media companies to flag for their users posts that include an image or video or text that, as they know because of the watermark, has been synthetically created, is inauthentic or fake.
Scott Shafer: Jonathan, so much of what you’re saying and what you’ve written in this white paper is terrifying, basically. And I’m wondering, what gives you hope, or what should give us hope?
Jonathan Mehta Stein: This problem has existed for many, many years. Social media has been largely unregulated. It has been, I think, declining in terms of the quality of democratic discourse in those spaces. We’re seeing more and more evidence that it’s impacting our wellbeing and our mental health, particularly among teens and teen girls. We’re finally at a moment where there is a critical mass, and critical interest in taking action. So we have an opportunity. That’s what gives me hope. We have an opportunity to take action for the very first time.
Scott Shafer: And, you know, here we are. It’s January 2024. We have a primary in March, a big election in November. Is it possible, but also necessary, to get some of this done before then?
Jonathan Mehta Stein: Yes and yes. And one of the things that aids us is the fact that the public is wildly in support of taking action. So there was a poll from the Berkeley Institute. Yes, yeah, from IGS in November that showed that 84% of Californians indicate that they are concerned about the problem, about about the impact AI and disinformation may have on this year’s elections.
Jonathan Mehta Stein: And that includes over three fourths of every possible group men and women, all regions, all races, all ages, and importantly, all political parties. And a similarly enormous majority of Californians think it is the responsibility. That’s a, quote, the responsibility of state government to take action to fight back these threats. You don’t see unanimity like that on any issue in America today. We have the public behind us. Yeah.
Scott Shafer: All right. Well, thank you so much for flagging all of this and working on it. And I think we all, based on your poll results, we all are three quarters of us. Anyway, really hope you’re successful this year. And, you know, the sooner the better. Thank you so much for joining us. Jonathan Mehta Stein: from California common cause.
Jonathan Mehta Stein: Thanks, Scott.