San Francisco Takes on Makers of AI-Generated ‘Deepfake’ Pornography in Landmark Lawsuit

City Attorney David Chiu speaks during a press conference at City Hall in San Francisco on Aug. 15, 2024, about a lawsuit against websites that create and distribute non-consensual AI-generated pornography. (Beth LaBerge/KQED)

Updated 1:55 p.m. Thursday

As artificial intelligence booms, the technology has also given rise to “deepfake” pictures, including manipulated nude images of women and children.

Nonconsensual pornographic photos of anyone, including celebrities like Taylor Swift and preteens in California, can be generated in a few clicks. In February, faked nude pictures of 16 eighth-grade girls circulated at a Beverly Hills middle school, prompting the expulsion of five fellow students accused of making them.

The trend has left many people, including San Francisco City Attorney David Chiu, “horrified.” On Thursday, he announced that his office had filed a groundbreaking lawsuit against 16 of the largest websites that create and distribute nonconsensual AI-generated pornography, setting up a major test of the laws that currently govern the burgeoning technology.

“We have to be very clear that this is not innovation — this is sexual abuse,” Chiu said in a statement shared with KQED. “This is a big, multi-faceted problem that we, as a society, need to solve as soon as possible. We all need to do our part to crack down on bad actors using AI to exploit and abuse real people, including children.”

Chief Deputy City Attorney Yvonne Meré first brought the issue to Chiu this year after seeing news coverage of young girls who were targeted by these deepfake images.

She was “horrified and fearful thinking of my own 15-year-old daughter and how she would feel if her autonomy was stripped from her, her image distorted, her privacy wholly disregarded,” Meré said during a press conference Thursday. “And as a lawyer I was frustrated. How can it be that this pernicious practice can go on?”

Chief Deputy City Attorney Yvonne Meré speaks during a press conference at City Hall in San Francisco on Aug. 15, 2024, about a lawsuit against websites that create and distribute nonconsensual AI-generated pornography. (Beth LaBerge/KQED)

The suit, which Chiu’s team believes is the first government lawsuit of its kind, seeks to stamp out websites that allow users to create “nonconsensual sexually explicit images” or “undress” women — and, in some cases, children.

“The real novelty here is that they’re focusing on the companies that create this stuff and not individuals,” Jennifer King, the privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, told KQED.

While individuals targeted by falsified nude images have pursued legal action before, the goal has generally been to get the images scrubbed from the internet or to hold the person who created them accountable.

“The real difference here is that San Francisco’s going after the actual companies that enable the creation of the material,” King said.

The companies being sued by San Francisco made no effort to hide the nonconsensual nature of the explicit material their users could create, according to the city attorney’s office. One of the websites says: “Imagine wasting time taking her out on dates when you can just use [website] to get her nudes.” Another asks users: “Have someone to undress?”

The sites make use of open-source generative AI models that are available to the public to adapt and train on specific content, according to the city attorney’s office.

“Even where the creators of these open-source models subsequently incorporate safeguards into new releases of the model, earlier releases — and fine-tuned versions trained to generate pornographic content — continue to circulate online,” the complaint reads.

Chiu’s office alleges the companies violate state and federal laws against deepfake, revenge and child pornography. While several states have proposed or enacted legislation to criminalize such nonconsensual AI-generated images, the lawsuit asks the San Francisco Superior Court to order the sites to shut down.

Whether this is possible remains to be seen, said professor Colleen Chien, co-director of the Berkeley Center for Law and Technology.

In bringing the complaint in state court, the city attorney’s office alleges that the website operators engaged in unlawful and unfair business practices in San Francisco and elsewhere in California. Chien expects the defendants to fight the notion that their violation of laws in San Francisco’s jurisdiction can force them to shutter worldwide.

Even if the suit is successful, King said, it likely cannot prevent the creation of this kind of material by private users, since the technology already exists. But it could set a new legal framework for fighting the issue. The design of the suit will test how effective existing laws surrounding AI are at blocking the companies that make this technology.

“What this highlights is that though there has been a flurry of new laws, what it might actually need is more law enforcement,” Chien said. “These laws that they’re drawing upon have been out there, but it’s not proven how much they will actually provide protection.”

California was one of the first states to pass anti-deepfake legislation in 2019, before the current frenzy over AI: one bill dealing with pornography and the other with political elections. Currently, state lawmakers are set to decide Thursday whether Assembly Bill 1831, which would expand the scope of these provisions to include material altered or generated with AI, lives to face a floor vote in the Senate or dies. They’ll also decide if Senate Bill 1047, a far-reaching proposal that would require developers of the largest AI models to safety test their technology, will go to an Assembly floor vote or be killed.

The San Francisco complaint targets two U.S. companies, one based in England and two based in Estonia, as well as a resident of Estonia and 50 unnamed John Doe defendants whose true identities are not yet known. All operate websites that produce nonconsensual AI-generated images; the sites have been visited a combined 200 million times through the end of June, according to the city attorney’s office.

“While the defendants are all over the place — they’re in England, they’re in Estonia and they’re in other places — the plaintiffs are in California,” Chien said. “And you also obviously have the biggest platforms sort of releasing the open source tools that are underlying these businesses in California.”

In addition to shutting down the sites, the city attorney’s office is seeking a court order for the defendants to pay the cost of the lawsuit and a civil penalty of $2,500 for each violation of state law against unfair business acts and practices.

“It’s really tough and it’s really unfair. It just shouldn’t be possible,” King said. “This is being done to teenage girls at school. … It impacts them in real life in a very focused way. It’s not just like, ‘random people think I’m naked on the internet.’ It’s my entire peer group.”

KQED’s Rachael Myrow contributed to this report.
