
AI Companions Can Be a Seductive Risk for Teens. Senators Want More Guardrails


U.S. Sens. Alex Padilla (center), Tina Smith (left) and Thom Tillis at the launch of the bipartisan Senate Mental Health Caucus in October 2023. Padilla is one of two caucus members calling for tighter regulation of AI chatbots aimed at teenagers. (Courtesy of U.S. Sen. Alex Padilla)

After Megan Garcia’s 14-year-old son died by suicide last year, she said she thought he had been spending the bulk of his time on his phone “talking to friends, playing games, looking at sports: the regular things that teenagers do on their cellphones.”

Instead, the Florida teen was having conversations with an artificial intelligence chatbot — and growing emotionally connected to it. Speaking to “CBS Mornings” shortly after suing Bay Area-based Character.AI (C.AI) and Google last fall, Garcia said she was blindsided by the intensity of the interactions her son had with C.AI’s chatbot shortly before his death.

“I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment,” said Garcia, who is a lawyer. “It makes me sad that this was my child’s first experience being in love or romance. That’s saddening to me.”

Garcia blames C.AI for her son’s death.


This week, U.S. Sen. Alex Padilla, D-Calif., co-founder of the bipartisan Senate Mental Health Caucus, and Sen. Peter Welch, D-Vt., sent letters to the CEOs of the companies behind C.AI and two other leading AI chatbots, Chai and Replika, urging them to do more to ensure their products do not contribute to self-harm or suicide among young users.

Although a couple of the companies recently announced new safety features, Padilla and Welch insist the reliability of those systems remains unclear, even as surveys show teens are turning to AI for answers about their personal lives, not just help with homework.

“The synthetic attention users receive from these chatbots (e.g., streams of expressive messages, sycophantic and agreeable responses, AI-generated selfies, and convincing voice calls) can, and has already, led to dangerous levels of attachment and unearned trust,” the senators wrote in the letters to Character Technologies of Menlo Park, Chai Research of Palo Alto, and Luka of San Francisco.

“Policymakers, parents, and their kids deserve to know what your companies are doing to protect users from these known risks,” the senators wrote, “given that young people are accessing your products — where the average user spends approximately 60–90 minutes per day interacting with these AI chatbots.”

AI companionship apps tend to be more permissive than better-known general-purpose chatbots like ChatGPT, Claude and Gemini, in part because their users often want to engage with them as romantic or sexual partners. On the Character.AI subreddit, it doesn’t take long to find posts like: “How many of you here use character.ai for loneliness? I’ve had no friends or social life for about 10 years and rarely leave my house, character.ai has really helped me feel a little bit better.”

On Character.AI, users can create their own chatbots and give them directions about how they should act. They can also select from chatbots created by others that mimic historical figures and celebrities. The Florida teen, for instance, used a bot mimicking the “Game of Thrones” character Daenerys Targaryen.

“AI companions are kind of a sleeper issue for a lot of Americans,” said Danny Weiss, the chief advocacy officer for Common Sense Media. “Many parents don’t even know that their kids might be developing relationships with machines.”

Chelsea Harrison, head of communications at Character.AI, told KQED that the company welcomes working with regulators and lawmakers and has been in contact with Padilla’s and Welch’s offices.

“Over the past year, we’ve rolled out many safety features on the platform, including Parental Insights, which provides parents and guardians access to a summary of their teen’s activity on the platform,” Harrison wrote.

Harrison added that the company serves a separate experience to teenagers “that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”

While it’s unlikely that Democratic lawmakers will move the Republican-led Congress to regulate AI, Weiss applauded Padilla and Welch for drawing attention to the issue and noted that the dozens of AI-related bills introduced in Sacramento stand a greater chance of becoming law.

“Right now, there are no guardrails on artificial intelligence companions,” Weiss said. “That is ridiculous. This technology is amazingly powerful. It’s seductive. It’s exciting.”

Assemblymember Rebecca Bauer-Kahan, in partnership with Common Sense, introduced AB 1064, which would establish a standards board to assess and regulate AI technologies used by children.

Senate Bill 243, introduced by Sen. Steve Padilla, D-San Diego, will be heard by the Senate Judiciary Committee this Tuesday. The measure, which Common Sense also supports, would require chatbot operators to implement critical safeguards to protect users from the addictive, isolating and influential aspects of AI chatbots.

Ahead of the hearing, Padilla will promote the bill at a press conference, where he’ll be joined by Megan Garcia, the bereaved Florida mother.

To help a young person who may be struggling with depression or anxiety, dial 988 to reach the national Suicide and Crisis Lifeline.
