
Parents, Educators 'Out of the Loop' as Teens Experiment With Generative AI, Study Finds


Research shows that about 70% of teens use at least one kind of AI tool. (Tatiana Meteleva/Getty Images)

New research from the nonprofit Common Sense Media reveals that teenagers are using a wide range of AI tools to study and help with their homework, whether or not the grown-ups in their lives are aware of it. However, the research also finds that when schools and parents engage with children and educate them about the benefits and risks of AI, those children take note.

The report, The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School, is based on a nationwide survey, conducted from March to May, of 1,045 adults who are parents or guardians of one or more teens aged 13 to 18, as well as responses from 1,045 teens.


“We really wanted to first start with the landscape, which is, what are adolescents doing? What platforms are they using, and what is the purpose to which they put them?” said Amanda Lenhart, head of research at Common Sense, which examines the impact of technology on young people. “Then we wanted to drill a little bit more specifically into educational uses. So how are they using it for school? How are teachers talking about it? How are schools talking about it with parents?”

The study did not survey educators, but through the experiences of parents and teenagers it sheds indirect light on several key educational questions.

About 70% of teens have used at least one kind of AI tool. A little over half — 51% — have used chatbots or text generators like ChatGPT, Microsoft Copilot and Google’s Gemini, Common Sense found. More than half (53%) of students say they use AI for homework.

“They don’t have the guidance to know what it is they’re supposed to be doing,” Lenhart said. “Thirty-seven percent say they don’t even know if there are rules at their school. That leaves parents who are starting to walk into experimenting and using AI in their own lives really at sea about what their kids are doing.” She added that about a quarter of parents don’t think their kids are using AI, even though those very same children told Common Sense that they are.

“I’m super sympathetic to the challenges that teachers and administrators find themselves in. But we need to talk about this.”

In response to the Common Sense Media study, Melissa Donnelly-Gowdy, a teacher at River City High School in West Sacramento, wrote to KQED that she may reference it during her English 11 rhetoric unit next term.

“I use AI every day — mainly ChatGPT or Google Search AI in class with students and to prepare lesson plans, differentiate instruction, or get sample writing based on my assignments,” wrote Donnelly-Gowdy, who has been teaching for more than 15 years. “I want students to use AI because it’s not going away, but I also want them to grow in their integrity and critical thinking skills.”

Donnelly-Gowdy said that earlier this week, students could use ChatGPT if they wanted or needed to for a hybrid “Modern Barbie Doll Pitch” assignment. She said most of the students used AI, but that a few “more storyteller-inclined” students created their own stories. She explained to them that this was one example of how AI could be used as a tool and a starting point for assignments.

Addressing concerns about AI being used to cheat, Donnelly-Gowdy said, “I have graded AI work and passed it through because I couldn’t find a way to prove it, but for the most part AI work is easy for me to identify. Students are going to use it to cheat, just like they have been using other things to cheat.”


Students who did have guidance reported approaching AI tools with more skepticism. Fifty-five percent of teens who said they had talked about AI’s benefits and pitfalls in school fact-checked the information they received, because they had learned about generative AI’s tendency to “hallucinate,” or deliver inaccurate or biased information. That compares with 43% of teens who had not had those discussions.

Another pair of surprising insights: Black students are substantially more likely than their white or Latinx peers to have their writing incorrectly flagged as the work of AI, but parents of Black teens are also more optimistic about the potential for generative AI to soften systemic inequalities in education.

Black teens were more than twice as likely as white or Latino teens to say that teachers flagged their schoolwork as being created by generative AI when it was not (20% vs. 7% and 10%), according to the study.

Despite this clear indicator of bias, “A larger percentage of Black parents are more likely to say, ‘I think this is going to help my child with skill acquisition. I think it’s going to help them in their future career paths.’ They just generally have a more positive sense of what AI can do,” Lenhart said.
