Perspective

AI Is Changing Teens’ Lives. Why Are They Being Left Out of the Debate?

Nico Fischer, Vicki Harrison, Caroline Figueroa / Apr 9, 2026

Teens in a circle holding smart phones. (Shutterstock)

A majority of teens in the United States regularly use artificial intelligence chatbots such as ChatGPT, Copilot and Character.ai, with most using them for homework and 12% for advice or emotional support, according to a recent Pew Research Center survey. While the latter might seem like a small percentage, it projects out to roughly 3 million US teenagers who use AI chatbots not just for school but as emotional confidants.

AI companies are already facing lawsuits from parents who allege their teens died by suicide after forming emotional attachments to chatbots. In response, as of February 16, dozens of chatbot-related bills have been introduced across at least 27 states, with many focused on protecting children.

Policymakers are rightly alarmed about the growing popularity of these tools among teens. But in nearly every high-level conversation we have witnessed, across intergovernmental agencies, universities and advocacy groups, the most affected voices remain underrepresented: young people themselves.

AI policy for youth will fail unless youth help design it.

We are youth-serving academics and a high school student. Through our ongoing research, which includes interviews and discussion sessions with dozens of teens and AI developers, we see how deeply AI is reshaping adolescence for many teens.

Teens tell us they ask AI for help with schoolwork as well as with navigating conflicts with friends or romantic partners, behaving in professional meetings and dealing with anxiety and depression. Some younger teens only started using AI recently. As other research has also shown, they often turn to AI instead of a friend, parent or professional, and in many cases prefer the AI's advice. At the same time, AI developers have reported struggling to keep up with the changing and nuanced ways teens use these tools.

Recent policy proposals, such as California’s SB243, focus on the most visible risks: suicide, self-harm and sexual content. These dangers are critical because systems like ChatGPT currently fail to detect serious mental health warning signs, like suicidal ideation, and encourage false perceptions of emotional relationships.

But there are also less visible risks at play, like AI’s subtle influence on values, expectations and decision-making during a crucial developmental period. If we fail to listen often and deeply to young people, regulations will not capture the full extent of how AI systems affect them.

US senators have proposed bans on AI chatbots for minors over safety concerns. However, most young people have told us that bans alone won't work. As digital natives, many would circumvent age-verification systems and could be pushed toward more poorly designed tools that are even less safe.

New York’s S5668 mandates that AI companion services notify users when they are interacting with AI. But many young people tell us this is easily ignorable. Instead, they advocate for AI companies to nudge them towards human connection or offline activities throughout AI interactions. We have also heard from teens that, while they don’t think AI services should replace therapists, these tools could be helpful to them when they can’t easily reach a psychologist, or when they are burdened by mental health stigmas. Policymakers should take these perspectives seriously.

Young people can and should help us come up with innovative design ideas and help adults understand the ethical dilemmas at hand. For example, what will teens do if a newly accessible source of support is taken away without being replaced by appropriate care? Without youth input, bans imposed without other risk-mitigation measures may unwittingly foreclose the chance to build safe systems.

What should policymakers do?

Young people alone may not have all the technical and mental health expertise needed to guide complex policy decisions; we also need input from mental health professionals and ethicists.

However, involving young people can lead to better policy and health outcomes, while also helping young people develop skills and competencies. We can apply successful youth co-creation models to AI governance. For example, in youth mental health care, models like allcove in California — where Vicki Harrison serves as program director and Nico Fischer has served as a youth advisor — and headspace in Australia are built on the understanding that youth lived experience informs better experiences for all. That same principle must now guide AI policy development.

Young people should be involved from early stages of agenda-setting and draft development to implementation and evaluation of any AI policy that may affect their well-being.

Policymakers can operationalize this through participatory methods: establishing teen advisory councils, holding co-design sessions, using digital platforms to gather young people's opinions at scale, inviting more youth to testify before committees and partnering with schools or community organizations.

Youth participation should reflect a broad range of ages, lived mental health experiences, cultural backgrounds and gender and sexual identities, with appropriate compensation and safeguards. Policymakers can also partner with youth-led coalitions that already have youth participation structures in place, like GoodforMEdia, Design It For Us and the Center for Youth and AI. (Vicki Harrison founded and co-leads GoodforMEdia.)

We know young people are using these new AI tools and we want to work towards a future where they can do so safely. To anticipate and eliminate the potential harms from these powerful new AI technologies, we need to start making decisions not for youth but with youth.

Authors

Nico Fischer
Nico Fischer is a high school senior in the San Francisco Bay Area. He is a youth advisor to the Stanford Center for Youth Mental Health and Wellbeing and chair of the County of Santa Clara Youth Task Force.
Vicki Harrison
Vicki Harrison, MSW is Program Director of the Stanford Center for Youth Mental Health & Wellbeing, where she directs a portfolio of programs, co-designed with youth, promoting early intervention and increased access to mental health support for young people.
Caroline Figueroa
Caroline Figueroa is a Commonwealth Harkness Fellow, a visiting assistant professor at Stanford University and Hopelab, and Faculty at Delft University of Technology in the Netherlands. She studies the responsible development of AI for youth mental health.
