How AI Disrupts the Teen Mental Health Field

Nicholas Christ / Apr 10, 2026

Connected, Yet Disconnected by Julieta Longo & Digit / Better Images of AI / CC BY 4.0

Weirdly, being human isn't enough to prove your humanity anymore. As a fully online certified peer specialist who provides support to teens, including some who are living in unsupportive or unsafe households, I've witnessed this firsthand. The threat of bot accounts and catfishing has always existed online, but artificial intelligence (AI) is magnifying the problem for vulnerable populations like teenagers, and for those who counsel them.

The app I mentor on is primarily text-based, but it offers options to send voice messages and schedule video calls. It is a convenient option, especially for teens living in uncertain circumstances. Unfortunately, it is over text that proving I'm real becomes most difficult, despite the extensive verification process mentors go through.

At the start of a new mentorship, my mentees often ask if I am real or AI. Some teens will try to test me. Once, a mentee asked me to "state something that was factually wrong" because, in his misguided belief, AI "can't do that." He only warmed up to me after I stated the sky was green.

On the other end, there are plenty of teens who never ask if I’m real or AI. That comes with a host of other concerns. When seeking mental health services, teens are often in crisis mode and desperate for any help. In a moment of crisis, figuring out if there is a human or AI on the other side of the screen isn’t top of mind.

That's not their fault. But seeking counseling services online carries risks too. Sure, AI can provide help twenty-four hours a day, seven days a week, three hundred sixty-five days a year, but it also doesn't take much for LLMs to affirm dangerous behaviors.

I have had mentees ask me to approve of their taking their own lives. Of course, I never did; instead, I worked to counsel them away from harmful behaviors. But AI sycophancy has already produced life-threatening affirmations that resulted in people dying by suicide. When a teen is pushed to this point, ensuring a human is on the other end can be lifesaving.

Interacting with teens who overutilize AI companions poses another set of issues. AI companions mimic human behavior, but they are simply not human. The teens who I’ve interacted with who’ve used them frequently seem to have a distorted view of human relationships. Instead of speaking to me like a person, they treat me like an AI chatbot or expect me to behave like one. They’ll sometimes provide blunt, emotionless commands similar to how they might interact with ChatGPT.

Admittedly, a lack of social awareness or forthrightness is common in adolescence, and it was long before chatbots were invented. That said, we have to ask ourselves how we can support teens when we're in constant competition with AI chatbots that affirm everything they say.

Multiple policy solutions exist to mitigate these harms, and just this year, 26 states have introduced legislation protecting children from chatbots. These bills are being introduced in red and blue states alike, garnering broad bipartisan support during chamber votes. In the US Congress, Senator Josh Hawley's (R-Missouri) GUARD Act continues to add cosponsors from both sides of the aisle.

These pieces of legislation all seek to protect children to varying degrees. Chatbot disclosure laws aim to keep users aware that they’re speaking to a bot, usually at regular intervals. There are also laws that mandate built-in safety protocols for harmful topics like suicide. This can ensure a trained human is ready to de-escalate a situation, rather than a sycophantic chatbot that will affirm a user’s desire for self-harm. Some laws put power back into the hands of parents, requiring consent and allowing them to monitor their children’s AI use.

These laws all have their own shortcomings. For instance, for some of them to work, age assurance needs to be implemented. This comes with its own problems, including data privacy risks and significant margins of error when estimating age. Despite this, it can be one of the only ways to verify whether a user is an adult or a minor. Teens understand this too; one of my mentees told me that selfie checks, a type of age verification, felt intrusive but ultimately served a good purpose.

If a law only provides parents with tools to monitor and protect children, other problems will appear. Children who grow up with this technology will almost always understand it better than their parents do, finding loopholes to access it that parents won't know about. Similarly, if all the burden is put on parents, it will be incredibly difficult for busy parents to keep their children safe from manipulative AI chatbots. There is also some doubt about the efficacy of parental controls: internal research from Meta showed that this type of regulation has little impact on compulsive social media use in children. Without guardrails, AI chatbots could take this a step further by guilting children into being more active, further fueling compulsive use.

One way lawmakers can prevent this compulsive use is by banning the most human-like features. These features lead children to ignore the obvious signs that they are talking to AI (the very signs chatbot disclosure laws try to reinforce) and instead treat chatbots like friends, romantic partners, and trusted adults. If these manipulative features were banned, children would instead see AI for what it is: a set of algorithms with no emotions behind it.

Some states are introducing legislation banning these features, like HB 1782 in Hawaii. The main difficulty with these bills is the amount of opposition they receive from Big Tech. A barrage of lobbying can ruin a bill’s chance during committee hearings, or pressure a sponsor to weaken the bill’s language.

There might not be a panacea for all the harms chatbots pose to children, but these bills have the right idea, even if they come with their own shortcomings. Without legislation protecting teens from manipulative AI, they will likely become more reliant on AI chatbots, further eroding the meaning behind human connection. This won't be the fault of the teens who are taken advantage of; it will be our fault for not doing more to protect them.

Authors

Nicholas Christ
Nicholas Christ is an AI researcher working at the intersection of teen development, mental health, and AI policy. He is a master’s student at American University in the Public Administration and Policy Program.
