Perspective

Regulate Companies, Not Children

Salil Tripathi / Apr 24, 2026

In late March 2026, two juries in the United States delivered what many parents, child rights advocates, lawyers, human rights experts, and politicians had long been demanding: accountability from social media platforms. In California, a jury found Meta and YouTube liable on all counts in a landmark trial in which the plaintiffs’ lawyers argued that the platforms were addictive and caused severe harm, an argument that had been used successfully in earlier cases against tobacco companies.

Unlike other forms of media, social media platforms have largely operated without safeguards. Television programming, films and video games are subject to age ratings and, in some cases, time-based restrictions on what children can see, or when, with content assessed on a case-by-case basis. But social media remains a largely unregulated space. While platforms have age restrictions for joining, enforcement is weak and children routinely bypass these barriers. As a result, children are exposed to the full spectrum of content without the protection long considered standard in other media environments.

The jury concluded that both companies had negligently designed their platforms to be addictive, finding they knew the dangers and failed to warn users. The platforms were ordered to pay a combined $6 million in compensatory and punitive damages to a single plaintiff, a woman now 20, who began using Instagram at 9, and YouTube at 6.

This verdict followed a case in New Mexico, in which Meta was ordered to pay $375 million for failing to protect children from sexual predators on Facebook and Instagram. For the more than 1,600 plaintiffs in linked cases, including families who lost children, these verdicts felt, in the words of one parent outside the Los Angeles courthouse, like “a complete validation of what we've been screaming from rooftops for years.”

The cases have been rightly compared to the legal reckoning with Big Tobacco in the 1990s. As was the case then, the current revelations point not only to corporate negligence, but to calculated disregard. Both cases found that the companies knew and understood that young users were at risk and paid little attention to the consequences.

It is a feature, not a bug

The backlash against Big Tech over children’s well-being has been building for some time. The core accusation is not simply that social media can cause harm, but that products are intentionally designed to lure young people and keep them engaged. Infinite-scrolling feeds, auto-play, algorithmically curated recommendations, and push notifications draw users in, and each successive reel, story, or video is often edgier than the last. The aim is to maximize engagement, and hence advertising dollars, regardless of users’ age or potential vulnerability.

Scientific evidence confirms that children’s frontal cortices are still developing, limiting their capacity to assess, understand, and contextualize what they see. Society restricts children from marrying, voting, driving, drinking, smoking, seeing certain films, joining the military, buying weapons, or engaging in sexual activity until they reach a certain age. But online, virtually all content is accessible, given how easy it is to bypass age restrictions. Dangerous content, such as “thinspo” material that glorifies eating disorders like anorexia and bulimia, can normalize self-starvation, and content on some platforms can nudge young people toward suicidal ideation, even suggesting how to carry it out.

Access to children’s data compounds these harms. Platforms harvest and monetize detailed behavioral profiles, including information about what content triggers emotional responses. Children and their parents are rarely fully informed, let alone able to meaningfully consent.

Even more concerning, the New Mexico verdict specifically focused on Meta's failure to prevent sexual predators from using Facebook and Instagram to target children. This was not an isolated case. Law enforcement agencies worldwide know of the industrial scale of online child sexual exploitation. The French case involving Telegram highlights the role that encrypted messaging can play in enabling such activity.

Chatbots as predators

Social media is not the only frontier. The emergence of AI companion chatbots has introduced a new and even more intimate vector of harm. One study shows that three out of four teenagers use AI companions for regular conversations. In recent years, children in the United States and Europe have died by suicide after interactions with such systems. In one case, a chatbot failed to direct a 14-year-old to counseling resources or other help when he expressed suicidal thoughts; he later died. A 13-year-old girl died in a separate case. In another instance, ChatGPT reportedly helped a young boy draft a suicide note; he later died by suicide.

A Stanford academic has warned that chatbots are not capable of responding appropriately when users express serious safety concerns, such as suicidal ideation. As with many companies in other industries (think of oil and gas, or the garment sector), AI companies have reacted only after crises threatened their reputations. Character.AI announced a stronger safety policy, but only more than a year after a young person died by suicide following months of intense and troubling emotional interactions with a Character.AI chatbot.

The liberating potential of the internet

To be sure, the internet can be liberating. Prohibition and blanket bans are blunt instruments. The online world can be profoundly useful for children. For millions of children living in societies that severely restrict individual rights, the internet can offer a window to a more open and inclusive world. Teenagers grapple with new emotions, including stress, physical changes of puberty, relationships, sexual and gender identity, anxiety, and loneliness. In more traditional societies or authoritarian settings, they may not receive support from adults or their community. At such times, the internet becomes their lifeline. Stopping children from accessing it altogether is profoundly wrong.

Yet this reality underscores a central problem: who is responsible for ensuring children’s safety online? The US approach, leaving it almost entirely to platforms to self-regulate while shielded by Section 230 of the Communications Decency Act, is failing to protect children. The California and New Mexico verdicts show the courts stepping in because legislators won’t. Many existing and proposed laws are too broad and risk cutting children off from practical, useful information. Australia has banned children from social media platforms such as Facebook, even though those platforms host pages and profiles offering accurate information and advice; free speech advocates are critical of such moves and have cautioned against sweeping regulations. As Michael O’Flaherty, the Council of Europe Commissioner for Human Rights, puts it, regulate platforms, not children.

The European Union’s Digital Services Act aims to create a “gentler” internet for young people, following several cases of suicide in Europe as well. The DSA requires platforms accessible to children to implement age verification, set minors’ accounts to private by default, ban profiling-based advertising to minors, and conduct mandatory risk assessments. In July 2025, the European Commission published detailed guidelines expanding these obligations. Since then, TikTok has removed a rewards program criticized as addictive. However, the European Parliament has also allowed a law to lapse, citing privacy concerns, that enabled companies to scan content on their platforms to detect child sexual exploitation. The EU needs a more coherent and coordinated approach to address the issue.

The DSA is not perfect — its guidelines are non-binding, and enforcement remains uneven. But it represents what effective regulation needs: specific obligations, public accountability, and real consequences.

What comes next

Jury verdicts, however significant, cannot transform systemic behavior. A $6 million award is, to Meta and YouTube, a rounding error. What is needed is coordinated action from governments, companies, and civil society.

Governments must go further than the courts, which are stepping in because the state hasn’t acted. The lesson of California and New Mexico is not that litigation is the answer, but that legislators have left courts to do work that only legislators can do properly. The starting point is legally binding standards for platform design, age verification, algorithmic transparency, and data protection for minors. The DSA is a model: not yet perfect, but the most serious attempt so far, one that others can emulate, adopt, and strengthen.

Regulation also demands teeth: independent bodies with genuine authority to audit compliance and investigate harm in real time, not merely after children have died. In the United States, the Kids Online Safety and Privacy Act (commonly known as the Kids Online Safety Act) has stalled in the Senate even though it has bipartisan support (the latest version has 75 co-sponsors). This should not be an insurmountable problem. Rather, it reflects a failure of political will. And when a company’s revenues run to hundreds of billions of dollars, civil penalties alone are not going to be sufficient. Criminal liability for executives who suppress evidence of harm should be on the table.

Companies cannot continue to treat content moderation as a line item to be managed or a compliance box to be ticked. OpenAI's own logs in the Raine case showed a system that flagged a child's distress but allowed the harmful conversations to continue. That is moderation in form, not substance. Keeping a paper trail when evidence emerges is one thing; acting on the warning signs is what is required. Accounts that groom, exploit, or radicalize children must be suspended swiftly and permanently, not quietly reinstated after appeals that outlast the news cycle. The people who understand these harms best are children's rights organizations, mental health professionals, teachers, and young people themselves; they must be consulted at the product design stage, not only when a product is being piloted or tested to refine safety policies that may be unable to prevent harms or deaths.

Civil society has an important role to play in accountability. Independent research, free from tech company funding or influence, must continue to document harm rigorously. Platforms need to be repeatedly tested in different contexts, algorithmic behavior tracked, and the findings made public. An informed public is the most durable check on corporate power.

Children have an unequivocal right to access the internet. The task is to regulate companies, not children: supervise the algorithms, set boundaries on data, place real safeguards on product design, and hold companies accountable. The jury verdicts show one way forward. Now, politicians must act, and civil society must demand accountability.

Authors

Salil Tripathi
Salil Tripathi is a Senior Advisor at the Institute for Human Rights and Business. Salil's expertise includes human rights themes such as discrimination and technology related to corporate responsibility. He is also a Senior Associate of the University of Cambridge Institute for Sustainability Leade...
