Perspective

Does YouTube’s Algorithm Reward Risky Prank Content?

Dylan Moses / Mar 25, 2026

Prank content on YouTube can look like the Wild West sometimes. During my time on the platform’s Trust and Safety team, focused on YouTube’s harmful and dangerous content, I saw firsthand how far creators would push limits, from the bizarre to some of the most hateful, in pursuit of social clout. Many of those videos required swift removal from the platform to minimize the potential for “real-world harm.”

From my perspective, the removal principle was simple: allowing the spread of content in which people risk serious physical injury or death leads to an overall poor user experience, erodes trust in the platform, and creates all sorts of regulatory challenges. But there was always a quieter, less defined concern underpinning that work — the possibility that engaging users for clicks encourages other creators to do the same. Now, as a California jury weighs claims that social media platforms may contribute to eating disorders, isolation, and depression among minors, that once-background question feels harder to ignore. How far does the “potential” for real-world harm really reach?

Prank content isn’t inherently dangerous. For example, creators like Zachcray and Fred Beyer are largely nuisance pranksters who will walk into a Walmart or a Home Depot, locate the nearest intercom, and find ways to disrupt the store’s operations. While annoying for managers (and often funny for viewers), these pranks are generally harmless and tend to rack up tens of thousands of views in a matter of hours. But the “danger” in prank content isn’t always the prank itself — sometimes it’s the potential for escalation. Creators like TopNotchIdiots will often go beyond the Zachcrays and Fred Beyers in their quest for social clout. They engage unsuspecting third parties in personally invasive pranks in urban settings or marginalized communities and provoke an aggressive, potentially dangerous response from the pranked person. Here, too, though, the content ultimately appears harmless and can routinely rack up a hundred thousand views in a day.

The reason both sets of content are likely allowable under YouTube’s Harmful and Dangerous Content policies is pretty straightforward. These creators’ provocations aren’t depicting violence in any meaningful sense. Moreover, they receive a lot of attention and inspire other creators to make similar types of content for engagement on the platform. Yet, while that logic might explain why their content isn’t removed, it doesn’t capture the reality that both sets of creators’ provocations exist in an ecosystem that rewards escalation: the kind of escalation that can plausibly lead to dangerous results.

After the last 20 years of trust and safety post-mortems (and improvements), is it responsible to promote content in a way that predictably incentivizes real-world harm?

YouTube’s standards

YouTube’s mission is to “give everyone a voice and show them the world,” a principle that inevitably creates tension when certain forms of expression carry real-world risk. The platform’s Harmful and Dangerous Content Policy attempts to draw that line by prohibiting content that encourages activities posing a serious risk of physical harm or death, including pranks that make participants believe they are in imminent danger. At the same time, YouTube allows for exceptions where content has educational, documentary, scientific, or artistic value.

In practice, though, enforcement isn’t just guided by removal. Since 2019, YouTube has emphasized the Four Rs of responsibility: removing violating content, raising authoritative sources, reducing borderline material, and rewarding trusted creators.

Together, these frameworks help explain why much prank content remains on the platform. Nuisance-style pranks typically do not trigger removal because they do not create a credible sense of imminent physical danger. More aggressive, confrontational pranks come closer to that line, but can still remain if they are interpreted as having artistic or comedic value, particularly when no actual harm occurs and the interaction is ultimately revealed as staged or performative.

This logic explains why the content is not removed or even reduced. But it leaves an equally important question unanswered: if the policy framework permits this content to remain, what justifies its amplification?

Pranks, attention economy and micro-escalation

Content on YouTube usually doesn’t need a justification for promotion. When you look at YouTube’s mission statement — “giving everyone a voice” and “building community through our stories” — the justification for content promotion is self-executing. Creators want to share (and users want to see) culture unfolding as it exists in everyday life. This is how YouTube, a platform with nearly 3 billion monthly active users, can credibly say it is the “epicenter of culture” — it is one of the largest and oldest marketplaces for freedom of expression on the Internet.

Prank content has a home in that ecosystem. While not necessarily always the highest form of culture, comedic pranks are integral to many cultural rituals around the world. Their utility lies in their potential to increase social bonds, create shared joy and improve social reciprocity amongst community members. Importantly, pro-social pranks are “more commonly an effort to bring a person into a group.”

But pranks performed by creators like Fred and TopNotch don’t serve that pro-social purpose. Instead, the person being pranked is essentially a prop for algorithmic engagement. Put another way, the whole point of these pranks is to disrupt and provoke unsuspecting people in department stores, libraries, and marginalized communities in order to gain likes, comments, subscriptions, and views. YouTube, in its role as the “epicenter of culture,” recommends and rewards this artistic-cultural expression.

This is the hallmark of the so-called “Attention Economy.” First, platform companies like YouTube, Facebook, Snap, and TikTok establish the marketplace for creative expression. Then, they establish the rules for the production, distribution, exchange, and consumption of both creators’ content (content/Ads policy) and users’ attention (algorithmic ranking, recommendations). From there, creators and advertisers all engage in “rivalrous” competition for user attention. That rivalry brings a particular risk with prank content that platform policies don’t immediately appear to account for.

Prank content creators ranging from Fred to Deda Mac to TopNotch perform micro-escalations — the use of incrementally novel and provocative stunts to gain attention — to increase their standing on YouTube. Initially, they do this by performing similarly disruptive and provocative pranks. However, like many engagement-based systems, the novelty of the prank plays a critical role in the platform’s recommendation system. Think about it: users don’t want to see the same walkie-talkie pranks over and over again. They want something new. They want creators to “show them the world” of prank content. This invariably culminates in a platform-enabled creators’ arms race, where one creator performs micro-escalations to outshine the others. Viewed this way, creators escalating from walkie-talkie pranks, to fake “thefts” and “kidnappings,” to provoking dangerous altercations with unsuspecting individuals isn’t simply comedy or art — it’s a response to the conditions set by the market’s regulator.

So, what justifies promoting this kind of content on YouTube? The answer might simply be: it’s not immediately harmful, there’s some comedic value to it, and it’s what users want to see. But the risk in standing behind that justification is to passively endorse the increasing escalation towards potential for real-world harm.

Platform companies certainly don’t need to justify why or how they promote prank content. As the Supreme Court recently explained in Moody, when platforms exercise discretion over whether to remove or promote certain content, they are “engaged in [expressive] activity.” The law generally protects that activity and that’s a good thing. But it’s unclear if the law protects platform companies’ expressive activity when it’s alleged that their activity foreseeably leads to real-world harm.

Courts appear to be grappling with this question. In recent years, federal courts in Florida, as well as in the Third and Ninth Circuits, ruled that platform companies could be held liable for the deaths of minors allegedly caused by these companies' engagement and recommendation systems, despite their bedrock Section 230 and First Amendment defenses. And as mentioned at the outset, Meta and YouTube are now facing a jury trial to decide whether the companies were legally responsible for causing mental health disorders in a minor due in part to their personalized recommendation feeds.

Which brings us back to prank content from creators like Fred and TopNotch. If the cases above are surviving traditional legal defenses that have buttressed platform companies’ affirmative design choices for decades, that suggests legal liability could foreseeably attach if the promoted content does, in fact, lead to real-world harm. At that point, courts will likely be central to investigating what YouTube’s role was in promoting it, and what steps it took to limit the micro-escalation that the platform’s architecture predictably accelerates.

“Allowing” vs. amplifying

YouTube’s policies draw a defensible line. Prank content is allowed so long as it does not clearly create a risk of imminent serious harm, and even then, exceptions exist for content framed as artistic or comedic expression. That framework explains why this material remains on the platform.

But it does not explain why it is promoted.

The distinction matters because the degree of promotion the platform exercises ultimately changes its responsibilities. A platform that merely permits content is different from one that actively prioritizes, distributes, and rewards it at scale. And in a “regulated” environment where attention is the currency and provocative escalation is a creator’s competitive edge, even promoting “allowable” content can increase the likelihood of real-world harm.

YouTube does not need to justify its editorial decisions as a matter of law, and there are good reasons to be wary of any regime that would require it to do so. But as legal challenges mount and courts take a closer look at recommendation systems, the question is shifting from whether platforms are allowed to make editorial decisions to whether those decisions foreseeably incentivize behavior toward the edge of real-world harm.

That question will have to be answered one way or another.

Authors

Dylan Moses
Dylan Moses is an Internet lawyer who focuses on law and regulation at the intersection of the First Amendment, Section 230, and emerging technologies. He is an affiliate with the Berkman Klein Center for Internet & Society at Harvard University and a Founding Fellow with the Integrity Institute. Dy...
