Oversight Board Flags Human Rights Risks in Meta's Global Community Notes Rollout

Ramsha Jahangir / Mar 26, 2026

A series of posts by Meta CEO Mark Zuckerberg on the Threads social media app, outlining changes to content moderation. Meta said it would scrap its longstanding fact-checking program in the US in favor of a community notes system similar to that on Elon Musk's X. Picture date: Wednesday January 8, 2025. (Press Association via AP Images)

Meta's quasi-independent Oversight Board on Thursday issued a policy advisory opinion on the company's plans to expand its Community Notes program globally, warning that a blanket worldwide rollout risks real-world harm in conflict zones, repressive regimes, and countries heading into elections — and that even in the best circumstances, the program's core design may be structurally ill-suited to serve as a platform's primary tool for addressing misinformation.

The advisory, which Meta itself requested last November, comes a little more than a year after CEO Mark Zuckerberg announced the company was ending its third-party fact-checking program in the United States and transitioning to a Community Notes model, a crowdsourced approach modeled on X. Fact-checks remained in place outside the US for the time being, though Meta has said it eventually plans to roll out Community Notes worldwide.

Thursday's board advisory makes clear that the path to "worldwide" deployment is considerably more complicated than the company may have anticipated.

The regulatory environment Meta is navigating has also grown more complex since it first announced the US rollout. In July 2025, the EU Code of Conduct on Disinformation — to which Facebook and Instagram are signatories — became part of the Digital Services Act, requiring annual audits to ensure adherence to its commitments.

The Board's policy advisory opinion does not tell Meta to halt the expansion. Rather, it lays out a set of human rights criteria the company should use to determine when a country outside the US should be excluded or when launch should be delayed. The list is long.

Exclusions or heightened caution are recommended for repressive regimes, countries heading into major elections, markets with histories of coordinated disinformation, and regions experiencing armed conflict.

Language complexity, limited internet access and “multi-layered” social divisions — along religious, ethnic, caste or political lines — also warrant extreme caution, even if they do not automatically prevent deployment.

Meta told the Board that the program is in an "early stage of product development" and the company possesses "limited data from the US beta rollout," which the Board appeared to take seriously in calibrating its caution.

The advisory highlights one of Community Notes' apparent blind spots. A country may have a prominent liberal-conservative axis, but contributors might rate posts based on unrelated factors, such as support for a political party, memes, or a popular soccer player. The algorithm then latches onto whichever axis best explains the ratings, which may have little to do with actual political divisions.
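The mechanism the Board is describing can be sketched in miniature. X's published Community Notes scoring, on which Meta's program is modeled, is built on "bridging-based" matrix factorization: each rating is modeled as a baseline plus rater and note intercepts plus the product of a one-dimensional rater factor and note factor, and a note is surfaced only when its intercept (helpfulness net of the learned polarity axis) is high. The toy below, with synthetic data and a hand-rolled gradient fit, is an illustrative simplification of that idea, not Meta's or X's actual code; variable names like `camp` and the regularization values are the author's assumptions.

```python
# Illustrative sketch of bridging-based note scoring, loosely modeled on
# X's open-source Community Notes approach. Synthetic data; not Meta's code.
import numpy as np

rng = np.random.default_rng(0)

# 40 raters split into two camps along one latent "viewpoint" axis.
camp = np.array([+1.0] * 20 + [-1.0] * 20)

# Note 0 is partisan: only camp +1 rates it helpful (1 vs 0).
# Note 1 is bridging: both camps rate it helpful.
ratings = np.zeros((40, 2))
ratings[:, 0] = (camp > 0).astype(float)
ratings[:, 1] = 1.0

# Fit: rating ≈ mu + rater_intercept + note_intercept + rater_factor * note_factor
n_users, n_notes = ratings.shape
mu = 0.0
bu = np.zeros(n_users)
bn = np.zeros(n_notes)
fu = rng.normal(0, 0.1, n_users)
fn = rng.normal(0, 0.1, n_notes)
lr, reg = 0.05, 0.03
for _ in range(2000):
    pred = mu + bu[:, None] + bn[None, :] + np.outer(fu, fn)
    err = ratings - pred
    mu += lr * err.mean()
    bu += lr * (err.mean(axis=1) - reg * bu)
    bn += lr * (err.mean(axis=0) - reg * bn)
    fu += lr * (err @ fn / n_notes - reg * fu)
    fn += lr * (err.T @ fu / n_users - reg * fn)

# The note intercept is the helpfulness left over once the dominant
# disagreement axis has absorbed camp-driven rating patterns. The bridging
# note ends up with a clearly higher intercept than the partisan one.
print(bn.round(2))
```

The Board's point is that the learned `fu` axis is whatever best explains rating disagreement, whether or not that axis tracks a country's real political divisions.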

Another structural limitation is that, at present, Community Notes carry no punitive consequences. Meta confirmed to the Board that there are “no strikes for posting content that receives a community note” and that “the distribution or monetization of such content is not affected.” Even published notes have no effect on reach or engagement, making them inherently weaker than traditional fact-checking labels.

Board Co-Chair Paolo Carozza framed the opinion as a model for the wider industry. “As Meta, X, YouTube, TikTok and other platforms increasingly adopt crowdsourced approaches to address potentially misleading content, they have a responsibility to undertake comprehensive human rights due diligence,” he said in a statement accompanying the press release.

"Meta envisions community notes as its primary way to address misinformation," the Board wrote, but "the program's design may limit its ability to accomplish that goal." Three problems dominate: delays in note publication, the limited volume of notes that actually reach users, and the program's dependence on a reliable broader information environment to function well — a dependence that makes it particularly fragile in precisely the high-risk contexts where it's most needed.

The numbers suggest there is a long way to go. Meta’s chief information security officer said 900 Community Notes were published in the first six months of the US rollout, far fewer than the roughly 35 million Facebook posts the company labeled with fact-checking warnings in the European Union over a similar period.

The timing problem is particularly acute. Research on X finds that, on average, it takes 15 hours for a note to be published, by which time a post has typically reached 80% of its total audience. On contested topics that pose significant harm, precisely where Community Notes are most needed, disinformation doesn't wait.

Comparable concerns are already emerging in Meta’s early testing. Company disclosures indicate that only a small fraction of notes are ultimately published, reinforcing questions about whether the model can operate at the scale and speed required to meaningfully address harmful content.

There is also the question of labor. Public comments cited in the Board’s advisory noted that writing and rating notes constitutes unpaid “data labor,” often drawn disproportionately from journalists, experts and marginalized communities. The system relies on human input but offers no compensation or formal protections, raising questions about who bears the costs of moderation.

Structural limits of the model

Beyond rollout conditions, the Board doubts whether community notes can serve as a platform's primary misinformation intervention at all, returning to the publication delays, low output volume, and reliance on the broader information ecosystem that it identifies as the model's core weaknesses.

Those concerns were echoed by the European Fact-Checking Standards Network (EFCSN), which welcomed the opinion but argued it confirms deeper flaws in the model.

In a press release in response to the advisory, the EFCSN said community notes are “inadequate as a standalone solution” and cited stark differences in scale and speed compared to professional fact-checking. The group pointed to the disparity between roughly 900 visible Community Notes in the US test period and about 35 million fact-checking labels applied in the European Union over a similar timeframe as evidence of what it called a “systemic failure,” and urged Meta to adopt a hybrid model that prioritizes factual accuracy and human rights.

The group also highlighted publication latency, often measured in days, and low visibility rates, with fewer than 10% of proposed notes reaching users. By contrast, third-party fact-checking labels can be applied rapidly and at scale, with measurable behavioral impact.

It also raised concerns about compliance with the EU’s Digital Services Act, arguing that Community Notes would not qualify as a risk mitigation measure equivalent to the third-party fact-checking program it replaced.

In Meta’s March 2025 transparency report under the EU Code of Conduct on Disinformation, which was formalized as a Digital Services Act code of conduct in July 2025, the company said it would review whether to change its EU fact-checking commitments “in light of changes in our practices, such as the deployment of Community Notes.”

That caveat carries legal weight. For the first time, adherence to the code is subject to independent annual auditing and is used as a benchmark for compliance with the Digital Services Act. Across the platform ecosystem, commitments under the code have already declined, with companies reducing the number of measures by about 31% since 2022.

The EFCSN further cited Meta’s own data showing that 95% of users do not click through after an independent fact-checking warning label is applied, a level of impact that does not have a clear equivalent in the Community Notes system, where relatively few notes reach users in the first place.

What happens next

Meta has not yet responded publicly to Thursday's opinion. Unlike in a typical Oversight Board case involving a specific content moderation decision, Meta is under no formal obligation to implement any of the Board's recommendations here, which are advisory.

But Meta has historically complied with a high share of the Board's recommendations: as of December 2025, the company had committed to implementing, or exploring the feasibility of implementing, 80% of the 326 recommendations the Oversight Board has issued.

Board member Carozza has flagged that the rise of generative AI has created new interest from non-Meta platforms in the Board's expertise, with "really preliminary" conversations with other companies underway. If Community Notes continues to spread across the industry as a moderation default, the Board's human rights framework may matter well beyond Facebook and Instagram.

Authors

Ramsha Jahangir
Ramsha Jahangir is a Senior Editor at Tech Policy Press. Previously, she led Policy and Communications at the Global Network Initiative (GNI), which she now occasionally represents as a Senior Fellow on a range of issues related to human rights and tech policy. As an award-winning journalist and Tec...
