The Council of Europe Shows What Good Platform Regulation Looks Like in 2026
Owen Bennett / Apr 14, 2026
A view of the Council of Europe building in Strasbourg, France. Photo by: Rainer Jensen/picture-alliance/dpa/AP Images
A new set of policy recommendations, unveiled last week by the intergovernmental Council of Europe (CoE), is turning heads in platform regulation circles.
By grounding regulatory design in a human rights approach and radically recasting the concept of online user empowerment, the awkwardly named “Recommendation on the online safety and empowerment of users and content creators” is offering countries and regions a conception of platform regulation that is fit for our times.
In a context of growing global concern (most of which is warranted; some of which is self-serving) regarding the impact of online safety regulation on freedom of expression, the CoE’s Recommendation takes the protection of human rights to be the fundamental problem of contemporary platform regulation.
In that vein, it asserts that online safety protections are essential to ensure everyone can enjoy freedom of expression online, while making clear that freedom of expression requires the acceptance in public discourse of views that may offend, shock, or disturb. It demands restrictions on ‘jawboning’ and a prohibition on the weakening of online security, and it calls for the development of online safety rulebooks to be stakeholder-led and implemented by independent but publicly accountable regulators.
While this rights-focused approach is welcome, the Recommendation’s true policy innovation lies in its reconception of individual empowerment online. Today’s norm-setting content regulation rulebooks, like the EU’s Digital Services Act, the UK’s Online Safety Act, and Singapore’s Code of Practice for Online Safety, already include provisions aimed at enhancing user empowerment on online platforms. However, these rulebooks typically treat the empowerment of individuals as a secondary consideration, one that complements the regulation of platforms as such. Put another way, they make safety and accountability the primary objects of platform regulation, with empowerment second. What sets the CoE Recommendation apart is that it inverts this approach, making the empowerment of users and creators the starting point of platform regulation.
To that end, it urges states and platforms to implement empowerment principles and tools that go well beyond the state-of-the-art rulebooks. Particularly notable in this regard is its call for large platforms to facilitate ‘middleware’-based models of content curation and moderation; its focus on collective redress mechanisms to bridge asymmetries between platform operators and user complainants; and its demand that regulatory requirements for age assurance be grounded in a ‘best interests of the child’ legal standard.
And while it might seek to position itself as a ‘model’ framework, the Recommendation does not emerge from a vacuum. In view of its European heritage, it borrows much from the paradigmatic rulebooks that have emerged from the European continent in recent years. It affirms the importance of due diligence requirements for platform operators and the necessity of asymmetric obligations, in an obvious nod to the DSA’s conception of systemic risk management for Very Large Online Platforms. Like the DSA, it insists on the basic legal protections that have underpinned the development of the internet globally, namely intermediary liability safe harbors and prohibitions on general monitoring obligations (even as some CoE member states have notoriously disavowed that heritage in recent years). And it gives further validation to the provisions of the EU and UK rulebooks that are truly pioneering, like the expectation that large platforms cooperate with public interest researchers.
Why it is worthy of norm-setting status
In 2026, human rights and the rule of law are under assault across regions, and democracy and civic space, where they exist, are shrinking. In such a context, new online platform regulations can be as much an instrument of oppression as a pathway to empowerment.
Proposals for model regulatory frameworks must acknowledge and address themselves to this reality. By explicitly framing human rights principles and safeguards as core building blocks of platform regulation that cannot be sacrificed or foregone in implementation, the CoE’s Recommendation recognizes and responds to this risk.
This gives it a crucial edge over other regulatory frameworks that might compete for the mantle of model regulatory framework, like the DSA, the OSA, or Brazil’s new Digital Statute for Children and Adolescents (ECA Digital). Each of these frameworks has been developed out of, and in response to, a set of idiosyncratic socio-political, constitutional, and economic conditions that pertain to their respective time and place, and many of the rights and rule of law protections that shape them are exogenous to the rulebooks themselves. In that context, the CoE’s Recommendation sends a clear message to states contemplating platform regulation: any regulatory interventions that target online platforms must be situated within a robust human rights framework.
Sounds great on paper, but so what?
Recommendations of the Council of Europe are just that — recommendations. But what they lack in legal effect, they compensate for in normative force. And as an instrument of normative best practice for platform regulation, this new Recommendation is a useful instrument for practitioners in Europe and beyond.
For one, the fact that it is the Council of Europe that has created this policy framework gives it credence above and beyond any jurisdiction-specific rulebooks. The elaboration, protection, and promotion of human rights (largely through law and regulation) is the institution’s organizing principle, and as such, its pronouncements on rights-respecting regulation carry weight. As an intergovernmental body, it convenes over 40 states from the European continent, and grants observer status to four states in the Americas and Asia. States, for better and worse, remain the ultimate sources of organizing power and legitimacy on this planet, and when they come together and agree on things, it means something.
Substantively, the Recommendation is a welcome reminder to jurisdictions that have developed the seeming standard-bearers of platform regulation, like the EU, UK, Brazil, and others, that their approaches are not the be-all and end-all of regulation. Indeed, the CoE’s Recommendation, particularly its provisions on user empowerment, provides pointers on how today’s well-respected approaches to online platforms could evolve over time. And for governments, regulators, and the broader policy community in jurisdictions that are today contemplating online safety regulation for the first time, the Recommendation provides a worthy model through which novel regulatory frameworks can be simultaneously impactful and rights-respecting.
Ultimately, at a time when governments and regulators in the EU and elsewhere are reluctant to be seen to be engaging in regulatory export of their rulebooks, and when the language of rights and universal values is being crowded out in international relations, the Recommendation offers a model for what platform regulation can look like across different material conditions. And as such, it is very much fit for our times.
Disclosure: The author made minor contributions to the work of the Committee of Experts on online safety and empowerment of content creators and users (MSI-eSEC), which developed this Recommendation. The views expressed here are the author’s own, and not those of the Council of Europe, its member states, or the Recommendation’s drafters.