Analysis

The EU's Age Verification Fix Creates More Problems Than it Solves

Joana Soares / Apr 21, 2026

European Commission President Ursula von der Leyen, right, and European Commissioner for Tech Sovereignty, Security and Democracy Henna Virkkunen speak during a media conference at EU headquarters in Brussels, Wednesday, Apr. 15, 2026. (AP Photo/Omar Havana)

The European Commission last week unveiled an age verification app it says will help platforms confirm whether users meet minimum age thresholds — a move officials presented as closing a long-standing loophole in the bloc's digital rulebook. But security researchers and civil society groups say the tool has significant vulnerabilities, and that the broader policy push may be solving the wrong problem.

For Ursula von der Leyen, President of the European Commission, platforms now have “no more excuses.” The system is meant to be simple. Users upload an identity document, such as a passport or national ID, or use a QR code issued by a third party, such as a school or a bank, to attest to their age. They then complete a face scan so the app can check whether the face matches the document and send a yes-or-no confirmation. Officials argue that this minimizes data sharing by relying on “zero-knowledge” proofs, allowing age to be confirmed without revealing underlying personal information.

But the reality is more complex: the process requires identity proofing, and biometric data must be shared at least once.
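The privacy property officials are claiming can be illustrated with a toy sketch. This is not an actual zero-knowledge proof and not the app's real design; all names and keys are hypothetical. The idea is selective disclosure: after identity proofing, the issuer signs only a yes/no attribute, so the platforms that later rely on the credential never see the passport or birth date.

```python
import hmac
import hashlib

# Hypothetical shared issuer key. A real system would use an asymmetric
# signature so platforms can verify without holding the signing key.
ISSUER_KEY = b"issuer-secret"

def issue_age_credential(over_18: bool) -> tuple[str, str]:
    # The issuer, having done identity proofing once, signs only the
    # yes/no attribute -- not the name, document number, or birth date.
    claim = f"over_18={over_18}"
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def platform_verifies(claim: str, sig: str) -> bool:
    # The platform checks the signature and the single disclosed attribute.
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and claim == "over_18=True"

claim, sig = issue_age_credential(True)
print(platform_verifies(claim, sig))  # True: age confirmed, identity withheld
```

The catch the article points to is the step this sketch hides: issuing the credential in the first place still requires the document and the face scan to be processed somewhere.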

Goldmine for hackers

Early weaknesses in the app have already been identified. Paul Moore, a security consultant, shared a video in which he bypassed the app in two minutes. It also quickly became clear that the system could be circumvented by using a VPN to mask the user's location.

Deeper structural concerns emerged about how the system is designed. A technical analysis by Dibran Mulder, Chief Technology Officer at Caesar Groep, found that key checks, including passport verification and face matching, are performed entirely on the user’s own device, while the issuing authority simply accepts the result. In practice, this means the system trusts whatever the app reports without independently verifying it. Mulder describes this as a “trust boundary” problem, as “the verification happens in an environment the user controls (their phone), but the result is trusted by a system that the user shouldn't be able to manipulate”.

The consequence is that the app can be “trivially bypassable” with basic tools. In one example, a user could modify the app or intercept its data to submit a fake birth date, and the issuer would still generate a valid age credential because it has no way to confirm whether the passport check actually took place.
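The trust-boundary flaw Mulder describes can be sketched in a few lines. This is an illustrative model, not the app's actual code; the function names and data shapes are hypothetical. The point is architectural: if the issuer trusts a result computed on the user's phone, anyone who can tamper with that result gets a valid credential.

```python
from datetime import date

def client_app_check(passport_birth_date: date) -> dict:
    # Runs on the user's own device: reads the passport, computes the age,
    # and reports only the *result* upstream.
    age_years = (date.today() - passport_birth_date).days // 365
    return {"over_18": age_years >= 18}

def naive_issuer(client_report: dict) -> str:
    # The flawed design: the issuer accepts the device's claim with no
    # independent evidence that the passport check ever happened.
    return "credential:over_18" if client_report["over_18"] else "credential:minor"

# Honest path: a 14-year-old's device reports truthfully.
honest_report = client_app_check(date(2012, 1, 1))
print(naive_issuer(honest_report))   # credential:minor

# Attack path: the same user modifies the app or intercepts the report.
forged_report = {"over_18": True}
print(naive_issuer(forged_report))   # credential:over_18 -- forged, yet accepted
```

The fix Mulder's analysis implies is to move the verification itself, or cryptographic evidence of it, to the issuer's side of the trust boundary, rather than trusting a flag from an environment the user controls.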

The security risks also extend beyond the implementation itself. Hanna Bözakov, Managing Director at Tutao, the company behind Tuta email, described the system as a potential “goldmine” for identity theft and phishing. “The more data these systems collect, the more attractive they become to hackers,” she told Tech Policy Press.

“This kind of obligation compels all adults to hand over sensitive and exploitable data simply to access websites. This creates substantial risks of data breaches, hacking, and extortion,” added Joan Barata, international human rights expert and Visiting Professor at the School of Law at Católica University in Porto.

At the same time, critics point to a deeper paradox. “EU policymakers have tried to hold Big Tech accountable and now, we are building systems that rely heavily on them to make age verification work,” said Eva Simon, Senior Advocacy Officer and Head of the Tech and Rights Program at the Civil Liberties Union for Europe. Because these systems depend on smartphones and operating systems, “companies like Google are essential gatekeepers.”

The Commission's app is one of several technical approaches being tested across European countries, each involving privacy trade-offs. Most countries are considering limiting minors' access to social media, typically between the ages of 13 and 16. Only Estonia and Belgium have resisted that direction, with Estonia arguing that rather than banning young people, Brussels should make stronger use of existing tools such as the GDPR and platform regulation, while also investing in digital literacy.

The system would apply to any network hosting age-restricted content where verification is currently absent or inadequate — including adult websites that rely on self-declaration. Enforcement is split between Brussels, which oversees the largest platforms under the Digital Services Act, and national authorities responsible for smaller ones, a division that risks uneven application across the bloc.

France is one of the most assertive voices pushing for regulation. Emmanuel Macron has backed a full ban on social media for children under 15, and age verification already applies to adult content websites, requiring users to prove they are over 18 rather than self-declare. Just a day after the launch of the age verification app, Macron hosted a call with EU leaders and the European Commission in an effort to align national and EU action.

The United Kingdom is also considering an under-16 social media restriction as part of a broader package that includes curfews, screen-time limits, and constraints on addictive design features. Last week, it hosted the Global Age Assurance Standards Summit in Manchester, focused on implementing a framework of international standards setting out principles for age assurance systems.

Barata describes this policy trend as “techno-legal solutionism,” the idea that complex social risks can be addressed through technical fixes alone. In this view, limiting access addresses only one dimension of the problem. It does not tackle the systems that shape user experience, including recommendation algorithms, engagement-driven design, and content amplification. “We are focusing on access, but the real issue is how platforms capture attention and push content. That is where harm comes from,” added Simon.

For Czech MEP Markéta Gregorová, the process is being “rushed under political pressure more than actual safety concerns,” while failing to address the real issue: platforms built on “addictive algorithms, aggressive business models favoring virality, and massive data collection over safety.” In her view, the Commission should focus on enforcing existing rules under the DSA rather than “requiring us to undermine our privacy to access the internet.”

Thomas Lohninger, Executive Director at epicenter.works, likewise said he is “deeply worried by the Commission plans to tie digital identity with the technical implementation of age verification.” He urged Brussels to “rethink their plans for age verification and instead focus on overdue enforcement of the DSA with high penalties proportionate to the harm caused by Big Tech.”

Lessons learned

The EU's push comes as age restrictions go mainstream globally. According to the Organization for Economic Co-operation and Development, the number of countries considering such measures rose from one at the end of 2023 to 25 by April 2026. Australia, Brazil, and Indonesia already have laws in force.

“What started with Australia’s social media ban quickly spread,” Simon commented. “The US and Europe are now catching up, and the scope is already expanding beyond social media.” Early evidence from Australia shows, so far, no clear decline in reported harms such as cyberbullying or image-based abuse.

Moreover, recent studies indicate that 61 percent of Australian children aged 12 to 15 still have access to restricted platforms, while 70 percent say it is easy to bypass the ban by using accounts created by older friends or relatives, manipulating age-estimation tools, or using VPNs. The country's own regulator has acknowledged that “a substantial proportion” of minors continue to access the platforms.

Those lessons have not yet reshaped the European debate. Strict verification requirements on mainstream platforms may push users toward less regulated environments, Barata warns.

Simon adds that verification systems assume universal access to smartphones and digital literacy. "People who are less digitally literate, or who do not have access to the right devices, may be excluded entirely," she told Tech Policy Press. For LGBTQ+ youth and children in remote areas in particular, she argues, removing online access can "increase isolation rather than reduce harm."

Such risks are even higher outside the EU, where data protection frameworks are much weaker. “If age verification systems are exported without strong protections, they could create even greater privacy risks globally,” said Simon.

Companies, including Meta, have backed a common EU-wide "Digital Majority Age." For critics, that alignment is telling. "The core issue is not how old a child is, but how social media works," Simon said, pointing to algorithms, profiling and engagement-driven business models. The question, she argues, is whether policymakers are addressing the causes of harm — or simply controlling access to the internet.

Authors

Joana Soares
Joana Soares is a Portuguese freelance journalist, based in Brussels, writing about technology policy, privacy, and digital rights. She reports on AI regulation and the political impact of emerging technologies across Europe, with a focus on how policymaking influences democratic institutions and ci...
