Europe Is Looking To Water Down AI Protections. It Should Reinforce Them.
Laura Lazaro Cabrera , Magdalena Maier / Mar 18, 2026
The headquarters of the European Commission in Brussels. Justin Hendrix/Tech Policy Press
The EU is rethinking the AI Act despite repeated civil society warnings. Through the AI Act Omnibus proposal, the European Commission suggested changes which — far from being technical — would significantly weaken safeguards against AI systems deemed most dangerous to health, safety and fundamental rights. The widely criticized proposal opened the door to more far-reaching changes being suggested by groups in the European Parliament.
As the debate continues and evidence of AI harms grows unabated, none of the proposals have taken steps to strengthen pathways to redress under the AI Act, despite this being a core weakness of the legislation.
Beyond a right to obtain an explanation and the existence of a complaints mechanism, the AI Act offers little by way of tools for individuals to exercise their rights in case of infringement or harm. While these processes are a meaningful and necessary first step to ensure access to information and accountability, they fall short. As scholars have articulated, the right to an explanation can be interpreted as having a narrower scope than intended.
Similarly, the complaints mechanism in the AI Act — while open to any person or entity irrespective of whether they were affected — offers no procedural safeguards for complainants, does not require a response or investigation, and does not benefit from a judicial oversight or scrutiny requirement.
These omissions from the AI Act are deliberate: the Act explicitly notes that EU law already provides effective remedies to individuals adversely affected by the use of AI systems. Research by the Centre for Democracy and Technology Europe reveals that this assumption is misguided.
Strong rights-based frameworks are undermined by practical challenges
The first law to ensure transparency and accountability for AI systems was the General Data Protection Regulation (GDPR), which is now also in the crosshairs of the EU’s deregulatory effort. As a technology-neutral, rights-based law, the GDPR has played a significant role in setting guardrails and actionable mechanisms for individuals whose personal data is processed by AI systems.
A notable contribution of the GDPR to the effective redress landscape is the existence of rights and rights of action for individuals against both entities that process data for failing to respect their obligations under the GDPR, and regulators for failing to enforce the law as intended. In many instances, these mechanisms will be available to individuals whose data is processed by AI systems or models. However, the problem of many hands in AI development obscures the chain of responsibility and accountability. Known AI taxonomies often map uneasily onto responsible entities under the GDPR, leading data regulators to provide guidance on this issue.
Fortunately, data protection authorities have an important role to play under the AI Act. But their relative procedural strengths apply only in connection with the GDPR and are rendered largely inapplicable in the AI Act context by the absence of complainant-friendly procedures and safeguards, which the omnibus fails to rectify.
Other challenges emerge from other applicable frameworks. Equality and non-discrimination law takes as its starting point the imbalance of power between parties, offering key, and so far unique, mitigations such as shifting the burden of proof where the applicant can establish a prima facie case of discrimination. The burden of proving that no discrimination has taken place then falls on the defendant, a crucial mechanism in contexts where individuals lack the information or understanding of how an algorithmic decision came into being.
However, several limitations block the use of equality and non-discrimination law as a pathway to redress for algorithmic discrimination. In particular, its focus on individual redress fails to adequately address underlying structural inequalities and power dynamics, and burdens individuals with the costs and risks of litigation. The opacity of algorithms pushes the reversal of the burden of proof to its limits.
The AI Act’s documentation and registration obligations can be of added value in this context. It is therefore vital that they are strongly implemented and enforced, not weakened as would be the case under the current omnibus compromise texts, which would enable simplified compliance for a larger number of companies or allow the omission of key information when registering high-risk systems falling under the exemptions.
Cross-cutting frameworks offer generality but struggle to adapt to the realities of AI harms
With an explicit framework for collective redress, the Representative Actions Directive, EU consumer protection law carries the potential to overcome many of the problems posed by individual redress. The legal framework, however, remains under constant pressure to cater to a complex digital environment characterized by entrenched power asymmetries and opacity while upholding its principles-based, technology-neutral nature.
The added value of the Representative Actions Directive is also not straightforward, given the overall high costs of such actions for representative entities and the difficulties in securing funding for them. The Directive leaves it to Member States to decide how to reduce costs for qualified entities. While it is possible to bring representative actions under the AI Act, their effectiveness and usefulness remain to be seen. Given these challenges, the failure to include other avenues for judicial redress under the AI Act looks like a missed opportunity.
The AI Act’s limited approach towards effective redress was justified by the existence of the proposed AI Liability Directive (AILD), which was supposed to fill this gap. Since the proposal was withdrawn by the European Commission last year, the revised Product Liability Directive offers an alternative pathway to pursue compensation for AI-related harms. While at first sight this Directive addresses specific procedural hurdles that could arise when challenging the output of an opaque AI system, a closer look reveals several shortcomings that may leave individuals without the understandable and meaningful information necessary to challenge an AI output.
For example, the Directive empowers courts to request relevant evidence from the defendant if the claimant presented facts and evidence sufficient to support the plausibility of the claim. This is a useful means to address the imbalance of knowledge and information between an affected individual and an AI system provider or deployer, but it is only available once court proceedings have been initiated, potentially leaving individuals hesitant to start proceedings or open to settling a complaint out-of-court without essential information.
This flaw had been addressed by the AILD, which foresaw the disclosure of evidence to potential claimants. Under the revised Product Liability Directive, by contrast, compensation for non-material harms is not compulsory and is left to the discretion of Member States, despite the serious impact AI can have on privacy or mental health. Absent a pathway to effective redress under an AILD, it is therefore vital that the AI Act’s provisions on remedies be strengthened.
The AI Act should reinforce protections, not dilute them
In light of these challenges, many of which cut across the different frameworks, action should be taken both to fill the substantive gaps left by the AI Act and to bring existing legislation up to speed by strengthening enforcement and redress procedures to cater for AI-specific hurdles. Among other measures, it is necessary to strengthen collective action mechanisms and to adopt and implement procedural safeguards, such as a reversal of the burden of proof in cases involving AI, alongside meaningful transparency provisions. We also need to ensure that enforcement authorities are sufficiently staffed and resourced, especially in light of the new powers and mandates they receive under legislation such as the AI Act.
Momentum is also building to carve out a range of AI applications from the scope of the AI Act, ranging from internal business applications to products governed by sectoral legislation. Many of these proposed changes raise grave concerns, not least because they result in some of these AI products operating in a gray zone where responsibilities are few and individual protections non-existent.
Lawmakers and government representatives should focus on remedying the AI Act’s weaknesses instead of compounding them. Concretely, they should ensure the prompt applicability of safeguards for high-risk AI systems, strengthen safeguards for the processing of sensitive data, and close loopholes that allow dangerous AI systems to remain unscrutinized.
As the omnibus proposal moves swiftly towards inter-institutional negotiations in the form of trilogues, decision-makers must not lose sight of the AI Act’s goals and effectiveness. We should not wait for a scandal to materialize in order to improve individual protections.