Perspective

How AI Reverses the Political Logic of the Internet

Konstantinos Komaitis / Apr 22, 2026

Isaiah Berlin’s 1958 lecture-essay, “Two Concepts of Liberty,” distinguishes between the absence of external interference (negative liberty) and the capacity for autonomous action and self-mastery (positive liberty). The early Internet, at least in its design, embodied both: its decentralized architecture resisted centralized control while empowering users to create, coordinate, and communicate on their own terms.

Eventually, this dual liberty eroded. The rise of platform-based “walled gardens” reoriented the Internet away from distributed agency toward managed environments in which algorithmic curation governs visibility and engagement metrics shape expression. Yet the Internet’s foundation remains stubbornly open: core protocols like TCP/IP and HTTP persist and act as a structural escape hatch. Digital freedom has not been lost to design; it remains a matter of choice, policy, and practice.

The tension between freedom from constraint and freedom through empowerment offers a useful lens for assessing the trajectory of artificial intelligence. Unlike the Internet, AI’s most powerful systems are built on architectures that are inherently closed, centralized, and optimized for top-down control, leaving little room for spontaneous, bottom-up reclamations of agency. Deployed at scale through consolidated platforms, AI reverses the Internet’s early promise: rather than minimizing interference, it maximizes prediction and behavioral optimization, subtly reshaping the conditions of freedom itself. Control here is not coercive but a frictionless, personalized environment that trades agency for convenience.

This logic erodes both of Berlin’s liberties. Negative liberty is diminished through invisible censorship, as algorithmic ranking preemptively filters out non-optimized or dissenting information. More dangerously, positive liberty is undermined as individuals shift from authors of their own intentions to predictable outputs of a system’s objective function. When AI continuously calibrates what we see, choose, and even desire, self-mastery is quietly ceded to the machine.

This condition is what Hannah Arendt warned against: the erosion of political freedom not through spectacle, but through routinization and through systems that reduce human action to function. AI excels at this reduction, replacing the messy, participatory work of human deliberation with streamlined prediction and comfortable convenience. When unscripted political agency is compressed into a statistical profile optimized for engagement or profit, the dignity of self-determination is lost. The threat is not a robot uprising, but a world in which decisions are seamlessly made for us, rather than by us. AI governance, then, becomes ultimately a question of democracy itself.

From democratic infrastructure to administrative infrastructure

The political character of a technology is not accidental; it is encoded in its architecture. The Internet’s original infrastructure embodied a set of properties that were structurally aligned with democratic governance. These properties were not guarantees of democracy, but they created favorable conditions for it.

The early Internet was decentralized, permission-less, end-to-end, and interoperable. No single actor controlled entry, identity, or publication, while intelligence resided at the edges of the network, not at its center. Users could create, speak, organize, fork, and exit, while power was dispersed, contestable, and, critically, reversible. These architectural features mirrored some core democratic principles of pluralism, accountability, subsidiarity, and the constant possibility of dissent. In the early age of the Internet, democracy was not upheld because of good intentions, but because the infrastructure itself limited domination. Even when powerful actors emerged, they operated atop protocols they did not own. TCP/IP did not care who you were; HTTP did not rank speech by loyalty or utility. The architecture assumed equal standing, even when society did not.

AI represents a radical inversion of these properties. AI systems, especially large-scale foundation and generative models, are centralized, opaque, capital-intensive, and asymmetric by design. They depend on massive proprietary datasets, specialized compute, and expert control that is inaccessible to the public. Intelligence is not distributed to the edges but concentrated at the core. Participation is not generative but consumptive. The user does not meaningfully act within the system; the system acts upon the user. In short: where the Internet was permission-less, AI is credentialed; where the Internet was interoperable, AI is enclosed; where the Internet enabled exit, AI deepens dependency; and where the Internet assumed unpredictability, AI is built to eliminate it. These differences are not neutral, and they mark a shift from democratic infrastructure to administrative infrastructure.

Democratic systems depend on participation, contestation, and the continual renegotiation of power. Their legitimacy arises from processes that allow deliberation, accountability, and the ability to challenge outcomes. The early Internet, by distributing intelligence and preserving exit, made these processes technically possible at scale. AI systems, by contrast, are optimized for administration rather than participation. They do not prioritize consent; they infer. They do not deliberate; they optimize. Decisions are rendered as outputs of objective functions, not as outcomes of collective judgment. In this model, dissent is not illegitimate; it is irrelevant. The system does not suppress disagreement; it simply routes around it.

Where democracy depends on citizens who can speak and act, AI requires users who can be predicted and managed. The result is not necessarily tyranny, but a form of governance that is structurally indifferent to democratic agency.

If the Internet failed democracy, AI will not even try

The Internet’s open design has not been an inherent shield for democracy. While its networks and protocols were built to be accessible and interoperable, they were eventually captured by monopolies. Social media and other architectures built on the Internet began to amplify polarization and outrage, while algorithmic ranking systems, layered atop an otherwise open infrastructure, created new challenges to collective sense-making. From the United States to the Philippines, the lesson is stark: an open communications infrastructure does not guarantee a protected democracy; it can just as easily be exploited.

Yet there is a crucial distinction. These failures occurred despite the Internet’s democratic properties, not because of their absence. Civil society could still organize, whistleblowers could still publish, and alternatives could still be built. Resistance remained technically possible. AI threatens to remove even this residual hope. If a decentralized, permission-less, generative infrastructure could not ultimately withstand sustained political and economic capture, then a centralized, predictive, behavior-shaping system has virtually no chance of doing so. The probability that AI will, on balance, evolve to enable democracy rather than threaten it is not low; it is effectively zero—not only because of its centralization, but because its integration into everyday systems makes disengagement increasingly impractical. Worse still, AI threatens not merely to suppress democratic action, but to neutralize the democratic subject.

Democracy depends on citizens who can think, doubt, disagree, organize, and act unpredictably. AI systems, optimized for frictionless experience and behavioral certainty, act directly on these capacities. By outsourcing judgment, language, memory, and choice itself, AI risks transforming politically active individuals into what can only be described as functional zombies: cognitively assisted, emotionally soothed, permanently guided, and increasingly incapable of sustained dissent. The public itself is slowly disarmed, not by violence, but by optimization. The political danger of AI, therefore, is not that it will openly oppose democracy, but that it will make democracy psychologically, cognitively, and socially obsolete.

From a network of agency to a system of substitution

The political significance of technology lies not just in what it does, but in how it organizes human activity. The early Internet, for all its imperfections, was underpinned by an architecture that maximized possibility, and its underlying ethos allowed anyone to publish, build, or connect without seeking permission from a central authority. Legal scholar Yochai Benkler frames this moment as a fundamental shift from industrial information production to a “networked public sphere,” transforming users from passive consumers into active participants and creators of their own digital commons. Artificial intelligence represents a markedly different paradigm. While not all AI systems diminish agency – there exists a limited class of tools, including open-source models, locally deployable systems, and configurable interfaces such as GPT environments with editable prompts or code execution, that can in principle enhance user control through inspection, modification, and intentional use – these remain exceptions rather than the norm. Contemporary, widely adopted AI systems, particularly large language models, are instead engineered for efficiency, prediction, and scale, priorities that align closely with corporate incentives but sit uneasily with democratic values.

We see this substitution across every domain. Recommendation engines act as powerful, invisible editors of our reality, determining what narratives we consume and which voices we hear. Generative models produce text, images, and code for us, bypassing and often flattening the iterative, uncertain, and sometimes uncomfortable processes that creativity and genuine intellectual struggle require. This shift is particularly visible in tools like GitHub Copilot and large language models such as ChatGPT, which do not simply assist writing but anticipate and complete it. Users increasingly select from machine-generated suggestions rather than composing thoughts independently. Over time, this redefines authorship itself as expression becomes a process of curating outputs rather than generating them. The individual remains involved, but the space of possible articulation is pre-structured by the system. Even more crucially, decision-support tools increasingly shape critical, rights-implicating domains like hiring, lending, policing, and welfare allocation, effectively substituting human deliberation and judgment with algorithmic certainty. As the philosopher Michael Sandel has forcefully argued, when efficiency becomes the primary moral metric, we cease asking whether systems are just, and ask only whether they work.

The result is a quiet and profound reconfiguration of power. Where the Internet’s architecture distributed agency horizontally among millions of users, AI’s reliance on massive proprietary datasets and immense computational power recentralizes it vertically. This power is consolidated into the hands of a small number of entities, predominantly those who design, train, and deploy the foundational models.

This trend towards centralization is amplified because AI is not emerging in a vacuum; it is being layered onto an already consolidated Internet ecosystem. A handful of companies—think of Google, Microsoft, Meta, and Amazon—are not merely AI developers; they are the infrastructure providers, the data gatekeepers, and the interface designers for the vast majority of global digital interaction. They control the cloud computing backbone, the search function, the app ecosystems, and the dominant social platforms.

This control matters immensely because AI is not merely a tool; it is an experience layer. Whoever controls AI controls how questions are framed, which answers are surfaced, which trade-offs are hidden, and which values are embedded into the system’s defaults. As Langdon Winner famously observed, technologies have politics, not in an explicit partisan sense, but because they inherently organize human activity in particular ways, empowering some actions while constraining others.

Take TikTok, for instance, where the “For You” feed eliminates the need for active selection altogether. Content is not chosen but continuously delivered, optimized through real-time behavioral feedback. Users do not navigate a space of information; they are immersed in a stream curated on their behalf. While engagement increases, intentionality declines. The act of choosing is replaced by the experience of being served.

If AI anticipates our needs before we articulate them, suggests our words before we choose them, and filters our options before we see them, the vital space for conscious deliberation and independent thought shrinks. The risk is not manipulation in the crude, obvious sense, but infantilization. John Stuart Mill, in advocating for personal freedom, argued that liberty is not simply about achieving preferred outcomes, but about the full development of human faculties. A society that removes the need for judgment, for intellectual struggle, for disagreement, and for critical self-correction, even if it is done in the name of convenience, undermines the very conditions necessary for genuine human flourishing and democratic vitality. The shift from a network of agency to a system of substitution is a profound political event: a quiet recolonization of the digital sphere where the human capacity for self-determination is traded for algorithmic ease.

Human rights as a counterweight to optimization

The shift from a networked public sphere that fostered agency to one increasingly defined by algorithmic substitution demands a response grounded in the only framework capable of resisting the instrumentalization of human life: human rights. Often criticized as slow, abstract, or technologically outpaced, human rights law derives its enduring strength from precisely what optimization regimes reject, specifically the refusal to treat human beings as variables to be managed. Dignity, autonomy, equality, and participation function not as design preferences, but as non-negotiable constraints on systems that increasingly mediate social, economic, and political life.

A human-rights-based approach to AI governance forces a decisive pivot from technical performance to ethical accountability. It reframes the governing questions, focusing not merely on whether a system is accurate or efficient, but on who bears responsibility when it fails, what mechanisms of redress exist, and whose rights are disproportionately burdened. Most fundamentally, it asks whether a system preserves the individual’s capacity for meaningful choice or whether it merely refines the prediction and steering of behavior.

This interrogation provides a necessary defense against what Jürgen Habermas described as the “colonization of the lifeworld.” Habermas warned that domains of shared meaning, ethical reasoning, democratic deliberation, and cultural formation must be protected from instrumental rationality, namely the logic of efficiency and control characteristic of bureaucratic and economic systems. AI, left unchecked, increasingly operates within precisely these domains, shaping education through adaptive learning systems, culture through recommendation engines, public discourse through ranking and moderation, and even intimacy through algorithmic mediation. Instrumental reason, amplified by AI, risks displacing the very conditions under which collective meaning and freedom are produced.

While consolidation seems inevitable, new, though still vulnerable, movements are surfacing to put power back into human hands. Projects involving open-source language models developed by academic or non-profit consortia, such as those prioritizing transparency and community auditing, offer a glimpse of generative AI decoupled from corporate black boxes. Cooperative data trusts or worker-owned AI platforms experiment with new ownership and governance models, ensuring that the economic and informational value generated by data flows back to the people who create it. Civic AI initiatives allow communities to collectively govern how their local data is used for public benefit, essentially asserting local sovereignty over algorithmic deployment.

For such alternatives to become more than symbolic, they require supportive governance frameworks anchored in human rights. Interoperability mandates, public funding for non-profit AI research, enforceable data access rules, and meaningful transparency obligations are not ancillary reforms but structural prerequisites for pluralism. Without legal mechanisms that demand accountability and preserve autonomy, bottom-up models will continue to be overwhelmed by the scale and power of consolidated systems.

The dual threat posed by AI—the “colonization of the lifeworld” and the recentralization of power—renders purely top-down or self-regulatory governance models insufficient. AI increasingly functions as a form of shared social infrastructure, drawing value from collective data while shaping public goods such as health, education, labor, and democratic discourse. Its legitimacy cannot be secured through expert control alone.

Here the work of Nobel laureate Elinor Ostrom offers a critical insight. Ostrom’s extensive research on governing common-pool resources, such as fisheries, forests, or irrigation systems, demonstrated that durable, equitable systems rarely emerge from centralized authority. Instead, they depend on governance arrangements that meaningfully involve those affected by the rules in shaping, monitoring, and revising them. AI increasingly resembles such a resource: collectively produced, socially consequential, and vulnerable to enclosure if governed exclusively by corporate or bureaucratic actors. Collaborative AI governance would embed participation into oversight structures. It would shift the focus from how to optimize systems to whose values they should embody and under what conditions they may operate. This is a type of institutional realism in which legitimacy cannot be engineered through transparency alone but must be built through mechanisms that allow for input, contestation, and correction.

Such mechanisms must be structural rather than symbolic. Participatory impact assessments should accompany deployments that affect fundamental rights. Transparency obligations should focus on governance choices rather than illusory demands for total code disclosure. Formal accountability bodies should grant users and workers the standing to question and contest automated decisions, ensuring that systems remain answerable to human judgment.

If AI is to avoid deepening the erosion of human agency, its trajectory must be consciously redirected rather than passively accepted. Here, the early Internet offers a set of concrete, if imperfect, lessons. Its most enduring contribution was not that it guaranteed democratic outcomes, but that it embedded the possibility of agency into its architecture. Decentralization, interoperability, and permission-less innovation ensured that power, while uneven, remained contestable and reversible.

Translating these principles into the AI context suggests a different design and governance agenda. First, interoperability must be reintroduced, allowing users to move between systems, combine models, and avoid lock-in to singular platforms. Second, meaningful transparency must extend beyond model outputs to include the assumptions, objectives, and constraints that shape them. Third, users must be afforded genuine control: the ability to modify, audit, and meaningfully influence how AI systems operate in their lives. Finally, public and cooperative alternatives, supported through policy, funding, and institutional design, must exist alongside corporate systems to ensure pluralism rather than dependence.

Crucially, these interventions are not about eliminating AI’s capabilities, but about rebalancing the relationship between system and subject. The lesson of the Internet is that agency does not emerge spontaneously from technology but that it must be structurally enabled and politically defended. Without such intervention, AI will not simply concentrate power, but normalize its concentration as the natural and inevitable order of things.

The stakes are ultimately democratic. Without participatory governance, AI risks enabling the form of “soft despotism” Alexis de Tocqueville warned against: a condition in which citizens are not coerced, but gently managed, relieved of the burden of judgment, habituated to convenience, and detached from collective responsibility. Systems optimized solely for efficiency and profit are uniquely equipped to produce this outcome, offering personalization in place of agency and comfort in place of participation.

The future of AI, like the early Internet before it, will be shaped by institutional choices about ownership, governance, and accountability. Human rights provide the normative foundation; collaborative governance supplies the institutional mechanism. Together, they translate abstract principles into enforceable constraints on power. AI governance, at its core, should not be about controlling machines but about preserving the conditions under which humans remain capable of acting, dissenting, and participating in the shared construction of their social and political worlds, conditions that depend not only on limiting power, but on ensuring that individuals can meaningfully exit, contest, and think beyond the systems that increasingly structure their reality.

Authors

Konstantinos Komaitis
Konstantinos Komaitis is a veteran of developing and analyzing Internet policy to ensure an open and global Internet. Konstantinos spent almost ten years in active policy development and strategy as a Senior Director at the Internet Society. Before that, he spent seven years as a senior lecturer at the ...
