Perspective

The Political Economy of AI Starts in Brazil, Not Silicon Valley

Adriana Abdenur / May 6, 2026

Aerial view of Octavio Frias de Oliveira Bridge (Ponte Estaiada) over the Pinheiros River at sunset, São Paulo, Brazil. Shutterstock

When Brazil’s Central Bank launched PIX, it didn’t just introduce a faster payments system; it reshaped the country’s financial infrastructure almost overnight, creating a widely accessible, low-cost public alternative to private platforms. Today, as artificial intelligence begins to play a similarly foundational role in economies, a comparable question is emerging: who builds and governs the infrastructure—and who captures the value it generates?

In Silicon Valley, AI is still largely framed as a race. The conversation centers on performance, speed of deployment, which models will dominate, and who will capture value, with risks mostly defined in technical or existential terms. Outside of California, however, a different conversation is emerging: one less concerned with who wins and more with how the race itself is structured, and what that structure means for economies and democratic systems.

In Brazil, where high-stakes national elections will be held in October and democratic institutions are under strain, discussions about AI rarely begin with benchmarks or market share. They begin with political economy. Policymakers, researchers, and civil society actors are asking whether AI will reproduce familiar patterns—where resources, labor, and knowledge are extracted while value is captured elsewhere—or whether it could support more balanced forms of development. This means conversations about AI safety risks don't necessarily start with catastrophic global threats—they begin with the more immediate challenges the sector poses to everyday people and communities.

This perspective is also reflected in Brazil’s ongoing legislative debate around AI, particularly the proposed framework under PL 2338/2023, known as the Artificial Intelligence Legal Bill. Modeled in part on the European Union’s risk-based approach, the bill seeks to establish rules for high-risk applications, data governance, and accountability. Yet the debate around it has extended beyond technical regulation to address broader questions about sovereignty and market structure. For many actors, the bill is not only about managing risks, but about whether Brazil can shape the terms under which AI is deployed, rather than simply adapting to models developed elsewhere.

These concerns are grounded in lived realities. Brazilian users generate vast amounts of data that feed global AI systems, yet decisions about how that data is used—and who profits—are largely made abroad, especially in the United States and China. The AI value chain relies on low-paid workers—often in countries like Brazil—to perform content moderation and data labeling, work that frequently entails prolonged exposure to extremely violent and disturbing material, while higher-value activities remain concentrated in a small number of countries. At the same time, core infrastructure—from cloud computing to advanced chips—is controlled by a small number of foreign firms, leaving countries like Brazil dependent on external providers for critical digital capacity.

Seen from this vantage point, AI is not just a technological breakthrough. It is reshaping the global economy unevenly, extending pre-existing patterns of unequal distribution between the core and the periphery. It raises questions about who controls infrastructure and who captures value, as well as how risks and benefits are distributed.

Right now, the political economy of AI is reinforcing existing concentrations of power. A small number of corporations control the key inputs—data, compute, capital, along with specialized talent—and extend their reach across markets. This concentration is not only economic; it is deeply political. Control over AI systems increasingly translates into influence over information flows, payment systems, public infrastructure and public debate, and over time this affects how democratic institutions function.

Yet these dynamics are often sidelined in mainstream discussions. When risks are addressed, the focus tends to be on specific applications or downstream harms rather than on how these systems are built and who they serve. Generative AI is often framed as a tool for productivity and creativity. But it relies on vast bodies of global knowledge that are scraped, recombined, and monetized in ways that remain opaque, raising unresolved questions about ownership, consent, and compensation.

The infrastructure behind AI reveals a similar pattern. From mineral extraction to energy-intensive data centers, supply chains are structured in ways that concentrate value while shifting environmental and social costs elsewhere. This model is often justified by a familiar promise: that investment in AI infrastructure will drive economic growth. However, this narrative—rooted in a broader techno-optimism—obscures how uneven that growth tends to be, particularly when it comes to large-scale, capital-intensive investments like data centers. For Brazil, this can mean providing key inputs—energy, minerals, labor and data—while much of the economic return accrues abroad. Many of these resources are located in fragile ecosystems, particularly in the Amazon, where the environmental and social costs are borne locally even as the benefits are captured elsewhere, with consequences that extend far beyond national borders.

These are not isolated dynamics. They reflect broader pressures shaping the global political economy of AI, where inequality is increasingly entangled with geopolitical competition, especially between the United States and China. Developing countries are being encouraged to adopt digital public infrastructure (DPI) models promoted as sovereign alternatives to dominant platforms, even as these systems embed specific governance logics and are scaled through international partnerships that may shape institutional trajectories. While such initiatives can expand capacity, they may also reinforce reliance on external standards and providers. At the same time, ongoing discussions at the World Trade Organization highlight how rules around data flows, digital trade, and AI governance are being shaped in ways that could further constrain policy space for developing economies. As AI becomes central to economic and political power, countries like Brazil face the prospect of greater technological dependence at precisely the moment when autonomy matters most.

Why should Silicon Valley care? Because these dynamics are starting to reshape the environment in which AI operates. What has often been treated as an externality is becoming a constraint, one that affects market access and the ability to scale globally. While many technology firms are likely to resist these shifts, governments are asserting more control over data, markets, and the entire infrastructure that underpins them, pushing back against models that concentrate value abroad. Across Latin America and beyond, there is growing momentum to more actively regulate digital markets and strengthen competition frameworks, alongside efforts to build local technological capacity—developments that companies may seek to contest, but will increasingly have to navigate.

The democratic implications are already visible. When a small number of private actors control the systems through which information is produced and distributed, they play an outsized role in shaping public debate without equivalent accountability. These concerns extend beyond content to the underlying digital infrastructure itself: as payment systems and public digital services are increasingly built on external platforms, new forms of dependence emerge with implications for governance and autonomy. Closed algorithmic systems influence what people see and believe, while generative AI introduces new risks of large-scale manipulation. In Brazil’s current electoral context, these dynamics are already affecting how information circulates, amplifying polarization and straining institutions.

There is still a gap between the scale of these challenges and how they are being addressed. In many tech-influenced policy spaces, including philanthropy, AI is framed through an “AI for good” lens, focused on improving applications and technical fixes. These efforts can deliver real benefits, but they leave underlying power structures intact. Without addressing ownership, control, and distribution, they risk reinforcing the very dynamics they aim to mitigate.

What is needed is not less innovation, but a broader understanding of what is at stake. AI is not just a set of tools—it is becoming a core layer of economic organization. Decisions about infrastructure, governance, and investment will determine how value is created and shared.

From Brazil’s perspective, this is a window of opportunity not just to adapt, but to lead. The systems are still being built, and the rules are not yet fixed. Rather than choosing between unrealistic technological arms races or full dependence on external models, Brazil has the chance to help define a more grounded approach: one that reflects the priorities and constraints of the global majority, and that places questions of distribution, sustainability, and governance at the center.

Silicon Valley remains central, but it no longer operates alone. Governments, public institutions, and civil society are asserting a role in setting the terms of this transformation. For companies, this means working within these emerging frameworks rather than around them—engaging with regulation, supporting clearer rules, and investing in partnerships that build local capacity and boost local innovation instead of deepening dependence—and moving beyond a narrow focus on end users to consider how their systems shape entire economies.

AI will be shaped less by what it can do than by who controls it. Brazil's experience with PIX offers an intriguing window into how public intervention can reshape digital markets. Whether that lesson can extend to AI more broadly will depend not only on domestic choices, but on the global structures and frameworks within which those choices are made.

Authors

Adriana Abdenur
Adriana Abdenur is Co-President of the Global Fund for a New Economy.
