Why AI Sovereignty Depends on Interoperability Standards
Eileen Donahoe, Konstantinos Komaitis / Feb 17, 2026
Silicon Landscapes by Sinem Görücü / Better Images of AI / Creative Commons
As leaders and stakeholders from across the world convene at India’s AI Impact Summit, ‘AI sovereignty’ has emerged as a shared concern for advanced and emerging economies alike. The debate is no longer about whether states should retain control over the artificial intelligence systems that shape their societies, but about how that control can be exercised in a deeply interconnected technological ecosystem.
As AI becomes embedded in public services, critical infrastructure, and security systems, the protocols that govern how systems connect, operate, and are overseen increasingly determine where power lies. These standards decide who bears risk, who can intervene when systems fail, and who can exit when values or priorities change.
In the AI era, sovereignty is exercised less through territorial control than through infrastructure design. Political authority now runs through the AI stack: compute, data, models, interfaces, orchestration layers, and APIs. When these layers are tightly coupled to proprietary platforms, sovereignty is quietly hollowed out. When they are open and interoperable, it is preserved through choice.
This distinction matters most for middle-power countries—states with advanced public sectors and regulatory ambition, but without the scale to dominate global AI markets. For them, sovereignty does not depend on replicating frontier model development. It depends on ensuring that AI systems can be integrated, governed, audited, and, if necessary, replaced on national terms.
Without interoperable standards, governments import pre-configured intelligence: models trained elsewhere that reflect foreign assumptions about acceptable risk, accountability, and social values. Vendor lock-in follows. A public administration that cannot move its health or welfare systems across providers without prohibitive cost is not sovereign; it is dependent.
Standards are where this dependency is created—or avoided. They determine which systems can interact, how decisions are logged and explained, and where responsibility lies when harm occurs. Proprietary systems may promise control, but often entrench power in vendors. Open standards preserve optionality: they allow governments to adapt rules over time, switch providers, and layer domestic priorities onto shared technical foundations.
This is no longer theoretical. As AI systems evolve from static models into agentic systems—capable of invoking tools, accessing databases, and acting autonomously—the interfaces governing those interactions become strategic choke points. Control over agent orchestration increasingly means control over the ecosystem. Encouragingly, open and interoperable protocols for model–tool interaction and agent context management are beginning to consolidate under neutral governance structures rather than single-vendor control.
The lesson is familiar. The internet thrived because its core architecture was open and interoperable. Minimal shared protocols enabled diversity, competition, and governance at the edges. Where sovereignty weakened, it was due to political disengagement—not openness itself.
What governments should do now
For governments seeking AI sovereignty without isolation, the priority is not ownership of every layer of the AI stack, but control over how those layers interact.
- First, governments should treat AI standards as a strategic concern, not a technical footnote. Participation in international and regional standards bodies—particularly those shaping interfaces, auditability, documentation, and agent orchestration—should be coordinated across ministries and aligned with regulatory objectives.
- Second, governments should use public procurement as leverage. Requiring open interfaces, modular architectures, and system portability in public-sector AI contracts directly shapes markets and prevents lock-in in critical services.
- Third, regulatory attention should focus on the layers where sovereignty is most attainable today: integration, oversight, and orchestration. While frontier model training may remain globally concentrated, governments can set enforceable expectations for logging, evaluation, model–tool interaction, and human oversight—preserving discretion as technologies evolve.
- Fourth, interoperability fails if the data used to fine-tune models is inaccessible or locked into a single vendor’s format. Governments should create standardized, secure data-exchange environments: “refineries” that prepare domestic data in model-agnostic formats, so that if a government switches from one provider to another, its data remains portable and usable.
- Fifth, middle-power countries should form interoperability blocs. By aligning technical standards with neighboring or like-minded economies, they can create collective markets large enough to compel global AI providers to comply with those standards. Sovereignty in the AI era is a team sport: individual states may be ignored, but coordinated blocs set the global bar.
- Finally, governments should not only regulate standards; they should fund the development of open-source orchestration layers and middleware. By supporting the tools that allow AI models to interact, governments ensure that the connective tissue of the national AI ecosystem remains a public good rather than a proprietary asset.
The real question for policymakers is not openness versus sovereignty, but which kind of sovereignty they seek. One path leads to dependency disguised as control. The other leads to agency: the ability to choose, adapt, and exit. In an interoperable world, standards are power—and governments that ignore them will find sovereignty decided elsewhere.