True AI sovereignty requires global cooperation, not fences
States are buying the language of “AI sovereignty” while leasing their lifeblood from foreign chips, foreign cloud providers, and global research networks. You can declare autonomy from a podium. You can't conjure fabs in a month.
The Brookings piece, “Is AI sovereignty possible? Balancing autonomy and interdependence,” asks the right question. It frames the dilemma as a tug-of-war between control at home and reliance on a global AI ecosystem.
That’s a useful frame.
But it understates how lopsided that tug has already become.
Sovereignty: show-and-tell or strategic posture?
Talk of sovereignty often reads like a policy brochure. Nations promise control over data, models, and standards. They draft white papers, convene task forces, and tout national strategies.
Convenient, isn't it?
Here’s the catch. Hardware—GPUs, specialized chips—comes from a handful of companies and fabrication plants clustered in specific countries. Big-model development runs on hyperscale cloud platforms dominated by a few firms. Talent moves across borders. Research is shared in preprints and open source. You can't wall off those flows without crippling domestic capacity or courting retaliation.
That doesn't make sovereignty a vanity project. At its best, it’s a strategic posture: protect critical supply chains, localize governance of genuinely sensitive data, and build redundancy where national security actually demands it. Brookings is right to insist that sovereignty is a balancing act, not a binary switch. The live question is how much autonomy is feasible without strangling innovation or triggering economic blowback.
But listen closely to the public rhetoric and you hear something else: the fantasy of clean separation. As if you can redraw a border and suddenly the cloud respects it.
Follow the money?
Who benefits from a hard stop on cross-border AI? Not citizens.
Often it's incumbent firms or governments looking for tighter control. Regulatory divergence can create protected markets for compliance services and give domestic champions a cushioned runway. That’s not a conspiracy — it's incentive architecture.
Follow the money.
We’ve seen this movie before. Telecom “national champions” justified by security concerns, then quietly locked in cozy contracts and sluggish upgrades. Cloud providers sold as strategic assets, only to become single points of political and technical failure. Each time, sovereignty talk softened the ground for corporate dependence, not independence.
AI is heading down the same path unless someone asks the boring, unglamorous question: who writes the procurement rules, who wins the tenders, and who gets frozen out?
Broken levers and real levers
The Brookings article sketches familiar governance options: domestic controls, export restrictions, multilateral norms. Necessary levers, but blunt ones.
Export controls can slow adversaries — and simultaneously slow allies, startups, and hospitals that rely on imported chips. National data localization can give regulators oversight, while adding latency and cost that shrink market opportunities for local firms. A government can boast about keeping data “within borders” while its own agencies struggle to afford compliant infrastructure.
A smarter playbook mixes selective sovereignty with interoperable guardrails. Invest in chokepoints that matter: manufacturing capacity for critical semiconductors, secure data enclaves for sensitive datasets, public compute for research that needs transparency instead of trade secrecy. At the same time, negotiate interoperability rules for model audits, licensing, and cross-border incident response.
You can't have everything. So pick the levers that secure democratic norms and economic resilience instead of theatrical self-sufficiency.
Here’s what they won't tell you: every dollar sunk into symbolic infrastructure is a dollar not spent on the unglamorous plumbing of oversight — auditors, inspectors, technical capacity inside the state.
The corporate blind spot
Brookings nods at interdependence, but doesn’t dig deeply into who actually sits at the center of this web: the companies that design chips, host models, and sell cloud compute. They occupy the junction of state ambitions and market incentives.
These firms can comply with a patchwork of national rules — or shape those rules through lobbying, standards bodies, and contract terms. The same vendor that warns about AI risks in a parliamentary hearing may draft the “best practices” that conveniently align with its own architecture.
If governments insist on territorial control without addressing market concentration, sovereignty becomes a pretext for corporate rent-seeking rather than public governance.
Look at how quickly “trusted cloud” labels morphed into sales pitches. Or how “sovereign AI” offerings now bundle domestic branding with foreign-owned hardware and proprietary stacks. The logo on the data center changes. The dependency doesn’t.
Security, nationalism, and the tempting overreach
There is a serious counter-argument: security imperatives demand maximal sovereignty for defense, espionage resilience, and protection of personal data. Some capabilities should be domestic; you don't outsource nuclear safety, and analogies to high-risk AI systems aren’t absurd.
But totalizing sovereignty is a blunt instrument. It invites fragmentation and raises costs for public services and research. When every country vows to build its own full-stack AI ecosystem, you don’t get a flowering of innovation — you get duplicated effort, shallow talent pools, and excuses for monopolies to entrench behind national flags.
A targeted sovereignty — protecting clearly defined national interests while committing to cross-border norms for less-sensitive systems — is more resilient. You secure the core without choking the wider research ecosystem that actually drives progress.
We should also be honest about the geopolitical reality: the U.S., Europe, and China will keep building different mixes of regulation and industrial policy. Alliances will matter as much as autarky. Building labs and fabs at home helps. So do pacts for shared standards, trusted compute corridors, and joint responses to misuse.
Sovereignty will be negotiated, not declared. The countries that understand that — and still bother to follow the money — will discover their AI “independence” looks a lot like managed dependence, only this time on terms they helped write.