Make AI Infrastructure a Public Issue, Not a Private Risk

Make AI infrastructure a public issue, not a private risk. Whoever controls compute controls capability, and that will decide our security and future; policy, not profit, must power it.

Margaret Lin · AI

The World Economic Forum says we should treat data centres and AI infrastructure as critical infrastructure. Fine. The claim is obvious and urgent: these systems power banking, communications, emergency services, and the emerging class of AI decision engines. Treating them like nice-to-have cloud conveniences won't cut it. Frankly, the math doesn't lie: whoever controls compute controls capability.

Calling something “critical infrastructure” isn’t symbolic theater. It pulls real regulatory levers: resilience standards, incident reporting, prioritized recovery, and often some mix of public funding or liability protection. For operators running large data centres and AI stacks, that means new costs, constraints, and oversight. It also means governments gain a louder seat at the table — inspections, mandated changes, and, in extreme cases, direct operational direction. The WEF’s basic point is sensible: the digital backbone deserves the same attention we give bridges and power grids.

But a label doesn’t magically harden networks.

Without aligned standards and actual enforcement, “critical” is just a new heading on the same compliance forms. Data centre capacity and AI infrastructure are geographically lopsided, clustered in a handful of countries and networks. That concentration is a design flaw, not a paperwork problem. You can require backups; you can’t conjure new energy grids and diverse fiber routes on command.

Underneath the WEF framing sit two problems no one is really talking about.

First, geopolitical fragmentation. Data centres and AI supply chains cross borders; regulations don’t. Chips, build tools, and power deals are stitched together from different jurisdictions. If Country A treats AI infrastructure as critical while Country B treats it as ordinary commercial kit, you invite regulatory arbitrage. Workloads will gravitate to wherever scrutiny is softest and liability is lowest. So, yes, treat this as critical infrastructure — but without coordination, the strongest rules will mostly push traffic elsewhere.

Second, the economics. Hardening infrastructure for resilience and serious security costs real money. Operators will push that through to customers. Governments can subsidize, but that usually tilts the field — favoring incumbents, locking in dominant platforms, and making it harder for smaller players to compete. Right now, incentives reward scale, not scatter. That’s why you see giant, centralized farms instead of widely distributed, redundant sites. The WEF is right about the risk profile; it’s quieter about the bill.

There’s also the practical problem of what actually counts as “critical” inside this ecosystem.

Not every GPU cluster needs the same regulatory straitjacket. Criticality is about workload, not hardware. Emergency communications, hospital systems, and financial settlement engines justify the highest bar. Experimental model training that can pause for a week without real-world fallout probably doesn't. If everything is critical, nothing is prioritized when it breaks.

A more useful playbook starts with risk-based tiers: classify facilities and workloads by the systemic damage they can cause if they fail. Then tie obligations — redundancy, incident reporting timelines, recovery plans — to those tiers. That lets governments aim the heaviest requirements where they matter, instead of smothering the entire AI stack in blanket rules.

On incentives, the default market outcome will remain centralization unless someone nudges differently. Targeted subsidies can support geographic diversification of capacity, or help fund additional fiber paths that reduce single points of failure. Conditional liability protections for operators that meet baseline standards can shift the conversation from “avoid regulation” to “earn certainty,” without turning the sector into a regulated utility.

Cybersecurity deserves its own chair, not a footnote.

Designation should force incident transparency and enforceable reporting windows. Quiet breaches are how local failures turn into national outages. Threat-sharing frameworks between operators and national response teams are less about surveillance and more about triage: getting early warning to the people who can contain damage before it goes systemic.

Of course, giving governments more power over infrastructure carries its own risks. Surveillance creep is not a theoretical concern. Emergency powers expand, then linger. The WEF’s framing underestimates how quickly “protect the backbone” can become “peek into the backbone,” especially where legal safeguards are weak. And enforcement capacity is wildly uneven. Declaring something “critical” without funding regulators, technical staff, and independent audits is just prestige language.

Critics worry this whole push will smother innovation, jack up costs, and turn AI into another over-regulated behemoth. Let’s be real: regulatory drag is real, and plenty of governments are bad at drawing lines between safety and control. But pretending AI infrastructure is just another SaaS product is wishful thinking. A well-placed outage or targeted attack on a few key data centres can disrupt markets, healthcare, and communications in one shot. You’re either paying for resilience up front, or you’re paying for correlated failure later.

From my time watching trading systems get poked, prodded, and occasionally broken, one lesson stuck: resilience is an operational habit, not a slogan. Plans look great in binders until a real incident hits and half the assumptions fail. Critical-infrastructure status can force those plans to exist and get tested. It cannot force anyone to learn from the test.

So treat the WEF’s call as the easy part. The hard part is coming: the first time a government orders an AI operator to reroute, shut down, or prioritize workloads in the name of “critical infrastructure,” everyone will finally understand what this label really means.

Edited and analyzed by the Nextcanvasses Editorial Team | Source: The World Economic Forum

Disclaimer: The content on this page represents editorial opinion and analysis only. It is not intended as financial, investment, legal, or professional advice. Readers should conduct their own research and consult qualified professionals before making any decisions.
