Beyond Gigawatts: AI's real limits are governance and trust
Goldman Sachs says 2026 will be defined by personal agents, mega alliances, and a gigawatt ceiling. Sounds tidy. But tidy narratives hide messy incentives. Follow the money.
Start with the part they get right: energy and hardware constraints will shape what actually runs. The “gigawatt ceiling” is a useful frame because it drags AI out of the abstract and into the grid. Silicon isn’t magic; it hums on electricity. When access to power tightens, the bottleneck on model growth won’t be creativity; it’ll be infrastructure.
But notice how Goldman treats that ceiling like a natural law instead of a political choice.
Energy is not a neutral technical parameter; it’s a geopolitical and regulatory lever. If compute becomes scarce, utilities, grid operators, and national governments will decide which workloads matter. Not all watts are equal. A model that drives revenue for a cloud provider gets one kind of priority; a slower-return public-interest system gets another. Markets will optimize for the highest-paying workloads. That doesn’t democratize access. It reallocates it to whoever can pay the tariff for electricity and the premium for privileged racks. Follow the money.
We’ve seen this before. Think about spectrum auctions and early mobile networks. Access to an invisible public resource turned on who had the balance sheet to lock up long-term licenses. Years later, regulators were still trying to unwind concentration they’d effectively greenlit at the start. Compute and power are heading down a similar path, just faster.
Goldman also puts “mega alliances” at the center of its 2026 story — a polished way of saying: the big get bigger, together.
On the surface, alliances sound benign, even sensible. Pooled resources, shared risk, coordinated research. Who doesn’t want fewer duplicated efforts? But alliances between deep-pocketed firms and the owners of compute are not just about engineering scale; they are a commercial strategy to lock in markets and wall off rivals. When cloud providers, chip makers, and model labs sign long-term, exclusive, or heavily preferential deals, they don’t merely lower costs. They quietly redraw the competitive map.
Who gets the best models, fastest updates, privileged data access? Not the people who actually use the services. Not the scrappy team with a novel architecture and a limited budget. Alliances can become a sorting mechanism: favored partners on one side of the velvet rope, everyone else staring in from the street. The piece treats alliances as an industry inevitability; here’s what they won’t tell you — inevitability is often just incumbency dressed up as destiny.
That matters because regulatory tools trail market practice. If access to compute and frontier models becomes the gatekeeper, competition law will be playing catch-up while contracts, preferred APIs, and bundled services quietly define winners. Smaller builders and alternative architectures risk being priced out or shoved into dependencies they can’t escape. Innovation shifts to those who can buy power, not those with the best ideas. Convenient, isn’t it?
This concentration doesn’t just shape markets; it reshapes governance. Decisions about model limits, data use, or abuse aren’t treated as public safety questions alone; they become bargaining chips in commercial negotiations. Standard-setting slides into private rooms, under non-disclosure agreements, branded as “partnership alignment” instead of rulemaking. Follow the money.
Now to the most seductive part of Goldman’s story: personal agents.
The vision is intoxicating — assistants that live across your devices, manage your calendar, negotiate bookings, summarize your documents, maybe even argue with your insurance company on your behalf. The article is right that demand for something like this is enormous. People already shove half their lives into email inboxes, messaging apps, and cloud drives. Of course they’ll want a layer that makes sense of the mess.
But wanting isn’t the same as getting.
Personalization at that scale depends on deep, persistent data about individuals. That immediately surfaces three hard constraints: privacy, interoperability, and user trust. You don’t get a meaningful “personal” agent if it can’t see your data. And you don’t get a safe one if that data is poured, unfiltered, into centralized servers controlled by a handful of platforms.
Scaling personal agents responsibly means architectures that let models learn from private signals without shipping every sensitive byte to a data center. That implies heavy lifting in on-device modeling, constrained compute, federated learning, or strong cryptographic techniques. Goldman’s piece gestures at the outcome but skips the plumbing. It treats agents as if they simply appear because the market wants them.
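To make the plumbing concrete, here is a minimal sketch of one such technique, federated averaging: each simulated “device” trains on its own private data, and only model weights — never the raw samples — ever reach the aggregation step. This is an illustrative toy (a least-squares model, hypothetical client data, names invented for this example), not any vendor’s agent architecture.

```python
# Toy federated averaging: clients learn locally; the server only ever
# sees model weights, not the private data (X, y) held on each device.
# All names and data here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """Gradient steps on one client's private data.

    Only the updated weight vector leaves the device; X and y stay put.
    """
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(weights, clients, rounds=10):
    """Server loop: broadcast weights, collect client updates, average them."""
    for _ in range(rounds):
        updates = [local_update(weights, X, y) for X, y in clients]
        weights = np.mean(updates, axis=0)  # aggregates models, not data
    return weights

# Three simulated devices, each holding private samples of the same task.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = federated_average(np.zeros(2), clients)
```

Even in this toy, the governance point is visible: whoever runs `federated_average` still controls what the shared model becomes, which is exactly why the architecture alone doesn’t settle the trust question.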
Who builds them? Who audits them? Who gets sued when an agent misfires in a high-stakes setting — not a restaurant booking, but a mortgage renegotiation or a medical appeal? Those aren’t edge cases; those are exactly the domains where a powerful agent would be most valuable. These are governance questions disguised as features.
History again offers a warning. Think back to the “personalization” wave of social media feeds. The pitch was: we’ll tailor your experience, just trust the opaque system ranking everything behind the scenes. It took years — and a lot of damage — before regulators and the public understood how those ranking choices shaped information, politics, and mental health. Personal agents risk replaying that pattern, but with a tighter grip on the intimate logistics of everyday life.
Defenders of the Goldman framing will say alliances and concentrated access speed innovation: pooled data, coordinated research, standard platforms. They’ll argue that the gigawatt ceiling forces badly needed efficiency, and that once the basics work, personal agents will spread quickly because the utility is obvious.
There’s some truth here. Coordination can reduce wasted effort. Scarcity does push optimization. But efficiency for whom? When coordination is orchestrated by market leaders, “efficiency” often means consolidation that nets them more margin and tighter control over the stack. Scarcity-driven optimization can feel like rationing from the outside. The practical effect is fewer, bigger platforms setting norms — technical, commercial, and social — for everyone else.
Goldman’s tidy triangle — agents, alliances, ceiling — leaves one side of the story underdeveloped: who gets to decide how these pieces fit together. By 2026, the most valuable asset in AI may not be raw compute or clever models, but the quiet contracts that fuse them to user data under terms nobody outside the alliance rooms ever really sees.