Ramaswamy's AI wealth pitch misses worker safeguards
Ramaswamy says workers can build wealth in the AI era — as if ownership were a how-to checklist and not a contested field of power. Here's the thing: telling people to “own” in a world where the most valuable assets are models, data sets and distribution systems is like telling fishermen to buy boats when the harbors are controlled by a few port authorities.
I actually like the basic instinct of the Wall Street Journal column: it pushes back against pure techno-doom and insists workers aren’t doomed spectators. That matters. A culture that treats workers as passengers is a great way to make sure they never touch the steering wheel.
But agency without access to the actual profit engines of AI is mostly vibes.
AI’s fat profit margins aren’t emerging from factory floors or local retail counters; they’re accruing to whoever controls the models, the training data and the cloud infrastructure — outfits like Google, Microsoft, Amazon and the crop of well-funded startups building on top of them. Telling workers to “build wealth” in that context, without explaining how they might reach those layers of the stack, is like handing out hiking tips at the base of a locked elevator.
There are concrete ownership models floating around policy circles: ESOPs, profit-sharing, employee equity, platform co-ops. They matter. But they’re not self-implementing.
ESOPs require firms to be willing — or compelled — to share equity. Profit-sharing depends on workers having enough bargaining power to demand a slice instead of a fixed wage. Co-ops need startup capital, patient investors and actual buyer networks. Historically, labor captured a share of industrial-era gains not because everyone read a good op-ed, but through drawn-out fights over laws, unions and corporate governance.
Neal Stephenson’s The Diamond Age gets this right in fiction: new technologies don’t magically level the field, they amplify whatever power structures exist when they show up. AI is running that script in high definition.
The column is on firmer ground when it leans into reskilling and adaptability. Sure, but look — skills are necessary, not sufficient. Retraining is a high-friction bet: you need time off, money, childcare, transportation, sometimes a willingness to move cities. “Learn AI skills” sounds empowering on a panel stage; it sounds different if you’re a mid-career grocery worker juggling shifts and rent.
Even for those who can retrain, the landing spots aren’t limitless. Jobs that complement AI — think system integrators, prompt engineers, domain experts who can wrangle models — tend to cluster in specific regions and industries. If we’ve learned anything from the software boom, it’s that new jobs arrive unevenly in space and time. The Bay Area gets a wave of opportunity; the town whose biggest employer just automated scheduling gets a webinar.
AI will spin up new occupations and wipe out others. That isn’t a contradiction; it’s labor-market churn. The question is whether people can realistically traverse that churn.
Successful transitions need actual institutions: portable benefits so people can jump between gigs and training; income supports so retraining doesn’t mean a financial cliff; local training tied to real hiring pipelines, not just online certificates; and incentives for firms to share upside with the people who generate the data and do the integration work. Those are questions for policy and collective bargaining, not just personal grit and LinkedIn hustle.
Where the column really underdelivers is on ownership design. If the thesis is “workers can build wealth,” then the follow-up has to be: through which mechanisms, under which rules, with whose consent?
Mandatory profit-sharing is one path — controversial, sure, but at least it’s specific. Tax incentives for broad-based employee ownership are another: not just stock options for executives, but structured equity for line workers. There’s also the frontier idea: royalty or licensing payments when AI trained on user or community data generates revenue, especially in sectors like healthcare and logistics where data quality is a decisive input.
I'll be honest — trusting private philanthropy and corporate voluntarism to handle this is like expecting a Terms of Service update to protect your privacy. Voluntary equity programs routinely skew toward higher-paid, higher-skilled employees; the people at the bottom of the org chart get symbolic grants or nothing at all.
So if we care about broad-based wealth, the incentives and rules have to be engineered that way. Think long-term tax credits for employee equity that’s widely distributed, not concentrated; legal frameworks that make it less painful for worker cooperatives to raise capital and bid for AI-enabled contracts; data-dividend structures when personal or community-generated data materially helps train profitable models. None of this is plug-and-play — but all of it changes where the AI rent actually lands.
Here’s a useful historical parallel the column skips: when railroads transformed the economy, early investors and a small set of magnates captured outsized gains. It took years of regulation, antitrust action and new forms of worker organization before the benefits were even partially socialized. We’re replaying that, except the tracks are now invisible cloud APIs and proprietary datasets.
You can already see proto-examples of alternative ownership in AI-adjacent firms. Some smaller tech companies have experimented with broad equity grants or profit pools tied to product milestones, giving customer-support and operations staff a real stake in automation tools they help roll out. These are small-scale, messy and imperfect — but they’re closer to what “ownership in the AI era” looks like than generic investment advice.
A modest counter-argument in defense of the column: optimism is not useless. Narratives that celebrate worker entrepreneurship and insist that employees ask for equity do move the Overton window. More people will try to start firms, more venture capitalists will at least hear out worker-led models, more employees will negotiate for stock or profit-sharing because pieces like this say they should.
But narratives don’t vaporize structural constraints. They sit on top of them.
The interesting challenge — and the one the WSJ piece mostly glides past — is how to weld that optimistic rhetoric to specific institutions: apprenticeships linked directly to AI-heavy employers, stronger collective bargaining in logistics, healthcare and other AI-adjacent sectors, clear and enforceable rules around data rights and usage. That’s the boring scaffolding beneath the shiny story.
Look, I’m all for arguments that refuse to treat workers as collateral damage in an AI gold rush; I’ve seen enough counterexamples to know that technology can democratize upside. But history, from the rail barons to the information giants, is pretty blunt: when new infrastructure shows up, wealth doesn’t trickle — it pools. If Ramaswamy’s wager is that workers can swim in that pool, the real test will be whether anyone is willing to drain a bit from the deep end.