From AI Anxiety to Action: Rethinking Work
Reich is right to be alarmist about AI — and also a bit too tidy about causality. His Substack column, “AI and the Coming Jobless Economy,” hangs on a single clean diagnosis: AI will hollow out employment. That’s plausible. It’s not destiny.
Start with the part he gets right: technology cuts the cost of tasks. It automates where work is routine, scales where work is repeatable, and funnels profit toward whoever owns the platform. That’s exactly why his warning feels credible. But the argument treats “jobs” like a single bucket. They’re not. You get different dynamics in restaurants than in radiology, in warehouses than in municipal permitting. Treating labor as homogeneous makes the prognosis feel like prophecy instead of analysis.
The better frame is AI as accelerant, not final boss. Reich writes as if automation will flip a switch and reveal a “jobless economy.” It won’t. Adoption is a firm-level choice shaped by capital costs, regulation, customer tolerance for bots, brand risk, and how tight the local labor market is. Some companies will shed workers fast because their product is already digital and they can plug models straight into the workflow. Others will hit integration headaches, compliance constraints, or realize they still need humans to own the edge cases and the accountability.
So the likely path isn’t a synchronous collapse of employment. It’s uneven displacement: pockets of high churn and pockets of surprising stability. That heterogeneity matters more than his headline admits. You can’t plaster one generic “retraining” program over gig couriers, schoolteachers, and data-center technicians and pretend the uplift will be the same.
Reich sketches the usual remedies — more retraining, universal benefits, stronger unions — and then stops just before things get operational. That’s where the optimism gets thin. “Retraining” is not a policy; it’s a budget line plus a bureaucracy plus a timeline. Who designs the curriculum? Who certifies that the credential is real? How do you bridge income while someone retrains without turning a temporary tool into a sticky entitlement? Those are engineering and governance problems, not just moral ones.
One lesson from my Goldman days: incentives move behavior faster than mission statements. If tax codes and subsidies quietly make capital cheaper than labor, executives will tilt toward servers over salaries. If regulation makes firing easier than redeploying, more people get cut instead of retrained. Policy is already on the field here; it just isn’t admitting it.
Reich is stronger when he zooms out to macro risk than when he drills into who gets hit first. AI is not neutral. Expect gains to concentrate with platform companies, owners of proprietary data and models, and the usual suspects in finance and tech-adjacent services. The pain shows up in mid-skill, structured cognitive work: the spreadsheet jockeys, the forms processors, the people whose job can be expressed as “take standardized inputs, apply known rules, produce clean outputs.” Think back offices, not just factory floors.
And it will be place-based. Small towns tied to one large employer, service economies with thin margins, and public-sector offices that run on highly procedural workflows will feel the stress earliest. When your local government realizes it can clear permitting backlogs or process benefits faster with an AI stack plus a handful of supervisors, the headcount math gets ugly quickly.
That logic points toward targeted interventions, not blanket ones: wage insurance for mid-career workers whose roles vanish; relocation or commuting support for people bound to regions where demand is drying up; conditional subsidies or procurement carrots for firms that pilot AI as augmentation rather than immediate replacement. The politics of universal, permanent programs are brutal; tightly scoped, time-bound supports are more likely to survive long enough to matter.
Reich dispatches the classic counter-argument too briefly: that AI might create as many jobs as it destroys. That’s the standard Silicon Valley reassurance — machines free humans for “higher-order” work. Yes, new industries and job categories do emerge. But the time lag and skill mismatch are where reality bites. The fact that some future role might exist doesn’t pay this month’s rent when your current role gets automated.
Policy can compress that lag, but only if it’s engineered around actual pipelines. Think less “everyone gets a coding bootcamp,” more “tie community colleges and apprenticeship programs directly to local employers who commit to hire.” Think portable benefits that follow the worker across gigs, temp contracts, and short stints in training, instead of assuming a stable, single-employer path that no longer exists.
Reich also understates the political economy problem. The firms that benefit most from AI — the platforms, the major enterprise vendors, the consulting layers that sit on top of the stack — are not going to volunteer their margin to fund safety nets. Whether AI gains translate into public insurance or private dividends is a political fight, not a technocratic footnote. Expect aggressive lobbying against any policy that touches capital income, even as those same firms talk publicly about “inclusive AI.”
Here’s the deeper tension: the same levers that could fund protection can also slow change. Taxing capital more heavily to pay for social insurance nudges companies to delay or scale back automation. Subsidizing training without discipline lets weak firms off the hook and keeps unproductive capacity alive. The math doesn’t lie — money shapes the slope of the adoption curve.
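The slope-of-the-curve point can be made concrete with a toy payback calculation. This is a sketch, not a forecast: every figure below (the capex, wage, and run-cost numbers, and the tax rates) is a hypothetical illustration, and the “capital tax” is modeled crudely as a surcharge on the upfront automation spend.

```python
def payback_years(capex, annual_labor_cost, annual_run_cost, capital_tax_rate):
    """Years until automation capex (plus a tax surcharge on capital)
    pays for itself out of saved wages. Purely illustrative."""
    effective_capex = capex * (1 + capital_tax_rate)
    annual_saving = annual_labor_cost - annual_run_cost
    if annual_saving <= 0:
        return float("inf")  # automation never pays back
    return effective_capex / annual_saving

# Hypothetical mid-skill back-office role: $60k/yr in wages,
# $150k to automate, $15k/yr to run the AI stack.
for tax in (0.0, 0.25, 0.50):
    years = payback_years(150_000, 60_000, 15_000, tax)
    print(f"capital tax {tax:.0%}: payback in {years:.1f} years")
```

Under these made-up numbers, moving the tax from 0% to 50% stretches the payback period from about three years to five — it doesn’t stop automation, it delays it, which is exactly the trade-off between funding protection and slowing change.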
Reich has drawn a vivid, useful tableau of what could go wrong. The next column needs to do the unglamorous work: name the levers, pick a side, and live with the trade-offs. That’s the difference between warning about a “jobless economy” and actually changing how many people end up without work.