Blame the Board, Not the Bot for Layoffs
Saying “AI didn’t fire you. The board did” is tidy. It’s also a dodge.
Look, the Lowy Institute piece lands a necessary punch: responsibility for firing decisions sits with human governance, not experimental code. Boards approve strategy, authorize cost cuts, and sign off on policies that let machines anywhere near personnel files. That’s not a philosophical stance; that’s how corporate law and fiduciary duty work. Keeping the spotlight on the board is useful because it reminds everyone where lawsuits, sanctions, and shareholder anger actually land.
Boards are the legal actors. Shareholders sue boards, regulators sanction them, and CEOs answer to them. If a workforce is restructured, the board authorized the plan. So yes, saying “AI didn’t do the firing” corrects the myth that technology is some autonomous moral agent roaming the org chart with a pink-slip cannon. I’ll be honest: a checkbox labeled “automate” doesn’t absolve anyone sitting around the mahogany table.
But here’s the thing: the article draws the circle of responsibility a little too tightly around that table and lets the rest of the system blur into the background. Decisions don’t just appear fully formed on a board agenda. They’re pre-cooked.
Procurement teams sift through vendors promising “objective assessments” and “defensible decisions.” HR builds workflows where performance scores and termination lists drop neatly out of dashboards. Legal signs off on risk-mitigation language that makes algorithmic recommendations sound like insurance policies. By the time the board sees a slide with “proposed headcount reductions,” the hard moral work has already been translated into metrics and model outputs.
The board still pulls the trigger; AI just makes it quieter.
Framing matters. If an executive walks into a meeting with a gut feel about layoffs, that’s contentious. If they walk in with a model that “optimizes workforce composition against strategic priorities,” suddenly it’s a spreadsheet problem. The AI doesn’t pull the trigger, but it steadies the hand by making certain outcomes feel procedural, auditable, and therefore emotionally easier to approve.
Design choices become political choices long before they become board votes.
Algorithms encode trade-offs set by people with very specific incentives. Optimize a system for short-term cost-per-employee and guess what it finds. Optimize for retention of hard-to-replace institutional knowledge and you get a very different map of who’s “expendable.” The Lowy headline pushes boards into the spotlight; I’d nudge that spotlight upstream into architecture diagrams and vendor contracts, where values quietly harden into defaults.
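To make that concrete, here’s a deliberately toy sketch. Everything in it is invented for illustration: the names, the numbers, and both scoring functions. No vendor’s actual model looks like this, but the structural point survives: change the objective, and the same data produces a different list of who goes first.

```python
# Toy illustration: same employees, two different "cut first" rankings,
# driven entirely by which objective the system was told to optimize.
# All names, numbers, and scoring rules are invented for this sketch.

employees = [
    # (name, annual_cost, years_tenure, systems_only_they_understand)
    ("Avery",  180_000, 12, 4),
    ("Blake",   95_000,  2, 0),
    ("Carmen", 140_000,  8, 3),
    ("Devi",   110_000,  3, 1),
]

def cost_objective(emp):
    """Objective 1: short-term savings. Most expensive = most 'expendable'."""
    _, cost, _, _ = emp
    return cost

def knowledge_objective(emp):
    """Objective 2: protect institutional knowledge. Short tenure and few
    solely-understood systems = most 'expendable'. Negated so the highest
    score still means 'cut first'."""
    _, _, tenure, systems = emp
    return -(tenure + 3 * systems)

for label, objective in [("cost-per-employee", cost_objective),
                         ("knowledge retention", knowledge_objective)]:
    ranked = sorted(employees, key=objective, reverse=True)
    print(f"Optimizing for {label}, cut first:",
          [name for name, *_ in ranked])
```

Run it and the cost objective puts Avery, the most expensive and most knowledgeable person, at the top of the cut list; the knowledge objective puts Blake there instead. Same people, same data, opposite answers. The values were never in the model; they were in the line where someone chose the objective.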
You can see the same dynamic in how companies have used hiring algorithms. When tools replicate biased patterns from historical data, HR can claim they’re just “following the model.” Technically true, strategically convenient. The responsibility for those outcomes was distributed across product managers, data scientists, compliance teams, and yes, the board that approved the system as a cost-saving measure.
And then there’s culture. Boards that prize growth-at-all-costs or obsess over margins create perfect conditions for automation to become a moral fig leaf. In Silicon Valley, “efficiency through automation” has been pitched so relentlessly it might as well be printed on company hoodies. When everything is framed as an efficiency play, replacing human judgment with model output stops looking like a choice and starts looking like hygiene.
That’s not just a board vote; that’s a shared vocabulary failure.
When a recommendation comes from a model, it often arrives with a patina of inevitability. In practice, many directors are less comfortable overruling “the data” than overruling a CFO’s instinct. The Lowy argument is right to insist that this discomfort doesn’t dissolve accountability, but it underestimates how AI reshapes the psychology of the room. Power hasn’t moved from humans to machines; it’s shifted from explicit judgment to judgment wrapped in statistical language.
There’s a counter-argument that focusing this tightly on boards lets technologists and vendors off the hook: that the people building and selling these systems should carry more of the moral weight when their tools become instruments of displacement. That’s a fair concern, especially as AI vendors market “workforce optimization” as a feature, not a side effect.
But handing the rhetorical blame to algorithms creates its own mess. You can’t regulate away a board’s duty by pointing at lines of code, and you can’t meaningfully constrain AI in workplaces if the only people who feel exposed are the engineers. Contracts, disclosures, and due-diligence processes should be designed so that any AI-mediated layoff can be traced back to human names, specific approvals, and documented trade-offs. If that paper trail ends at “the system decided,” something’s already broken.
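What would a paper trail that doesn’t end at “the system decided” actually look like? Here’s one hypothetical shape, assuming nothing about any real compliance framework: a decision record that refuses to exist without named humans and documented trade-offs attached.

```python
# Hypothetical sketch of an AI-mediated layoff record that cannot be created
# without named human approvers and documented trade-offs. Every field name
# and validation rule here is illustrative, not any real standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class LayoffDecisionRecord:
    model_version: str                # which system produced the recommendation
    recommendation_summary: str       # what the model actually output
    tradeoffs_considered: list[str]   # documented alternatives and their costs
    approved_by: list[str]            # the named humans who signed off
    board_resolution_id: str          # the governance act that authorized it

    def __post_init__(self):
        # The whole point: "the system decided" is not a valid record.
        if not self.approved_by:
            raise ValueError("no named human approver; record is invalid")
        if not self.tradeoffs_considered:
            raise ValueError("no documented trade-offs; record is invalid")

# All values below are invented for illustration.
record = LayoffDecisionRecord(
    model_version="workforce-optimizer-2.3",
    recommendation_summary="reduce unit B headcount by 14",
    tradeoffs_considered=["attrition-only plan rejected on timeline"],
    approved_by=["CFO J. Doe", "CHRO A. Roe"],
    board_resolution_id="BR-2025-011",
)
```

The design choice is the accountability claim: if the record can’t be instantiated without names and trade-offs, nobody gets to tell a court or a regulator that the algorithm acted alone.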
Think of Ursula K. Le Guin’s worlds, where technology and magic mostly serve to expose existing power structures rather than replace them. That’s closer to how AI actually behaves in boardrooms. We don’t have universal, enforceable laws of algorithmic governance; we have messy incentives, unclear liabilities, and risk-averse counsel drafting policies that turn moral judgments into compliance checklists. The Lowy piece is useful because it forces boards to face the uncomfortable question: not just who fired whom, but who is willing to be seen doing it.
History suggests where this is heading. When industrial automation hit factories, executives often blamed “modernization” for layoffs, as if the machines had barged in uninvited. Over time, regulators and unions learned to treat technology as a tool choice made by management, not fate. AI in the boardroom will go the same way: once courts, workers, and investors see the pattern enough times, “the model told us to” will read exactly like “the spreadsheet said we had to”: a story about priorities, not destiny.
A board that says “the algorithm made me do it” isn’t just ducking responsibility; it’s advertising a governance model that treats AI as cover. Sooner than they expect, that’s the part investors, regulators, and employees will zero in on.