Skepticism Needed: Trust Is Earned, Not Declared
Trust in AI can't be declared from the boardroom. McKinsey pushes a "new era of trusted AI" led by insiders. So who watches the watchmen? Skepticism is warranted: trust must be earned, not proclaimed.
McKinsey says a "new era of trusted AI" must be ushered in. By whom? By itself and its peers, apparently. Follow the money.
The argument is simple enough: established players like McKinsey & Company should help define what counts as "trust" in AI, because they sit in the boardrooms, speak the language of executives, and can scale standards fast. There's a surface-level logic to that. If you want companies to adopt safer practices, you talk to the people who already shape their decisions.
But when the bouncers start writing the fire code, you should probably read the fine print.
The article from McKinsey wraps this up in the language of stewardship. Trusted AI, it suggests, is a shared project led by responsible incumbents. That framing sounds reassuring, almost inevitable. Established firms will “usher in” trust, as if it’s a natural extension of their existing role rather than a new market they’re eager to dominate.
Here’s what they won’t tell you: defining trust is itself a commercial asset.
Once a firm persuades clients that its definition of “trusted AI” is the gold standard, it can sell the road map, the diagnostics, the audits, the training—an entire stack of services aligned to its own yardstick. Convenient, isn’t it?
The article talks about trust as if it's an attribute you can certify with a slide deck. But trust is a relationship, not a label. It's built through transparency, contestability, and consequences when things go wrong. Who gets to decide whether an AI system is trustworthy—McKinsey, its clients, or the people subject to automated decisions who never saw the contract?
The article leans on an implicit pragmatism: big firms know how to move large organizations. That’s true. They can harmonize policies across industries, import familiar risk frameworks, sit executives down and translate abstract AI worries into line items. That coordination power matters.
But speed and reach do not equal legitimacy.
When the same institutions that advise on AI strategy, help select vendors, and design operating models also define what “trusted” means, the incentives line up in one direction: smooth adoption, minimal disruption, low friction for senior management. The hard questions—about who is harmed, who can object, who pays when things break—tend to move to the appendix.
That’s not a technical glitch. That’s governance by design.
A genuine regime of trusted AI needs adversarial elements: third-party verification, public reporting, channels for affected people to challenge outcomes, and remedies that actually bite. The McKinsey piece gestures toward stewardship by established players, but it’s notably quiet on what happens when the steward is also a vendor. Who pays for an independent audit that might contradict the consultancy’s earlier advice? Who explains to a client that “trusted” now means changing a profitable business process?
Follow the money.
Proponents of the McKinsey view will say: let experienced operators set the standards. They see across sectors, understand implementation constraints, and know how to turn vague worries into checklists executives will sign. Compared to abstract declarations from the sidelines, that sounds refreshingly concrete.
But expertise without counterweights becomes self-affirming.
If you give standard-setting power to the same ecosystem that monetizes compliance, you don’t get neutral rules. You get frameworks tuned to what is easy to sell and fast to roll out. Risk becomes a matrix, ethics becomes a slide, and trust becomes just another deliverable in a consulting engagement.
The article’s framing matters here. By positioning McKinsey and similar firms as natural “stewards” of trusted AI, it quietly narrows the field of legitimate voices. Suddenly, the center of gravity sits with board-facing advisors, not with those on the receiving end of automated decisions. The people who live with the consequences become a stakeholder group to be “engaged,” not co-authors of the rules.
That’s one way to structure a conversation. It’s also a way to control its outcomes before they’re even negotiated.
There is another route the article barely touches: public institutions setting binding baseline obligations and then hiring whoever they like—consultancies included—to help organizations comply. That flips the default. Instead of private actors defining trust and then persuading regulators to bless it, democratically accountable bodies write the minimum standards and penalties, and market players operate inside that frame.
Less elegant, more contested, slower. Which is exactly why it puts limits on any one firm’s ability to hardwire its commercial interests into the definition of trust.
Notice how the McKinsey narrative sits precisely one step short of real accountability. It champions leadership and frameworks, not liability. It offers to guide, not to guarantee. Trusted AI, in this telling, is something you can architect and manage—without necessarily putting your own firm on the hook when systems built under your guidance cause harm.
That gap is not an accident.
Trust can borrow tools from consultants—risk maps, implementation plans, performance metrics. But it doesn’t come pre-packaged in a report. It’s earned in public, across time, under pressure.
If McKinsey’s “new era of trusted AI” takes hold on its terms, expect to see trust defined in glossy diagrams long before it’s enforced in law.