AI must serve the many, not just the affluent
AI won’t fix access by itself. Fortune is right to argue that banks should use AI to serve more than the affluent—that’s a worthy ambition—but it treats AI like a lever you pull and a new market appears. The math doesn’t lie: technology can cut unit costs, but it doesn’t rewrite incentives, clean up messy data, or magically reallocate risk. You can automate underwriting; you can’t automate trust, supervision, or the balance-sheet economics that make retail banking worth doing.
AI isn't a magic ATM
The piece is right about potential: smarter models can score nontraditional signals, automation can scale customer service, personalization can lower friction. Those are useful, but they’re table stakes, not transformation.
Models learn from history. If the training data is skewed toward affluent customers, the outputs will be, too. That’s not just an ethics seminar topic. It’s concrete business risk: discriminatory outcomes, scrutiny from regulators, and lasting reputational damage. Black-box credit scoring that quietly nudges underserved customers toward pricier, opaque products will look a lot like predatory lending once examiners and advocacy groups start comparing outcomes.
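The skew mechanism is easy to see in miniature. Below is a toy sketch (all data and thresholds are invented for illustration, not from the Fortune piece): both customer bands repay at roughly the same rate, but because the bank’s history contains almost no thin-file loans, a policy that demands statistical evidence before approving a band simply re-denies the group it never lent to.

```python
# Hypothetical training history: 1 = repaid. Both bands repay ~90% of the
# time, but the bank lent almost exclusively to affluent customers, so the
# thin-file band barely appears in the data the policy "learns" from.
history = [("affluent", 0 if i % 10 == 0 else 1) for i in range(900)]   # 900 loans
history += [("thin_file", 0 if i % 10 == 0 else 1) for i in range(10)]  # 10 loans

def trained_approval_policy(history, min_obs=50, min_rate=0.85):
    """Approve a band only if history gives enough evidence it repays."""
    policy = {}
    for band in {b for b, _ in history}:
        outcomes = [r for b, r in history if b == band]
        rate = sum(outcomes) / len(outcomes)
        # Bands with too little data never clear the evidence bar,
        # regardless of how they would actually repay.
        policy[band] = len(outcomes) >= min_obs and rate >= min_rate
    return policy

policy = trained_approval_policy(history)
print(policy["affluent"], policy["thin_file"])  # True False
```

The thin-file band is denied purely for lack of history, which then prevents new history from accumulating—the feedback loop the paragraph above describes.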
Privacy is another glossed-over cost center. Serving lower-income or thin-file customers often means tapping alternative data—payments apps, rent, utilities, gig-income flows. Who controls that data? Who actually gave informed consent? Expanding the surveillance footprint of finance in the name of inclusion is a political and legal gamble, not a free upgrade. Mishandle it and you don’t just have a customer-experience problem—you have a liability problem.
So yes, AI can expand access, but only if banks do the hard work of data governance: provenance, consent, retention, explainability. Those aren’t elegant features you bolt on; they’re infrastructure. And real infrastructure usually hits the P&L before it flatters the earnings call.
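What “governance as infrastructure” means in practice is that consent and provenance travel with the data and are checked before a feature ever reaches a model. A minimal sketch, assuming a hypothetical record shape (the field names, sources, and retention window here are all invented):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FeatureRecord:
    name: str
    value: float
    source: str          # provenance: where the data came from
    consent_scope: set   # purposes the customer actually agreed to
    collected_on: date

def usable_for(record, purpose, max_age_days=365, today=date(2025, 1, 1)):
    """Gate model inputs: a feature is usable only if consent covers this
    purpose and the data is still inside its retention window."""
    fresh = (today - record.collected_on).days <= max_age_days
    return purpose in record.consent_scope and fresh

# Rent-payment data collected for account servicing, not underwriting.
rent = FeatureRecord("rent_payments", 0.98, "landlord_api",
                     {"account_servicing"}, date(2024, 6, 1))
print(usable_for(rent, "credit_underwriting"))  # False: consent doesn't cover it
print(usable_for(rent, "account_servicing"))    # True
```

The point of the sketch is the asymmetry: the data exists and is fresh, but without consent for the new purpose it cannot feed the underwriting model—exactly the kind of plumbing that hits the P&L before it flatters the earnings call.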
Banks won’t change unless the incentives do
Let’s be real: banks focus on affluent households because the economics tell them to. Larger balances, more fee-generating products, lower servicing friction per dollar—those are structural advantages, not cultural bias that can be trained away by an algorithm.
From my decade at Goldman, I can tell you: institutions don’t redeploy cost savings out of altruism. They follow return profiles. If AI trims operating expense—great—but that doesn’t automatically turn low-balance customers into attractive segments. The Fortune piece quietly assumes that every dollar saved from automation is a dollar redirected toward inclusion. That’s not a law of finance; that’s a hope.
The real constraint is capital and compliance. Deposit pricing, reserve requirements, capital charges, model risk oversight—that whole stack still applies, just with fancier models on top. If new AI-driven products aimed at underserved groups attract tighter regulatory scrutiny or higher perceived risk, they can easily end up with thinner margins than the legacy offerings they’re supposed to replace. The tech improves; the hurdle rate doesn’t.
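The hurdle-rate point can be made with back-of-envelope arithmetic. All the numbers below are hypothetical and chosen only for illustration: even if automation halves the operating cost of a small account, thin fee income and higher expected credit losses can keep the return on allocated capital below the bank’s cost of equity.

```python
# Toy unit economics (all figures hypothetical, not from the Fortune piece).
def account_roe(balance, nim=0.025, fee_income=0.0, opex=50.0,
                expected_loss=0.0, capital_ratio=0.10):
    """Return on allocated equity for a single customer relationship."""
    revenue = balance * nim + fee_income       # net interest margin + fees
    profit = revenue - opex - expected_loss
    equity = balance * capital_ratio           # capital held against the exposure
    return profit / equity

HURDLE = 0.12  # illustrative cost of equity

# Affluent relationship: $50k balance, rich fees, full-service opex.
affluent = account_roe(50_000, fee_income=120.0, opex=80.0)

# Low-balance relationship: automation halves opex (60 -> 30), but thin fees
# and higher expected losses still leave the return under the hurdle.
low_balance = account_roe(2_000, fee_income=5.0, opex=30.0, expected_loss=10.0)

print(f"affluent:    {affluent:.3f} vs hurdle {HURDLE}")   # 0.258
print(f"low-balance: {low_balance:.3f} vs hurdle {HURDLE}")  # 0.075
```

Nothing in that arithmetic is changed by a better model: the technology lowers one line item while the denominator and the hurdle stay put.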
The fintech “proof point” isn’t as clean as it looks
Supporters will argue that automated underwriting and robo-advisers already cut costs and expanded access, and that fintechs show you can build businesses around previously ignored customers. There’s some truth there.
But look at what many of those firms actually did: they targeted slices of the market where risk-adjusted yield looked attractive, built slick onboarding flows, and then either raised fees, narrowed the product set, or sold the best customers upstream to incumbents. The hard parts—serving volatile incomes, handling small balances without punitive fees, staying patient through credit cycles—tend to migrate back to the traditional banking system or get pushed onto consumers.
History rhymes here. Microfinance was once sold as the tool that would fundamentally reshape access. Instead, it became a viable but niche product that helped some borrowers while leaving the broader banking structure intact. AI risks following the same path: not a revolution, just another efficiency upgrade folded into an old profit model.
Regulation and governance are the real gatekeepers
What the Fortune piece underplays is the institutional scaffolding required to deploy AI at scale. If banks are going to use opaque models to decide who gets what, they need explainability, audit trails, and meaningful redress when things go wrong. Supervisors want to know why a loan was denied. Community organizations want to see that historical exclusion isn’t being quietly rebuilt in code.
That kind of transparency is slow and expensive. It pushes firms away from the most inscrutable black boxes and toward models they can defend under questioning. It also creates tension between “best predictive performance” and “defensible, fair, and understandable outcomes.” AI doesn’t remove trade-offs; it just moves them into the architecture of the system.
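One concrete form the “defensible” end of that trade-off takes is the old-fashioned points-based scorecard, which gives up some predictive power in exchange for auditable reason codes on every denial. A minimal sketch with invented features, thresholds, and cutoffs:

```python
# Hypothetical points-based scorecard: weaker than a black box on raw
# prediction, but every decision comes with reasons an examiner can audit.
SCORECARD = [
    # (feature, threshold, points, reason code when points are lost)
    ("months_of_income_history", 12, 30, "Insufficient income history"),
    ("on_time_payment_rate", 0.95, 40, "Late payments on record"),
    ("debt_to_income", 0.35, 30, "Debt-to-income too high"),
]
CUTOFF = 60

def score(applicant):
    points, reasons = 0, []
    for feature, threshold, pts, reason in SCORECARD:
        value = applicant[feature]
        # debt_to_income is "lower is better"; the others are "higher is better"
        passed = value <= threshold if feature == "debt_to_income" else value >= threshold
        if passed:
            points += pts
        else:
            reasons.append(reason)
    decision = "approve" if points >= CUTOFF else "deny"
    return points, decision, reasons

applicant = {"months_of_income_history": 8,
             "on_time_payment_rate": 0.97,
             "debt_to_income": 0.28}
print(score(applicant))  # (70, 'approve', ['Insufficient income history'])
```

The structure, not the specific numbers, is the point: a supervisor can trace every point awarded or withheld, which is exactly what the most accurate opaque models cannot offer.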
Policy can shift the calculus. Safe harbors for explainable models, tax incentives tied to serving thin-file or low-balance populations, data-sharing standards that let nonbanks compete—those are the sorts of levers that can make inclusion economically rational instead of reputational theater. Without them, you’re betting that goodwill and marketing pressure will beat quarterly earnings targets. That’s not a high-probability trade.
A tougher standard for “AI for everyone”
If we’re serious about AI expanding access, banks should be held to a more concrete checklist than “use AI” and “care about inclusion”:
- Define and publish minimum explainability standards for high-stakes products.
- Provide clear, plain-language disclosure of what data feeds into key models.
- Align incentives—through capital treatment, fees, or tax policy—so that serving low-balance, volatile-income customers isn’t always the first line item to get cut.
So yes, Fortune is right to argue that AI could help build a financial system that serves more than the affluent. But unless governance, incentives, and policy catch up to the technology, AI will mostly make existing winners more efficient at serving people who already have a seat at the table.