Why Tech Quality Alone Won't Save Your Practice
Kitces says tech that prioritizes quality boosts productivity. That claim is both sensible and dangerously vague. The wealthmanagement.com piece nails a central truth for advisory firms: sloppy tech multiplies work. But the column leaves open what "quality" actually means, who pays for it, and whether a quiet, measured approach to engineering always beats rapid, iterative deployment. I've seen both outcomes in Silicon Valley.
Let’s start with what the article gets right: advisors don’t actually care about “innovation” in some abstract sense. They care about fewer headaches. When custodial tools stop throwing random reconciliation issues, when integrations actually sync the fields they promise, when client data doesn’t mysteriously fork itself into three versions — that’s when productivity shows up as something other than a talking point on a vendor one-pager.
But “quality” is a slippery metric. The word sounds objective; it isn’t. You might mean accuracy of client data, stability of integrations, UX polish, regulatory compliance, or the depth of an advisor-facing workflow. Those are wildly different projects with wildly different costs and timelines. The headline “Kitces: Tech Focus on Quality Boosts Productivity” is fine as a thesis, but the column treats quality as self-evident. It’s not.
Take two scenarios. If a custodian invests to eliminate reconciliation errors, advisors get time back and client outcomes plausibly improve. If a portfolio-reporting vendor spends the same development effort on prettier PDFs, you may see fewer support tickets, but you don’t automatically get better advice or safer accounts. Both teams can claim they “improved quality.” Only one touched the advisor’s real workload.
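To put rough numbers on the asymmetry (every figure below is invented for illustration, not drawn from the column), here's a back-of-envelope sketch in Python of what the reconciliation fix returns to a single advisor, measured in hours:

```python
# Back-of-envelope estimate of advisor hours reclaimed when a custodian
# eliminates reconciliation exceptions. All numbers are illustrative
# assumptions, not data from the column.

accounts = 400                 # accounts per advisor (assumed)
exceptions_per_account = 0.1   # exceptions per account per month (assumed)
minutes_per_exception = 12     # manual research and correction time (assumed)

monthly_minutes = accounts * exceptions_per_account * minutes_per_exception
annual_hours = monthly_minutes * 12 / 60
print(f"Hours reclaimed per advisor per year: {annual_hours:.0f}")  # ~96
```

The prettier-PDF project has no line in this arithmetic, which is exactly the point: it may be worth doing, but it doesn't touch toil.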
This is where governance stops being a buzzword and starts being a survival tactic. Product teams need crisp KPIs that map code changes to advisor time saved or error reduction: fewer exceptions per account, fewer manual journal entries, faster resolution of client questions. Without that link, “quality” becomes a PR tax on the roadmap — a label you slap on any initiative that looks nice in a demo.
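As a sketch of what that link could look like in practice (the metric names and the pass/fail rule are my assumptions, not anything Kitces prescribes), imagine every release being scored against advisor-toil metrics before anyone gets to call it a quality win:

```python
# A minimal sketch of tying releases to advisor-toil KPIs rather than
# to a vendor's self-declared "quality." Metric names are illustrative.

from dataclasses import dataclass

@dataclass
class ReleaseImpact:
    release_id: str
    exceptions_per_1k_accounts: float  # reconciliation exceptions, post-release
    manual_journal_entries: int        # per month, post-release
    median_resolution_hours: float     # time to resolve client questions

def improves_advisor_toil(before: ReleaseImpact, after: ReleaseImpact) -> bool:
    """A release counts as a quality win only if it moves at least one
    toil metric in the right direction and regresses none of them."""
    deltas = [
        after.exceptions_per_1k_accounts - before.exceptions_per_1k_accounts,
        after.manual_journal_entries - before.manual_journal_entries,
        after.median_resolution_hours - before.median_resolution_hours,
    ]
    return any(d < 0 for d in deltas) and all(d <= 0 for d in deltas)
```

The specific metrics will differ by firm; the discipline of refusing the "quality" label until they actually move is the part that generalizes.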
This isn't a new problem. William Gibson's Neuromancer gave us shimmering cyberspaces whose elegance said nothing about whose interests they served. Modern wealth-tech is similar: a beautiful client portal can absolutely coexist with brittle plumbing underneath. Aesthetics can hide sloppiness if you don't measure what actually breaks.
There's another awkward angle: a quality-first strategy tends to favor firms that can afford it. Long development cycles, deep QA benches, and multiple non-production environments are easy to champion if you're a large custodian or a well-capitalized fintech. Much harder if you're a small RIA building light workflow automation on top of off-the-shelf tools.
Kitces frames productivity gains as an industry win, but there’s a distributional effect lurking in the background. Incumbents can consolidate advantages by shipping higher-quality integrations and data feeds that make switching costlier. “Quality” becomes both a feature and a moat. That’s great if you’re inside the walled garden; less great if you’re trying to build something new that plugs into it.
And then there’s speed. Critics of quality-first development will say: clients want new capabilities, yesterday. A “good enough” release beats waiting for some Platonic ideal of perfection. Sure, but that argument assumes releases only deliver upside. In wealth management, bad releases create downstream work: reconciliations, client correction letters, and compliance clean-up that swamps any marginal delight from a shiny new widget.
The trade-off isn’t speed versus quality; it’s uncontrolled velocity versus disciplined experimentation. Feature flags, sandbox environments, and staged rollouts aren’t luxuries — they’re exactly how you ship fast without treating client portfolios as your QA department. The wealthmanagement.com piece hints at this, but it doesn’t really dig into the mechanics. That’s where the productivity story either becomes real (because you can safely iterate) or stays aspirational (because every change risks a mess in the back office).
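For readers who haven't lived inside one of these pipelines, here's the shape of the mechanism, reduced to a few lines. The flag name, rollout percentage, and bucketing scheme are illustrative assumptions, not any particular vendor's system:

```python
# A minimal staged-rollout sketch: a flag gates the new code path, exposure
# starts with a small cohort of firms, and a kill switch reverts instantly.

import hashlib

ROLLOUT_PERCENT = {"new_reconciliation_engine": 5}   # expose 5% of firms first
KILL_SWITCH = {"new_reconciliation_engine": False}   # flip to True to revert

def flag_enabled(flag: str, firm_id: str) -> bool:
    """Deterministically bucket each firm so its experience stays stable
    across sessions instead of flickering between code paths."""
    if KILL_SWITCH.get(flag, False):
        return False
    bucket = int(hashlib.sha256(f"{flag}:{firm_id}".encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT.get(flag, 0)

if flag_enabled("new_reconciliation_engine", firm_id="RIA-0042"):
    pass  # run the new engine for this firm's nightly reconciliation
else:
    pass  # fall back to the proven path
```

Old and new paths coexist, exposure widens only as the toil metrics hold, and nobody's client portfolio becomes the QA department.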
There’s also a cultural piece that technology columns chronically underplay. Engineers don’t deliver advisor-relevant quality alone. Product managers, compliance teams, operations staff, and front-line advisors have to co-own what “good” looks like. Otherwise you get exquisitely tested features that nobody uses, and advisors quietly sliding back to spreadsheets and email templates because those better match their mental models.
Look at how Salesforce or Microsoft succeed in financial services: not because their software is universally loved, but because they pour effort into configuration, admin tooling, and integration patterns that map to messy real-world workflows. They’re not just selling quality code; they’re selling quality fit. Those are different things, and only one shows up directly in a developer’s unit tests.
So the connective tissue here is pretty simple: if “quality” doesn’t map to advisor toil, it’s set dressing. Kitces is right to push back on the “move fast, break things” reflex that never made sense under a fiduciary standard. But as an operating principle, “focus on quality” only works if firms define the specific dimensions that cut error rates, reduce manual cleanup, and keep advisors out of spreadsheet purgatory — and then measure those, ruthlessly.
Anything else is just a nicer demo script.