AI heightens, not erodes, lawyers’ critical thinking

Sarah Whitfield · Insights

The Thomson Reuters piece starts with a familiar tension: AI will both challenge and strengthen lawyers’ critical thinking. Fair enough. But once you strip away the reassuring tone, you’re left with a quieter, more uncomfortable question: whose thinking is being sharpened, and whose is being automated out of existence?

That’s the part the article grazes, then backs away from.

Law is a delegated craft. Clients buy outcomes. Firms sell time and predictability. Partners sell billables and reputations. So when Thomson Reuters writes about AI “challenging” and “supporting” lawyers, it isn’t just wading into a philosophical debate about judgment. It’s brushing up against the cash flows that shape how judgment is actually produced and priced. Follow the money.

The article is right on one key point: AI can strip out repetition and drudgery. Document review, first-pass research, template drafting — machines can do plenty of that faster than a tired associate at 1 a.m.

But here’s the sleight of hand: freeing time is not the same as improving judgment.

Ask what really happens when software takes over a chunk of junior work. Is that reclaimed time invested in mentoring — walking through why one argument is stronger than another, how to test a proposition against hostile facts, when to push a client and when to pull back? Or is it converted into more throughput — extra client alerts, more “value-add” memos, tighter turnaround demands? The article hints at opportunity but dodges the allocation question: where does the freed capacity actually go inside a firm’s economic engine?

Here’s what they won’t tell you: most firms don’t buy AI tools as an educational investment. They buy them as margin enhancers. Clients push for lower fees; partners push for higher profits. AI slides into that gap like it was built for it. The danger isn’t one spectacular hallucination on a brief. It’s the slow, quiet erosion of apprenticeship, replaced by workflow.

The article nods to bias and data quality, which is the standard checklist in any corporate AI explainer. But the framing is comfortingly narrow: better data in, better outputs out. That makes bias sound like a bug to be patched.

Bias in legal AI is often the opposite — it’s a mirror.

Systems trained on past cases, contracts, and filings will echo the blind spots of the legal record they ingest. If certain claims rarely succeed, if certain plaintiffs rarely win, if some harms are consistently under-litigated, a model that “learns” from that history will treat those patterns as normal. Unless someone intervenes.

Who does the intervening? Not vendors, who are rewarded for speed and functionality. Not cost-sensitive clients demanding instant answers. And not necessarily associates, who risk being evaluated on whether they hit deadlines and stay within budgets, not on whether they questioned the corpus under the hood.

That’s where professional responsibility collides with product design.

When an AI system surfaces a plausible but risky argument, the human lawyer stands as the last filter between pattern and precedent. The article urges vigilance. But vigilance is labor. Labor is billed. Follow the money again.

There’s a historical echo here the piece misses. When Westlaw and Lexis first spread through firms, they didn’t replace critical thinking; they expanded the universe of cases a lawyer could see, then left it to humans to sift, distinguish, and argue. Research got faster, but the interpretive work stayed in the lawyer’s hands because the tools were obviously tools.

Today’s systems feel different precisely because they don’t just retrieve; they synthesize. They don’t only show you the raw material; they propose the sculpture. That shift from “here’s what exists” to “here’s what you should say” is not just another iteration of legal tech. It’s a reallocation of who gets to frame the argument in the first place.

The article is right that legal education and continuing education have to change. But treating AI literacy as a discrete skill — another module on a CLE checklist — undersells the problem. What’s needed sounds far less glamorous: supervisors who demand chain-of-thought, not just clean prose; partners who ask, “Show me where this came from,” before signing their name; evaluation systems that reward associates for flagging model limits, not just for dressing up outputs.

That’s harder to sell than a software subscription.

There’s another gap in the Thomson Reuters piece: malpractice and liability. It raises ethics in a general way, but sidesteps the structural pressures that make risky shortcuts rational. If a lawyer repeatedly uses AI-assisted analysis that later collapses under scrutiny, today’s frameworks will still pin responsibility on the human. That’s as it should be for now. But as more decisions are made under economic pressure to “trust the tools,” the mismatch between accountability and influence will widen. The code writes the draft, but the signature carries the blame.

Supporters of AI in law will counter that these tools, used wisely, elevate the baseline. They’ll argue that by standardizing routine work, AI can reduce rookie errors and free senior lawyers to focus on higher-order strategy. Some of that is true. There are already practice groups that use AI to surface overlooked arguments or outlier cases — adding a second pair of eyes, not a first one.

But notice which firms can use AI this way: the ones with enough slack in their system to treat it as augmentation rather than substitution. The ones that can afford to keep mentoring time sacred instead of monetizing every saved hour twice over.

Thomson Reuters is right about one thing: AI will reshape critical thinking in law. If they’re right that these tools become standard issue, the real dividing line won’t be between firms that adopt AI and those that don’t. It will be between those that use it to protect the space where judgment is made — and those that quietly sell that space to the highest bidder.

Edited and analyzed by the Nextcanvasses Editorial Team | Source: Thomson Reuters

Disclaimer: The content on this page represents editorial opinion and analysis only. It is not intended as financial, investment, legal, or professional advice. Readers should conduct their own research and consult qualified professionals before making any decisions.