What Product Conversations Sound Like When Judgment Is the Real Skill
Across conversations about AI, metrics, and speed, a quieter theme is emerging: judgment hasn’t disappeared from product work—it’s just been displaced. What we choose to do with that matters.
The Moment That Keeps Repeating
Last week, during a remote usability session, a participant stopped mid-task and laughed—softly, almost apologetically. "I know this is probably faster," she said, hovering her cursor over the suggested option, "but I don’t trust it yet." She chose the longer path. It took her almost twice as long, and the system logged it as friction.
What struck me wasn’t the delay. It was the self-awareness. She wasn’t confused. She was making a judgment call.
I’ve been noticing versions of this moment everywhere lately—in research sessions, in internal product reviews, and even in the articles circulating through our feeds. Conversations about sales skills translating into product sense. About AI agents acting on our behalf. About metrics, terminals, automation, and speed. On the surface, these topics seem scattered. But underneath them, there’s a shared tension we’re not naming clearly enough:
We’re building systems that act faster than human judgment—while quietly relying on that judgment to make everything work.
That gap is where many of today’s product conversations actually live.
Skill Transfer Isn’t About Tools—It’s About Seeing People
Several of the most-read pieces this week came from people moving into product—from sales, from operations, from places that product culture once dismissed as "non-strategic." What these stories have in common isn’t a clever career pivot. It’s a reframing of what expertise looks like.
When someone says sales made them a better product manager, what they’re really saying is:
- They learned to listen for intent, not just requests
- They learned to sit with rejection without defensiveness
- They learned how often people agree verbally while hesitating behaviorally
In research, we see this all the time. Participants say yes, click next, rate something a four out of five—and still avoid it when it matters. Behavioral psychology has a name for this: revealed preference. What people do when the stakes are real tells you more than what they say when they’re being polite.
Salespeople are trained to notice that mismatch early. Many product teams aren’t.
Judgment develops in environments where feedback is immediate and human.
This is why cross-functional backgrounds feel newly relevant. Not because of process efficiency, but because they sharpen a person’s ability to read situations where the data hasn’t stabilized yet.
A 2023 Gong study found that top-performing sales reps ask 54% more clarifying questions than average reps. That instinct—to slow down, to probe before acting—is exactly what many AI-accelerated product teams are in danger of losing.
Automation Doesn’t Remove Work—It Moves It
Another thread running through recent writing is operational reduction: systems that cut owner involvement down to 10%, automated price validation, AI agents handling decisions that once required daily human input. These are impressive achievements. They also carry a quiet shift in responsibility.
In one B2B study I worked on last year, a logistics platform automated routing decisions that dispatchers used to make manually. On paper, efficiency jumped 18%. Error rates dropped. Leadership was thrilled.
But in interviews, dispatchers described a different experience:
- They spent more time monitoring than doing
- They felt responsible for outcomes they no longer controlled
- They struggled to explain system decisions to frustrated drivers
The work didn’t disappear. It became interpretive.
This aligns with research from the University of Cambridge on automation complacency, which shows that as systems take on more decision-making, humans are pushed into roles that require higher-level judgment with less contextual grounding. That’s a psychologically demanding place to be.
When products automate action, they must also support sensemaking.
Many don’t. They optimize for outcome metrics while leaving people alone with the emotional and social consequences of those outcomes.
Trust Breaks Aren’t Technical—They’re Experiential
One of the quieter but more important discussions this week asked: when AI gets it wrong, who does the user blame?
In my experience, users rarely blame "the model." They blame the product. Or themselves.
I saw this clearly in a financial planning tool that used AI to flag "unusual spending." Participants trusted it—until it flagged something deeply personal and contextually obvious to them, like a medical expense or a family emergency.
What changed wasn’t accuracy. It was interpretation.
After that moment:
- Users double-checked everything
- They hesitated before acting on recommendations
- They described feeling "watched" rather than supported
A Pew Research Center study in 2024 found that 62% of users lose trust in an AI system after a single unexplained error—even if subsequent performance improves. Trust, once shaken, doesn’t recover linearly.
This is why personalization didn’t get creepy—it got unbounded. Systems made judgments without communicating values, limits, or uncertainty.
Good judgment is legible.
It shows its work. It leaves room for disagreement. It signals when confidence is high and when it isn’t.
Metrics Are Loud. Judgment Is Quiet.
There’s been no shortage of articles about North Star metrics, execution-first startups, faster paths to market. Metrics matter. Speed matters. But metrics are only as good as the judgment framing them.
In research synthesis sessions, I often ask teams a simple question:
"Which number here would you defend to a confused user?"
The room usually goes quiet.
Because defending a metric requires more than calculation. It requires empathy, context, and moral clarity. It forces us to articulate why this outcome matters to a real person.
Here’s what I’ve learned to watch for when teams are over-relying on metrics:
- They celebrate movement without meaning
- They optimize away edge cases that represent real lives
- They treat hesitation as drop-off instead of signal
Judgment lives in those edges.
A McKinsey study often cited for its productivity findings also notes that high-performing teams spend 20–30% more time in sensemaking conversations before acting. Not planning. Sensemaking: aligning on what the data means for humans.
What This Asks of Us Now
Across all these conversations—from sales-to-product journeys to AI agents and terminal-first workflows—I see a shared invitation.
Not to abandon speed. Not to reject automation. But to reinvest in judgment as a first-class product capability.
Practically, that means a few things I’m trying to hold teams accountable to:
- Design for interpretation, not just action. If a system makes a decision, help people understand how and why—and when they should override it.
- Treat hesitation as data. Pauses, workarounds, and second-guessing often point to values conflicts, not usability flaws.
- Borrow skills from people-facing disciplines. Sales, support, operations—these roles are training grounds for judgment under uncertainty.
- Make uncertainty visible. Confidence gradients build trust better than false precision.
None of this shows up cleanly on a dashboard. But it shows up in whether people stay, whether they rely on what you’ve built, whether they forgive mistakes.
The Choice Beneath the Conversation
I keep thinking about that participant who chose the slower path. The system worked. The metric flagged friction. But she left the session feeling respected—because the product allowed her to decide.
That’s the quiet line we’re all walking right now.
We can build products that act faster than people think. Or we can build products that think with people, even when that takes longer.
The conversations flooding our industry aren’t really about tools, roles, or frameworks. They’re about whether we still see judgment as human work—or as something to optimize away.
From where I sit, the most resilient products of the next few years won’t be the fastest or the smartest. They’ll be the ones that know when to pause.
And make that pause feel intentional, not like failure.
Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.