What We’re Rehearsing Instead of Listening
As AI, metrics, and frameworks promise faster certainty, something quieter is happening in product work. A reflection on why listening — not rehearsing — still matters most.
The Moment That Made Me Pause
Last week, I watched a junior researcher run their first usability session with a tool they were genuinely excited about. Before the participant even joined the call, they told me, half-laughing, half-proud: “I already tested this flow with an AI. I know where it breaks.”
The session itself was fine. The participant completed the task. They hesitated where we expected them to hesitate. They even used language eerily similar to what the AI had surfaced earlier. When the call ended, the researcher looked relieved. Validated.
But something sat wrong with me. Not because the AI had been wrong — it wasn’t — but because the session felt less like an act of discovery and more like a dress rehearsal. We weren’t listening for surprise. We were listening for confirmation.
That small moment has stayed with me as I’ve read through the past few days of product conversations. On the surface, they look diverse: AI as a research team, quiet retention tradeoffs, early death signals, saying no, building faster, simulating users sooner. But underneath, they’re circling the same tension:
Are we still making space to be changed by what we learn — or are we optimizing for feeling prepared?
When Simulation Becomes Substitution
There’s a real reason AI-powered “user simulation” is resonating right now. Teams are under pressure to move quickly, reduce cost, and show confidence earlier. Simulated feedback feels like momentum without risk.
And to be clear: there is value here. I’ve used AI to pressure-test interview guides, surface obvious edge cases, and sketch proto-personas before any real participants are available. In early-stage work especially, it can help teams avoid walking blindly into their first conversations.
But I’m noticing a subtle shift in how it’s being framed — not as preparation, but as replacement.
What Simulation Can (and Can’t) Do
Simulation is good at:
- Reflecting existing patterns back to us
- Stress-testing assumptions we already know how to name
- Providing linguistic confidence (“users might say something like…”)
It is not good at:
- Revealing unarticulated discomfort
- Showing us where people contradict themselves
- Exposing the emotional cost of “almost working”
In a 2023 Nielsen Norman Group study, researchers found that while synthetic user data could replicate high-level usability issues, it consistently failed to surface motivational friction — the reasons users chose not to engage even when they technically could.
That gap matters. Because most products don’t fail at usability. They fail at meaning.
When we over-rely on simulated feedback, we start designing for coherence instead of resonance. The product makes sense. It just doesn’t land.
The Quiet Metrics Everyone’s Watching (and Misreading)
Several of the trending pieces talk about “quiet signals”: early churn, stalled activation, retention plateaus that don’t look dramatic enough to trigger alarms. These are real and important.
But here’s what I think we’re missing: those signals aren’t warnings about the product. They’re warnings about the conversation.
In my own work, the earliest indicator that something is off is rarely a metric spike. It’s a shift in how users talk — or stop talking.
What I Listen For Before the Numbers Drop
- Users completing flows but not elaborating on them
- Feedback that becomes politely abstract (“It works fine”)
- Fewer spontaneous stories, more evaluative statements
In one B2B product I worked on, retention held steady at ~92% for nearly two quarters. On paper, everything looked healthy. But in interviews, users had stopped volunteering context. They answered questions cleanly, efficiently — and without attachment.
Three months later, expansion revenue stalled. Six months later, churn followed.
The signal wasn’t hidden. It just wasn’t quantitative yet.
According to ProfitWell, a 5% increase in retention can increase profits by 25–95%. We quote that stat often. What we talk about less is how early retention erosion begins — long before cancellation, in moments of quiet disengagement.
AI can flag anomalies in behavior. It can’t tell you when someone is emotionally checking out.
The Comfort of Feeling Certain
Many of the current conversations — about tradeoffs, saying no, avoiding death signals — share an underlying promise: If you make the right calls early enough, you can avoid regret later.
I understand the appeal. Product work carries a lot of social and professional risk. Certainty feels like safety.
But some of the most important insights I’ve seen came from moments where the team felt deeply uncertain — even exposed.
A Small Story About Being Wrong
Years ago, I worked on a consumer finance app that tested beautifully in early research. Clear flows. Strong comprehension. High confidence scores.
Then, in a diary study, one participant wrote a single sentence that changed everything:
“I don’t like opening this when I’m already stressed.”
Nothing in our task flows had captured that. No simulated persona would have flagged it. It wasn’t about usability — it was about emotional timing.
We ended up redesigning not the interface, but the entry points: notifications, reminders, language tone. Engagement went up. Support tickets went down. But more importantly, users told us they felt less judged.
That insight required us to be surprised. And surprise requires risk.
What Real Research Still Demands of Us
As tools get faster and smarter, the work of research becomes less about execution and more about judgment.
Judgment looks like:
- Knowing when speed is helpful — and when it’s avoidant
- Recognizing confirmation disguised as validation
- Protecting space for being wrong in front of others
Practical ways I’m trying to hold that line:
- Using AI before research to sharpen questions, not answers
- Delaying synthesis just long enough to sit with discomfort
- Writing down what surprised me before looking at patterns
None of this is anti-technology. It’s pro-attention.
Because the risk isn’t that we’ll stop caring about users. It’s that we’ll start caring about efficiency more than presence.
Listening Is Still the Work
When I think back to that researcher last week, what I wish I’d said sooner was this:
Preparation is not the same as openness.
AI can help us rehearse. Metrics can help us monitor. Frameworks can help us decide. But none of them can replace the moment where a real person says something we didn’t expect — and we let it change us.
That’s still the work. Quiet, human, occasionally uncomfortable.
And if the current wave of product conversations is telling us anything, it’s this: the teams who will build things that last aren’t the ones who feel the most certain early.
They’re the ones who keep listening after certainty would be easier.
Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.