When the Signal Is Noisy: Leading Through Product Contradiction
AI can cluster feedback. Interviews can contradict each other. Usability tests can flip a roadmap overnight. In a noisy product world, judgment—not just data—is the real differentiator.
Last week, I sat in a backlog review where three very smart people were each holding a different “obvious” priority.
Customer success had transcripts from five calls that all pointed to the same missing feature. Growth had funnel data showing a 22% drop-off in onboarding tied to something else entirely. Meanwhile, a usability test from two days earlier suggested neither issue was the real friction — users were confused by our navigation before they ever reached the feature in question.
No one was wrong. That’s what made it hard.
If you’ve been following the current conversations in product circles, you can feel the tension. We’re talking about AI tools that promise to prioritize backlogs for us. We’re debating what to do when user interviews contradict each other. We’re celebrating stories where a handful of usability tests flipped an entire roadmap. And in the same breath, we’re encouraging founders to “just call customers” and validate in days.
Underneath all of it is a shared anxiety: When the inputs multiply and the answers conflict, how do we decide with integrity?
As a product strategy consultant, I spend a lot of time in that exact moment — where the signal is noisy and the team is looking for clarity. What I’m noticing lately isn’t a lack of data. It’s a lack of shared judgment about how to weigh it.
The Illusion of a Single Source of Truth
AI-powered prioritization tools are having a moment. And I understand why.
The average product manager today is juggling:
- Dozens (sometimes hundreds) of open feature requests
- Continuous analytics dashboards
- Sales and customer success escalations
- Qualitative research insights
- Leadership bets and strategic mandates
In a recent Product School survey, over 60% of PMs said prioritization is the most stressful part of their role. That tracks with what I see in the field. The cognitive load is real.
So when a tool promises to analyze feedback themes, weigh impact scores, and suggest what to build next, it feels like relief.
But here’s the quiet risk: there is no neutral backlog.
Every prioritization framework encodes values. RICE, MoSCoW, opportunity scoring — they all privilege certain dimensions over others. When we hand that logic to an AI model trained on historical decisions, we’re not removing bias. We’re freezing it.
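To see how that encoding works, here is a minimal RICE-style sketch. The feature names and numbers are hypothetical; the point is that the formula itself is mechanical, while the judgment lives entirely in how the inputs are defined.

```python
# Minimal RICE-style scoring sketch with hypothetical feature data.
# RICE = (Reach * Impact * Confidence) / Effort.
# Every input below is a judgment call: whose reach counts, how "impact"
# is scaled, how confident we really are. The formula just multiplies
# those judgments together.

features = [
    {"name": "Power-user integration", "reach": 400,  "impact": 2.0, "confidence": 0.8, "effort": 3},
    {"name": "Onboarding rework",      "reach": 1500, "impact": 1.0, "confidence": 0.5, "effort": 2},
]

def rice_score(f):
    """Return the RICE score for a single feature record."""
    return (f["reach"] * f["impact"] * f["confidence"]) / f["effort"]

for f in sorted(features, key=rice_score, reverse=True):
    print(f"{f['name']}: {rice_score(f):.0f}")
```

Change how you define "impact," or decide that reach means new users at risk rather than raw request volume, and the ranking can flip. The framework didn't decide anything; the definitions did.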
I worked with a B2B SaaS team last year that implemented an AI feedback clustering tool. It surfaced a clear pattern: a frequently requested integration. On paper, it had high volume and strong revenue upside.
But when we dug deeper, we saw something the model couldn’t:
- The requests were concentrated among a small segment of power users.
- That segment already had 95% retention.
- Meanwhile, new customers were churning in their first 30 days because they didn’t understand the core workflow.
The model optimized for loudness. The strategy needed to optimize for leverage.
AI can surface patterns. It cannot decide what kind of company you’re trying to be.
When Interviews Disagree (And That’s the Point)
Another thread I’ve seen: designers wrestling with interviews that produce wildly different responses.
One user says the dashboard is “intuitive.” Another calls it “a maze.” One begs for more customization. Another feels overwhelmed by options.
The instinct is to reconcile — to search for the average truth.
But averages flatten reality.
In research synthesis sessions, I often ask teams to map feedback along two axes:
- User context (experience level, urgency, environment)
- User goal (what they were actually trying to accomplish)
When you do this, contradictions often dissolve into segmentation.
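For teams who like to see the mechanics, here is a toy sketch of that mapping. The snippets and tags are hypothetical (they echo the examples above); the technique is simply grouping feedback by context and goal instead of averaging across everyone.

```python
from collections import defaultdict

# Hypothetical interview snippets tagged along the two axes.
feedback = [
    {"quote": "The dashboard is intuitive.", "context": "daily power user", "goal": "monitor known metrics"},
    {"quote": "It's a maze.",                "context": "first-week user",  "goal": "find one specific report"},
    {"quote": "I need more customization.",  "context": "daily power user", "goal": "build team-specific views"},
    {"quote": "Too many options.",           "context": "first-week user",  "goal": "find one specific report"},
]

# Group along context and goal rather than collapsing into one "average" user.
segments = defaultdict(list)
for item in feedback:
    segments[(item["context"], item["goal"])].append(item["quote"])

for (context, goal), quotes in segments.items():
    print(f"{context} / {goal}:")
    for q in quotes:
        print(f"  - {q}")
```

Seen this way, "intuitive" and "a maze" stop being a contradiction and start being two different cells of the grid.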
A recent example: a fintech client heard polarized feedback about their reporting feature. Advanced users wanted granular controls. New users wanted simplicity.
The team initially treated this as a design problem — “What’s the right balance?”
It turned out to be a positioning problem.
Their onboarding flow wasn’t clarifying who the product was for at each stage. Everyone was being dropped into the same experience. The “contradiction” was really two different jobs-to-be-done colliding in one interface.
According to Nielsen Norman Group, five usability tests can uncover around 85% of usability issues — within a specific user group. The part we often forget is that the math changes when you mix groups.
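The often-quoted figure comes from Nielsen and Landauer's model, which estimates the share of issues found with n users as 1 - (1 - L)^n, where L is the probability that a single user surfaces a given issue (roughly 0.31 in their data). A quick sketch of why mixing groups changes the picture:

```python
# Detection-probability sketch based on the Nielsen/Landauer model.
# Share of issues found with n users ~= 1 - (1 - L)^n, where L is the
# chance one user hits a given issue. L = 0.31 is their published average;
# treat it as an assumption, since real values vary by product and task.

L = 0.31

def issues_found(n, L=L):
    return 1 - (1 - L) ** n

print(f"5 users, one group:       {issues_found(5):.0%}")                        # roughly the 85% figure
print(f"5 users split 3/2 groups: {issues_found(3):.0%} and {issues_found(2):.0%}")
```

Five sessions in one coherent group gets you most of the way. Split those same five sessions across two distinct groups and each group's coverage drops sharply, which is exactly why mixed-group findings look contradictory.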
When interviews disagree, it’s rarely because users are irrational. It’s because we’re collapsing distinct contexts into one story.
The work isn’t to eliminate contradiction. It’s to locate it.
Validation in Days, Not Months — and What That Really Requires
I love the renewed push toward calling customers early. Too many teams still hide behind prototypes and dashboards when a 20-minute conversation would surface the truth.
CB Insights famously reports that 35% of startups fail because there’s no market need. Not bad code. Not poor marketing. No need.
Speed matters.
But here’s what I’ve learned: fast validation only works if you’re clear about the decision you’re trying to make.
I once advised a solo founder building a vertical SaaS tool. He had read all the right advice — don’t overbuild, validate quickly, talk to customers.
He booked 15 calls in a week. Impressive.
But when we reviewed the notes, they were a mix of feature ideas, polite encouragement, and vague enthusiasm. No clear pattern.
The issue wasn’t effort. It was framing.
Before the next round of calls, we clarified three hypotheses:
- This problem occurs at least weekly for our target user.
- They currently solve it in a way that costs time or money.
- They would pay at least $X to remove that friction.
Suddenly the conversations sharpened. Instead of “Would you use this?”, he asked, “When was the last time this happened?” and “What did you do?”
Within ten days, the signal was clear. The problem was real — but only for a narrower segment than he expected.
Validation isn’t about speed alone. It’s about disciplined curiosity.
When a Few Tests Change Everything
One of my favorite kinds of story is the one where a small set of usability tests reshapes a roadmap.
Not because it’s dramatic. But because it’s clarifying.
In a growth-stage SaaS company I worked with, leadership was preparing to invest six months into a major feature expansion. It had been on the roadmap for over a year.
Before committing, we ran six moderated usability sessions focused on the existing workflow.
What we discovered was humbling.
Four out of six participants never reached the advanced feature set the expansion was meant to enhance. They were getting stuck earlier — confused by terminology and unsure of the sequence of steps.
In analytics, overall feature adoption looked “moderate.” But session recordings revealed repeated backtracking and hesitation. The feature wasn’t underpowered. It was under-understood.
We postponed the expansion. Instead, we:
- Simplified the primary navigation
- Rewrote onboarding microcopy in plain language
- Added a contextual walkthrough
Within two months, activation rates increased by 18%. Support tickets related to that workflow dropped by nearly a third.
No new features. Just clarity.
The lesson wasn’t “always test before building.” We all know that.
The deeper lesson was this: roadmaps drift when they’re built on assumptions that never get revisited.
A few honest sessions can realign months of planning.
Leading Through the Noise
So where does this leave us?
We have more tools than ever to collect, cluster, and analyze input. We can launch faster. Test cheaper. Synthesize at scale.
And yet the core challenge remains profoundly human: making a decision when the evidence is incomplete and sometimes contradictory.
Here’s the pattern I’m seeing across teams that navigate this well:
1. They separate signal gathering from decision-making.
They let research be messy. They let dashboards be exploratory. But when it’s time to decide, they articulate the criteria clearly — impact on strategy, target segment, long-term leverage.
2. They name the trade-offs explicitly.
Instead of pretending the data “made the decision,” they say: “We’re choosing to optimize for new user activation over power-user depth this quarter.”
Clarity reduces resentment.
3. They revisit assumptions on a cadence.
Not every week. But regularly.
What was true six months ago may not be true now. Markets shift. Segments evolve. Your own capability changes.
The most effective product leaders I know treat strategy as a living hypothesis, not a fixed declaration.
Data informs. Research reveals. Judgment decides.
That judgment isn’t mystical. It’s built from experience, context, and a clear sense of who you’re serving.
When I look at the current discourse — AI prioritization, conflicting interviews, rapid validation — I don’t see confusion. I see a community grappling with scale. We’re trying to honor user reality without drowning in it.
And that’s a good tension to have.
Because underneath the dashboards and transcripts and roadmaps are real people. The founder hoping this works. The PM trying to do right by their team. The user who just wants their job to be a little easier.
The signal is noisy. It always has been.
Our job isn’t to eliminate the noise. It’s to develop the kind of judgment that can move through it — carefully, transparently, and with care for the humans on both sides of the screen.
Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.