The Comfort of Frameworks — and the Conversations They’re Replacing
Frameworks like RICE bring clarity to product decisions. But when numbers start replacing judgment, we risk losing the human insight that actually drives meaningful innovation.
Last week, I sat in on a prioritization meeting that could have been pulled straight from one of the many RICE-matrix explainers circulating right now.
The spreadsheet was immaculate. Reach, Impact, Confidence, Effort — neatly scored, color-coded, formula-validated. Within twenty minutes, the team had a ranked list of features.
And yet, as the meeting ended, one of the designers lingered behind. She hesitated, then said quietly, “I know it scored low, but every interview we’ve run — people’s faces change when they talk about that problem. It’s bigger than what we captured.”
That moment has stayed with me.
Over the past few days, I’ve watched the product community orbit around similar themes: frameworks for prioritization, posts about why 95% of products fail without research, debates about whether AI can run discovery for us, promises that innovation is “just technique.” There’s a clear throughline: we are looking for reliable methods in an unpredictable world.
And I understand the impulse. After years in UX research, I’ve seen how destabilizing ambiguity can be — especially when deadlines loom and stakeholders are waiting. Frameworks promise clarity. Technique promises repeatability. Scoring systems promise fairness.
But here’s the deeper question I keep coming back to:
When does structure support judgment — and when does it quietly replace it?
The Psychological Safety of a Score
If you’ve ever presented research that challenged a roadmap, you know the room can shift. Priorities are rarely just about users; they’re about revenue targets, political capital, sunk costs, personal bets.
A framework like RICE does something powerful in that context: it creates shared language. Instead of arguing abstractly about what “matters,” teams can point to numbers. It feels objective.
There’s research to support why this feels so reassuring. Studies in behavioral psychology show that humans consistently prefer quantified decision-making tools, even when they know the inputs are subjective. In one experiment published in Organizational Behavior and Human Decision Processes, managers favored algorithmic recommendations over human judgment — not because they were more accurate, but because they felt more defensible.
That word is important: defensible.
Frameworks don’t just help us decide. They help us justify.
And justification reduces social risk.
In the meeting I mentioned, no one had to say, “I personally think this is less important.” The spreadsheet did the talking. The discomfort was outsourced.
But here’s what we rarely acknowledge: every RICE score is a story wearing numbers.
- Reach depends on assumptions about who counts.
- Impact depends on what outcomes we value.
- Confidence depends on how much we trust our evidence.
- Effort depends on how well we understand complexity.
Each input reflects judgment. The math simply compresses it.
When we forget that, we risk mistaking tidy outputs for truth.
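The compression described above is, mechanically, a single line of arithmetic. Here is a minimal sketch using the standard RICE formula — (Reach × Impact × Confidence) ÷ Effort — with invented sample inputs to show how four judgment calls collapse into one sortable number:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE formula: (Reach * Impact * Confidence) / Effort.

    Every argument is a human judgment call; the arithmetic just
    compresses four stories into one number that sorts nicely.
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Invented example: two features whose tidy scores hide very
# different assumptions about who counts and what we value.
broad_but_shallow = rice_score(reach=5000, impact=1, confidence=0.8, effort=4)  # 1000.0
narrow_but_deep = rice_score(reach=300, impact=3, confidence=0.5, effort=2)     # 225.0
```

Notice that the ranking flips entirely if, say, the confidence estimate for the second feature rises after one more round of interviews — the output is only as objective as its most subjective input.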
The 95% Failure Statistic — and What It Really Points To
Another conversation gaining traction right now is the claim that 95% of new products fail because they solve problems that don’t exist.
The exact percentage is debatable — different reports cite figures between 70% and 90% depending on industry — but the pattern is real. CB Insights’ analysis of startup post-mortems found that 35% cited “no market need” as the primary reason for failure. That’s not a small signal.
So we double down on process. More validation. More discovery sprints. More structured interviews. Sometimes even AI-driven research summaries.
And to be clear: I believe deeply in strong research processes. I’ve built them. I’ve defended them. I’ve watched them save teams from expensive missteps.
But process alone doesn’t prevent failure.
I once worked with a team that conducted over 40 interviews before launching a new reporting feature. The transcripts were thorough. Themes were meticulously coded. The opportunity size was estimated with care.
Six months after launch, adoption was under 10%.
When we went back into the field, the issue wasn’t that the problem didn’t exist. It did. The issue was emotional.
Users associated reporting with scrutiny. Generating reports felt like exposing mistakes. No framework had captured that subtle tension because we’d focused our research questions on functionality and workflow — not vulnerability.
We had validated the task. We hadn’t understood the psychology.
No prioritization matrix would have surfaced that nuance. It required sitting across from someone and noticing the pause before they said, “I guess I don’t really use it unless I have to.”
That’s the kind of insight that doesn’t fit neatly into a cell.
Technique Is Not the Same as Sensitivity
There’s a growing refrain that innovation is technique, not magic. I agree — to a point.
There are absolutely repeatable skills in product work:
- Framing clear problem statements
- Running structured interviews
- Synthesizing qualitative data systematically
- Prioritizing with transparent criteria
Technique reduces chaos. It builds shared understanding. It prevents us from reinventing the wheel every sprint.
But technique is not the same as sensitivity.
Sensitivity is noticing when a participant’s tone shifts.
It’s recognizing that a low-confidence score isn’t just about data gaps, but about political hesitation.
It’s sensing when a team is hiding uncertainty behind numbers because saying “we don’t know” feels too risky.
You can’t automate that.
One recent article described letting AI run user research for a week, only to discover where it “failed spectacularly.” I wasn’t surprised. Large language models are powerful summarizers. They’re efficient pattern detectors. But they don’t feel the weight of silence. They don’t detect discomfort in a room.
And in my experience, some of the most important insights live exactly there — in the unsaid.
When we lean too heavily on technique, we risk optimizing for efficiency over understanding.
Frameworks as Conversation Starters — Not Conversation Enders
So what do we do with all of this? Abandon structure? Reject scoring models? Ignore validation processes?
Of course not.
The answer isn’t less rigor. It’s using that rigor more honestly.
Over the years, I’ve found a few practices that shift frameworks from decision-replacers to decision-supporters:
1. Make Assumptions Explicit Before Scoring
Before filling in a RICE matrix, ask:
- What evidence is this score based on?
- What would change our mind?
- Who might disagree with this estimate — and why?
Write those down next to the number. Treat the score as a hypothesis, not a verdict.
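One lightweight way to do this is to keep the answers to those three questions attached to the score itself, rather than in a separate doc no one reopens. A hypothetical sketch — the field names and example data are invented, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class ScoredHypothesis:
    """A prioritization score treated as a hypothesis, not a verdict.

    The three list fields mirror the questions asked before scoring:
    what the score rests on, what would change it, who might dissent.
    """
    feature: str
    score: float
    evidence: list[str] = field(default_factory=list)
    would_change_mind: list[str] = field(default_factory=list)
    likely_dissent: list[str] = field(default_factory=list)

# Invented example entry.
item = ScoredHypothesis(
    feature="Export to PDF",
    score=42.0,
    evidence=["3 of 12 interviewees asked for it unprompted"],
    would_change_mind=["Usage logs showing under 1% of sessions touch reports"],
    likely_dissent=["Sales, who hears it requested in most enterprise deals"],
)
```

The point isn’t the tooling; it’s that a score traveling with its assumptions invites re-scoring when the evidence shifts, instead of hardening into a verdict.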
2. Separate Evidence from Interpretation
In research readouts, I now deliberately distinguish between:
- What participants did or said (observable data)
- What we think it means (interpretation)
This slows the rush to conclusions and reminds everyone that insight requires judgment.
3. Leave Space for the “Unscorable”
In prioritization sessions, I often ask one final question:
Is there anything that feels important but didn’t score well?
That invitation matters.
Sometimes the answer is no. But sometimes it opens a conversation about brand trust, emotional friction, or long-term positioning — factors that resist tidy quantification but shape product success profoundly.
4. Normalize Uncertainty Publicly
When leaders say, “Based on what we know today, this is our best call,” it does something subtle but powerful. It frames decisions as evolving rather than absolute.
In my experience, teams that can tolerate uncertainty make better long-term bets than teams that hide it behind polished dashboards.
The Human Cost of Over-Optimization
There’s one more layer to this conversation that feels important.
When we prioritize purely by reach and impact, we implicitly define which users matter most.
If a feature serves a smaller, vulnerable segment, it may never win in a scoring model. And yet, supporting that segment might define the product’s integrity.
I once worked on a financial tool where a small percentage of users relied on accessibility features to manage their accounts independently. Enhancing those features scored low in reach. But during one session, a participant told me, “This is the only way I can check my balance without asking my son.”
That sentence never appeared in the RICE spreadsheet.
But it reshaped our roadmap.
Because product decisions aren’t just economic calculations. They’re value statements.
Frameworks can clarify trade-offs. They cannot determine what we care about.
That remains a human responsibility.
Holding Structure and Judgment Together
What I’m seeing in the current wave of conversations isn’t a community obsessed with shortcuts. It’s a community searching for stability. We’re building faster than ever. AI accelerates execution. No-code tools compress timelines. The pressure to ship is relentless.
In that environment, frameworks feel like anchors.
And they are — when used well.
But anchors can also keep us from adjusting to shifting tides.
The real craft of product work, at least as I’ve experienced it, lives in the tension between structure and sensitivity. Between scoring and sensing. Between technique and care.
It’s the moment after the meeting, when someone says, “This doesn’t quite capture what I’m seeing.”
It’s the willingness to revisit a confident decision when new insight emerges.
It’s remembering that behind every impact estimate is a person trying to get through their day a little more smoothly.
Frameworks make our work clearer. They make it more communicable. They help teams move.
But they don’t absolve us from the harder task: choosing what matters, and owning that choice.
As product builders and researchers, we don’t just design features. We design priorities. And those priorities shape people’s experiences in ways no matrix can fully predict.
The spreadsheet may rank the options.
But the responsibility — and the humanity — is still ours.
Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.