The Work People Are Actually Asking For (Even When They Say They Want Growth)
Across growth, research, and product conversations, a deeper question keeps surfacing: where does belief come from when validation is thin? A reflection on conviction, care, and early signals.
The Question Behind the Question
Over the last day or so, I’ve watched a familiar set of posts scroll by. Founders asking how to get their first users with zero distribution. Designers wondering if research is just a polite way to rubber‑stamp decisions. PMs doing everything “right” and still feeling like nothing is moving. Researchers quietly worried they’re becoming optional.
On the surface, these look like different problems. Growth. Process. Career anxiety. AI. Motivation. But the longer I sit with them, the more they blur into a single, quieter question:
What actually makes this work feel real—both to users and to ourselves?
As a product designer, I’ve learned that the most important signals are rarely the loud ones. They show up in hesitations, in misaligned effort, in the gap between what people ask for and what they’re actually trying to resolve. And right now, that gap feels especially wide.
What I’m seeing isn’t a lack of tactics or frameworks. It’s a deeper uncertainty about where belief is supposed to come from when metrics are thin, authority is shaky, and validation feels increasingly performative.
Validation Has Become a Stand‑In for Conviction
A lot of the current conversations orbit around validation:
- “How do I get users when I have zero distribution?”
- “How do I market from zero without shouting into the void?”
- “Is user research just used to justify decisions already made?”
These aren’t naive questions. They’re honest. But they reveal something important: we’re treating validation as the thing that gives us permission to believe.
I’ve watched this play out on teams.
Research gets commissioned not to learn, but to feel safer. Early growth experiments get framed as proof points rather than probes. Feature votes become moral arguments. Dark mode versus SSO isn’t really about prioritization—it’s about whose belief counts.
When belief is outsourced to numbers too early, we stop noticing what the numbers can’t yet say.
Here’s a data point that’s worth sitting with: according to CB Insights, 35% of startups fail because there’s no market need. That stat gets quoted endlessly. But what’s often missed is how many teams misread early ambiguity as lack of need. Early signals are weak by nature. They’re conversational, not statistical.
In early‑stage work, conviction has to precede validation, not the other way around. Otherwise every insight becomes fragile—easily overturned by the next loud opinion or metric bump.
What this looks like in practice
I worked with a small B2B SaaS team a couple of years ago. They were diligent—weekly user interviews, beautifully tagged insights in Dovetail, thoughtful decks shared across the company. And yet, every roadmap conversation reset to zero.
The issue wasn’t a lack of data. It was that no one was willing to say: “Based on what we’ve seen, this matters more than the alternatives.”
Research was doing its job. Judgment wasn’t.
Growth From Zero Is Not a Distribution Problem
Several threads ask some version of: What would you do first if you had no audience, no brand, no distribution?
What’s striking is how often the advice drifts toward channels—Reddit, X, cold DMs, SEO, Product Hunt—without naming the harder part: exposure without armor.
Getting your first users isn’t primarily a marketing problem. It’s a design problem. Specifically, it’s about designing an interaction where:
- Someone understands what you’re offering
- They feel safe enough to respond honestly
- And you’re prepared to be changed by what they say
One founder shared that they were DMing strangers on LinkedIn, showing a video of their TestFlight app, and asking for early access. They expected nothing. What surprised them wasn’t conversion—it was kindness.
That’s not an acquisition hack. That’s a human moment.
A well-known Nielsen Norman Group finding holds that qualitative testing with as few as five users can surface roughly 85% of usability issues. We quote this stat a lot. But the real lesson isn’t efficiency, it’s proximity. Those five users aren’t data points. They’re collaborators in sense‑making.
What to avoid (even though it’s tempting)
From experience, here’s what consistently undermines early growth:
- Broadcasting before listening: talking at people instead of with them
- Scaling stories too early: polishing narratives before they’ve earned friction
- Mistaking silence for rejection: often it’s confusion, not disinterest
Growth from zero is slow because it’s relational. You’re not filling a funnel. You’re building confidence—yours and theirs.
When Research Becomes Theater, People Feel It
One of the most painful threads to read was the question: how often is user research just used to justify a decision already made?
If you’ve been in this field long enough, you know the answer is: too often.
But I don’t think this is because people don’t care. I think it happens because teams are under pressure to appear user‑led without being given the conditions to actually change course.
Research becomes a performance when:
- Timelines are fixed before learning begins
- Success metrics are defined too narrowly
- Insights are presented without ownership
People can tell when they’re being consulted versus when they’re being used.
There’s also a quiet career anxiety layered in here. Several researchers are openly wondering whether generative AI makes their work expendable. My honest take: bad research is easier to automate than good judgment. The threat isn’t AI—it’s organizations that confuse output with understanding.
A McKinsey report from 2023 showed that companies integrating user insights into decision‑making (not just reporting) were 2x more likely to outperform peers on growth metrics. The integration part is doing a lot of work in that sentence.
A small but meaningful shift
One practice I’ve seen help:
After research readouts, explicitly ask:
- What are we more confident about now?
- What are we less confident about?
- What decision does this force us to make—or delay?
If none of those change, the research didn’t fail. The process did.
Signal, Noise, and the Hunger for Certainty
Another thread described building an AI monitoring SaaS and realizing that more data reduced user confidence instead of increasing it. Too many alerts. Too many dashboards. Too much noise.
This resonated deeply.
Across roles—founders, PMs, designers—I’m seeing the same pattern: we’re drowning in signals but starving for meaning.
Feature requests pile up. Votes stack unevenly. Revenue pulls one way, usage volume another. Dark mode versus SSO becomes a proxy war for values.
Here’s the uncomfortable truth: prioritization isn’t a math problem. It’s an ethical one.
You’re deciding:
- Who you’re willing to disappoint
- Whose work you’re optimizing for
- What kind of relationship you want with your users
No framework resolves that tension. At best, it makes it explicit.
A design lens that helps
Instead of asking “Which feature wins?”, try asking:
- What belief does this reinforce for our core user?
- What behavior does it make easier—or harder?
- What future does it quietly commit us to?
These questions slow things down. That’s the point.
The Stalled Feeling Is a Signal Too
One post that stayed with me described doing all the “right” PM work—roadmaps, syncs, research loops—and still feeling like nothing was moving. No urgency. No energy. No shared belief.
I’ve been there.
When progress feels fragile, it’s often because the work has lost its narrative center. Not the pitch deck story, but the internal one: why this matters now, to these people.
Teams don’t burn out from too much work. They burn out from work that doesn’t resolve anything.
This is where design, research, and product leadership intersect in a very human way. Our job isn’t just to ship features or generate insights. It’s to help teams see clearly enough to commit.
That commitment is what users feel when a product makes sense. It’s what early adopters respond to. It’s what no amount of surface‑level validation can replace.
Coming Back to Care
If there’s a throughline in all these conversations, it’s this: people are asking for certainty in a phase of work that doesn’t offer it.
What is available is care.
Care shows up as:
- Willingness to sit with weak signals
- Courage to make decisions without applause
- Openness to being wrong in front of real users
As designers and researchers, we have a particular responsibility here. We’re often the closest to the moments where people hesitate, misunderstand, or quietly adapt. Those moments are easy to smooth over. They’re also where the truth lives.
I don’t think the community is lost. I think it’s tired of pretending that growth, research, and progress are mechanical.
They’re not.
They’re relational. Interpretive. Human.
And the work people are actually asking for—beneath the tactics, beneath the anxiety—is help learning how to stand inside that uncertainty with a little more honesty, and a little more care.
That’s not something AI can replace. But it is something we have to choose, again and again, to practice.
Alex leads product design with a focus on creating experiences that feel intuitive and human. He's passionate about the craft of design and the details that make products feel right.