The Work That Doesn’t Show Up on the Roadmap Is Eating Our Margins
Across conversations about margins, AI, and speed, a quieter cost keeps surfacing in my research: the price of asking users to live with uncertainty, and the way that hesitation slowly eats our products from the inside.
The Moment That Made It Click
Last week, I watched a founder scroll through their own product during a research session. Not a demo. Not a pitch. Just them, quietly trying to complete a task the way a new customer would.
They got stuck on a permissions screen. Nothing was technically broken. The copy was clear. The flow was correct. Still, they hesitated. They read every line twice. Then they looked up and said, almost apologetically, “I guess I’m just trying to understand what I’m agreeing to.”
That pause — that small, human pause — is what I keep thinking about as I watch the conversations unfolding in our community right now. About margins quietly leaking away. About selling before building. About AI features racing ahead of trust. About testing that looks great in theory and collapses in practice.
Because what I’m seeing underneath all of it is this: we are dramatically underestimating the cost of asking people to feel uncertain.
Not confused. Not dissatisfied. Uncertain.
And uncertainty doesn’t show up neatly on dashboards.
Margins Don’t Just Leak — They Evaporate Through Doubt
There’s been a lot of talk this week about SaaS margins — how teams finally hit product-market fit, only to realize they’re losing money anyway. Rising infrastructure costs. Support overhead. Security spend. AI inference fees. All real. All painful.
But in research, I keep seeing a quieter contributor to margin erosion: products that technically work, but emotionally wobble.
When people don’t quite trust what’s happening, they compensate in ways that cost companies real money:
- They contact support instead of self-serving
- They slow down adoption, extending onboarding periods
- They underuse features they’re already paying for
- They churn later, after consuming disproportionate resources
One B2B SaaS team I worked with last quarter had stellar activation metrics. Over 80% of new users completed onboarding within the first day. On paper, it looked great.
But when we dug into session replays and interviews, a pattern emerged. Users were completing steps — but double-checking everything. Exporting data “just in case.” Keeping parallel spreadsheets. Avoiding automation features that could have saved hours.
The result?
- Power users generated 3× more support tickets than expected
- Infrastructure costs scaled faster than revenue
- Expansion revenue lagged despite high usage
Nothing was broken. But nothing felt settled either.
Trust work is margin work — even when finance dashboards don’t label it that way.
Selling Before Building Still Requires Someone to Believe You
Another theme surfacing right now is the push to sell before building. Validate demand. Get commitments. Don’t waste cycles.
I agree with the intent. I’ve seen too many teams pour care into products no one asked for.
But here’s what doesn’t get said enough: early selling isn’t a shortcut around trust — it’s an early test of it.
In interviews with founders trying to pre-sell, I often hear frustration framed like this:
“People say they’re interested, but they won’t commit.”
When we listen closely to the calls, the hesitation usually isn’t about price or scope. It’s about believability.
Not whether the founder is honest — but whether the future being described feels solid enough to step into.
People ask questions like:
- “What happens if this breaks?”
- “How manual is this behind the scenes?”
- “Who’s actually responsible if something goes wrong?”
These are trust questions wearing operational clothing.
And they’re the same questions users ask later — silently — inside the product.
Selling before building works best when teams:
- Name the uncertainty instead of smoothing it over
- Show where human judgment still exists
- Make clear what’s automated and what isn’t
Ironically, acknowledging what’s unfinished often increases confidence. People relax when they know where the edges are.
Why Dev-Owned Testing Keeps Breaking Down
The Hacker News conversation about dev-owned testing struck a nerve for me, because I’ve watched this play out across organizations.
In theory, it’s elegant. Developers write tests. Quality improves. Feedback loops tighten.
In practice, what often happens is subtler.
The tests pass. The system works. But the experience frays.
Why?
Because most automated tests are excellent at validating correctness — and terrible at detecting hesitation.
They don’t see:
- The moment someone rereads a warning
- The instinct to open a second tab “just to be safe”
- The decision to delay an action until tomorrow
Those moments live in human judgment, not system state.
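To make that limit concrete, here is a minimal sketch of what a typical end-to-end test can and cannot observe. It assumes a hypothetical Playwright-style spec for a permissions screen like the one above; the route, selectors, and copy are illustrative, not drawn from any real product.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical end-to-end test for a permissions screen.
// It verifies correctness: the flow completes and the expected state results.
test('user can grant workspace permissions', async ({ page }) => {
  await page.goto('/settings/permissions'); // illustrative route
  await page.getByRole('checkbox', { name: 'Allow data export' }).check();
  await page.getByRole('button', { name: 'Save changes' }).click();
  await expect(page.getByText('Permissions updated')).toBeVisible();

  // What this assertion cannot see:
  // - how long the user lingered before clicking Save
  // - whether they reread the copy or opened a second tab "just to be safe"
  // - whether they felt settled enough to rely on the setting afterwards
  // Those signals live in session replays and interviews, not in test runners.
});
```

A test like this stays green through every one of the hesitations described above, which is exactly the gap that watching real people fills.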
In one internal tool I studied, error rates dropped to near zero after a testing overhaul. Leadership celebrated.
Two months later, usage quietly declined.
In interviews, employees said things like:
“I trust it to work. I’m just not always sure it’s appropriate.”
That distinction matters.
Correctness earns reliability. Judgment earns trust.
And judgment requires someone to watch real people navigate real ambiguity.
2026 Isn’t Just the Trust Era — It’s the Accountability Era
There’s a growing consensus that after the AI feature explosion of 2025, 2026 will be about trust. I think that’s directionally right — but incomplete.
What users are really asking for now is accountability they can feel.
In research sessions with AI-powered products, I hear versions of the same question:
“Who’s responsible for this decision?”
Not in a legal sense. In a human one.
People want to know:
- Can I understand why this happened?
- Can I intervene if it feels wrong?
- Will someone notice if this causes harm?
A 2024 Pew study found that 52% of users were uncomfortable with AI making decisions without human oversight, even when accuracy was high. Accuracy alone doesn’t settle people.
What does?
Clear signals of care.
That might look like:
- Explicit review moments instead of silent automation
- Language that explains trade-offs, not just outcomes
- Visible paths to recourse
These aren’t “UX flourishes.” They’re psychological anchors.
And yes — they take time to design.
But they also reduce downstream costs: fewer escalations, fewer reversals, fewer long-tail trust failures that no growth chart prepares you for.
The Quiet Pattern Connecting All of This
When I step back from this week’s conversations — about margins, validation, testing, AI, speed — I see a shared assumption:
That the fastest path to success is removing friction.
But not all friction is waste.
Some friction is orientation. Some is reassurance. Some is the space people need to decide they’re safe enough to proceed.
In session after session, the moments that matter most are not where people struggle — but where they pause.
Those pauses are data.
They tell us:
- Where responsibility feels unclear
- Where stakes feel higher than we acknowledged
- Where the product moved faster than trust could follow
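If you want to start treating those pauses as data, one lightweight approach is to look for unusually long gaps between interaction events you likely already collect. The sketch below is an assumption-heavy illustration: the event shape, the field names, and the fifteen-second threshold are all hypothetical, and the right threshold depends on your product.

```typescript
// Minimal sketch: flag screens where users pause far longer than usual.
// The event shape and the 15-second threshold are illustrative assumptions.
interface InteractionEvent {
  userId: string;
  screen: string;    // e.g. "permissions", "billing"
  timestamp: number; // milliseconds since epoch
}

function findHesitationScreens(
  events: InteractionEvent[],
  pauseThresholdMs = 15_000
): Map<string, number> {
  // Group events per user so gaps are measured within one person's history.
  const byUser = new Map<string, InteractionEvent[]>();
  for (const event of events) {
    const list = byUser.get(event.userId) ?? [];
    list.push(event);
    byUser.set(event.userId, list);
  }

  // Count long pauses, attributed to the screen the user was on when they paused.
  const pauseCounts = new Map<string, number>();
  for (const userEvents of byUser.values()) {
    userEvents.sort((a, b) => a.timestamp - b.timestamp);
    for (let i = 1; i < userEvents.length; i++) {
      const gap = userEvents[i].timestamp - userEvents[i - 1].timestamp;
      if (gap > pauseThresholdMs) {
        const screen = userEvents[i - 1].screen;
        pauseCounts.set(screen, (pauseCounts.get(screen) ?? 0) + 1);
      }
    }
  }
  return pauseCounts;
}
```

A count like this will never tell you why someone paused, only where. It is a pointer toward the sessions worth watching, not a substitute for watching them.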
If you’re building or leading right now, a few questions I’ve found grounding:
- Where do users slow down even when things are working?
- What decisions are we making silently on their behalf?
- What would it cost us not to make accountability visible?
Coming Back to That Pause
I keep thinking about that founder on the permissions screen.
After a moment, they clicked through. The task completed. The product did what it was supposed to do.
But something shifted.
They turned to me and said, “I think we’ve been measuring the wrong kind of success here.”
Not because conversion dropped. Not because revenue stalled.
Because they finally felt — in their own body — what it’s like to be asked to trust something without being fully met.
That feeling is expensive.
It shows up later as churn, support load, cautious adoption, and margin pressure that no amount of optimization quite fixes.
The work that protects us from it rarely looks urgent. It doesn’t always fit neatly on roadmaps. It often sounds like slowing down when everyone wants to speed up.
But it’s some of the most economically honest work we can do.
Because when people feel settled, they don’t just stay longer.
They stop bracing.
And everything gets lighter from there.
Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.