When Everything Is Optimized — and Nothing Feels Better
We’re optimizing everything — onboarding, metrics, research, even our mornings. But when products get faster and dashboards turn green, why do users sometimes feel less confident? A closer look at the gap between efficiency and trust.
Last week, I sat in on two very different conversations.
In the first, a growth team was celebrating. Their dashboard was a sea of green: activation up 12%, onboarding time down from 21 days to 8, support tickets reduced by more than half. The energy in the room was earned.
In the second, a customer advisory call, one of their most tenured users said something that didn’t show up anywhere on those dashboards: “It’s faster now. I just don’t feel as confident using it.”
No anger. No dramatic churn threat. Just a quiet erosion of trust.
If you’ve been following the recent conversations in product and research circles, you can feel this tension building. We’re talking about the 95% of products that fail because they solve the wrong problems. We’re debating metrics that “smile” while users leave. We’re experimenting with AI-driven research, automated onboarding, and dashboards that generate insights in seconds.
We are optimizing everything.
And yet, in room after room, I’m seeing the same unease: the numbers look better, but the product doesn’t necessarily feel better.
That gap is where a lot of important product work now lives.
The Seduction of Visible Progress
Optimization is intoxicating because it’s measurable.
- Onboarding reduced from 21 days to 8.
- Support tickets down 56%.
- Activation up double digits.
- Response times under 3 seconds.
These are real wins. I’ve driven some of them myself. They matter — especially in SaaS, where small efficiency gains compound.
But optimization has a subtle bias: it privileges what can be counted over what must be interpreted.
When I look at post-mortems for struggling products, I rarely see teams that ignored metrics. I see teams that over-indexed on the wrong ones.
A study from CB Insights found that 35% of startups fail because there’s no market need. That statistic gets quoted often. What’s less discussed is how teams convince themselves there is demand. Early traction metrics. Positive usability tests. Feature usage heatmaps.
All technically true.
But optimization metrics answer a narrower question:
“How efficiently are we executing this model?”
They don’t automatically answer:
“Is this the right model for this human problem?”
That’s a different layer of judgment.
And it’s harder.
Efficiency vs. Confidence
In that advisory call I mentioned, the product team had automated significant parts of onboarding with AI. The data was impressive:
- Onboarding time reduced by ~60%
- Customer satisfaction scores up 41%
- Fewer support tickets per account
But when we dug into qualitative interviews, a different story emerged.
Users were moving faster — but relying more on guesswork. They were completing setup flows without fully understanding key configuration decisions. The AI filled in defaults. The system “helped.” The friction was gone.
So was the reflection.
One operations lead told us:
“Before, it took longer. But I understood what I was setting up. Now it just… happens.”
From a metric standpoint, this is a win. From a product maturity standpoint, it’s more complex.
Efficiency improves short-term experience. Confidence determines long-term retention.
And confidence is harder to measure.
It shows up in subtle behaviors:
- Do users explore advanced features without prompting?
- Do they advocate for the tool internally?
- Do they recover gracefully from errors — or panic?
- Do they trust the system when it behaves unexpectedly?
You won’t find these signals neatly packaged on a dashboard.
They live in conversations. In renewal calls. In the tone of support tickets.
The Optimization Mindset Is Expanding Beyond Product
What’s fascinating is how this same pattern shows up in how we work.
I’ve seen a wave of posts recently about eliminating morning habits that “kill your coding brain.” Hyper-optimizing focus. Automating research with AI. Building infrastructure that orchestrates dozens of models seamlessly.
We’re optimizing the product. We’re optimizing the workflow. We’re optimizing the team.
None of this is inherently wrong. I care deeply about disciplined execution.
But there’s a tradeoff we don’t always acknowledge: optimization compresses space.
It removes slack. It eliminates pauses. It reduces redundancy. It streamlines friction.
And sometimes, in doing so, it eliminates the very moments where insight forms.
One of the best researchers I’ve worked with builds deliberate “inefficiency” into her process. She leaves time between interviews to write raw impressions before reviewing transcripts. She avoids AI summaries until she’s formed her own point of view.
When I asked why, she said:
“If I let the tool decide what’s important too early, I lose the tension.”
That tension — between what users say and what they mean, between what metrics show and what behaviors suggest — is where judgment develops.
Optimization flattens tension.
Judgment requires it.
When Green Metrics Mask Strategic Drift
A few years ago, I worked with a B2B SaaS company whose retention was stable, NPS was decent, and feature adoption was climbing.
On paper, they were healthy.
But something was off.
Sales cycles were getting longer. Champions were harder to identify. Expansion revenue was stalling.
Nothing alarming in isolation. But collectively, it hinted at drift.
We went back to first principles: not “How do we improve onboarding?” but “What job is this product hired to do?”
Through a series of qualitative interviews (no automation, just conversations), we discovered something subtle: the product had evolved from a decision-support tool into a reporting archive. Teams used it to document decisions, not make them.
Usage metrics were strong because documentation is habitual.
But strategic value — the reason it was originally purchased — had quietly eroded.
No metric screamed.
But the product had shifted categories without the team realizing it.
This is what I mean by strategic drift under optimization.
When we continuously improve what exists, we can become less sensitive to whether it still matters.
Optimization is local. Strategy is directional.
You can move quickly in the wrong direction and look excellent doing it.
What to Protect in an Optimization-Obsessed Environment
I’m not advocating for slowness. Or romanticizing friction.
I’m advocating for protecting three specific things that don’t naturally survive aggressive optimization.
1. Interpretive Research
AI can cluster feedback. It can summarize transcripts. It can generate sentiment analysis at scale.
It cannot yet reliably surface:
- Ambivalence
- Politeness masking doubt
- Contradictions users haven’t noticed
- Emotional undertones that shape trust
Use automation to accelerate synthesis — but not to replace first-pass human interpretation.
As a rule of thumb: don’t outsource your first impression.
2. Leading Indicators of Trust
Most dashboards track lagging indicators: churn, retention, NPS.
Add measures that capture user confidence before churn happens:
- Percentage of accounts using advanced features voluntarily
- Time-to-first self-initiated workflow (not prompted)
- Depth of usage across teams (single-threaded vs. distributed)
- Renewal conversations initiated by customers vs. sales
These require cross-functional effort to track. They’re messier.
They’re also far more predictive of durable product-market fit.
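If you want to operationalize indicators like these, a minimal sketch helps make the idea concrete. The account fields below (`advanced_feature_users`, `active_teams`, and so on) are hypothetical stand-ins for whatever your analytics warehouse actually exposes, and the 25% threshold is an illustrative choice, not a benchmark:

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    seats: int
    advanced_feature_users: int   # seats that used an advanced feature unprompted
    active_teams: int             # distinct teams with weekly activity
    renewal_initiated_by_customer: bool

def trust_indicators(accounts):
    """Aggregate early 'confidence' signals across a book of accounts."""
    n = len(accounts)
    return {
        # share of accounts where >25% of seats voluntarily use advanced features
        "voluntary_advanced_use": sum(
            a.advanced_feature_users / a.seats > 0.25 for a in accounts
        ) / n,
        # share of accounts with usage spread beyond a single team
        "distributed_usage": sum(a.active_teams > 1 for a in accounts) / n,
        # share of renewals where the customer reached out first
        "customer_initiated_renewals": sum(
            a.renewal_initiated_by_customer for a in accounts
        ) / n,
    }

accounts = [
    Account("Acme", seats=40, advanced_feature_users=14, active_teams=3,
            renewal_initiated_by_customer=True),
    Account("Globex", seats=10, advanced_feature_users=1, active_teams=1,
            renewal_initiated_by_customer=False),
]
print(trust_indicators(accounts))
```

The point isn’t the arithmetic; it’s that each signal is defined in terms of behavior the user chose, not behavior the product prompted.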
3. Strategic Pauses
High-performing teams are disciplined about delivery.
Fewer are disciplined about reflection.
Build explicit checkpoints into your roadmap where the question is not “Are we on track?” but:
- What assumptions are we still carrying from six months ago?
- What has changed in our users’ environment?
- If we were starting today, would we design the same core flow?
This is not about dramatic pivots.
It’s about recalibrating before drift compounds.
The Human Cost of Relentless Optimization
There’s a quieter layer to all of this.
When we build products that constantly optimize, predict, automate, and streamline, we’re not just shaping metrics. We’re shaping how people feel at work.
Do they feel capable? Do they feel replaced? Do they feel guided — or overridden?
“SaaS is no longer human-only” is a structural shift. AI agents, autonomous workflows, auto-generated dashboards — these are powerful tools.
But when everything becomes automatic, users lose opportunities to build mastery.
And mastery is sticky.
Research in behavioral psychology consistently shows that people value tools more when they’ve invested effort into using them effectively. The “IKEA effect” isn’t just about furniture — it’s about ownership.
If your product removes all effort, it may also remove attachment.
That doesn’t mean we should reintroduce unnecessary friction.
It means we should be intentional about where users remain in control — where they make meaningful decisions rather than passively accepting defaults.
Because long-term loyalty is not built on speed alone.
It’s built on agency.
The Deeper Question
The conversations happening right now — about failing products, misleading metrics, AI research, automated onboarding — are not separate threads.
They’re all circling the same underlying tension:
Are we optimizing for performance, or for partnership?
Performance is about speed, efficiency, output.
Partnership is about trust, clarity, shared understanding.
The strongest products do both. But they don’t confuse one for the other.
When I think back to that advisory call, what stayed with me wasn’t the dashboard. It was the tone in that user’s voice.
“It’s faster now. I just don’t feel as confident.”
That sentence is easy to dismiss. It doesn’t tank your quarterly metrics.
But over time, those quiet sentences accumulate.
And when users leave, they rarely cite the optimization wins.
They cite the moment they stopped feeling sure.
As product leaders, our job isn’t just to make systems more efficient.
It’s to make people more capable.
Optimization can get you to green.
Judgment — and care — are what keep you there.
Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.