
When Optimization Breaks the Thing It Was Meant to Improve

Optimization feels responsible. But when we optimize parts of a system without understanding the whole, we risk breaking what users quietly rely on most.

Maya Chen
8 min read

A few days ago, I watched a participant struggle with what should have been a simple workflow.

On paper, it was "optimized." Fewer steps. Cleaner UI. Smart defaults based on previous behavior. We had even A/B tested the microcopy.

And yet, halfway through the task, she stopped. Not confused exactly — just uneasy.

She said something I’ve been thinking about ever since: “I feel like it’s doing something in the background that I don’t fully understand.”

At almost the same time, I was reading about a Linux kernel optimization that unintentionally introduced a QUIC bug — a technical improvement in how “idle” processes were handled that quietly disrupted something built on top of it. The system behaved differently under specific conditions because what counted as “idle” wasn’t truly idle.

Two completely different domains. One common pattern.

We are very good at optimizing parts of systems. We are much less good at noticing when those optimizations distort the whole.

And in product work right now — especially in SaaS and growth-driven environments — I see this tension everywhere.

The Seduction of Local Wins

Optimization feels responsible. Disciplined. Mature.

In startups especially, the pressure is constant:

  • Improve activation rates
  • Increase organic traffic
  • Reduce drop-off in onboarding
  • Streamline flows
  • Automate more of the experience

Every metric has a story attached to it. Every improvement feels like progress.

But there’s a psychological trap here that behavioral research has documented for decades: we overvalue improvements that are visible and measurable, even when they degrade less measurable dimensions of trust, clarity, or coherence.

In one well-known study on goal fixation, participants continued optimizing toward a defined metric even after it stopped representing the larger objective. They weren’t irrational. They were focused.

That’s what local optimization does.

It narrows our field of view.

In product teams, this often shows up as:

  • Reducing steps without reducing cognitive load
  • Personalizing experiences without increasing clarity
  • Generating traffic without strengthening intent
  • Automating decisions that users still want agency over

We celebrate movement in dashboards. Meanwhile, the experience shifts in subtle ways.

And subtle shifts compound.

When “Idle” Isn’t Idle

The kernel bug story struck me because of one idea: something classified as “idle” wasn’t truly inactive in the way the system assumed.

In research sessions, I see the same misclassification happen with human behavior.

A user pauses.

We label it hesitation.

But sometimes it’s:

  • Caution
  • Double-checking
  • Trying to retain control
  • Assessing risk
  • Emotional discomfort

When teams optimize to remove those pauses entirely, we may remove something essential.
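The misclassification is easy to reproduce in analytics code. Here is a minimal, entirely hypothetical sketch: a duration-only labeler collapses every long pause into "hesitation," while a context-aware labeler distinguishes a deliberate review from possible confusion. The field names and thresholds are illustrative assumptions, not any real product's schema.

```python
from dataclasses import dataclass

@dataclass
class Pause:
    seconds: float          # how long the user paused
    screen: str             # where the pause happened (hypothetical field)
    next_action: str        # what the user did immediately after

def naive_label(pause: Pause) -> str:
    # The common analytics shortcut: any long pause is "hesitation".
    return "hesitation" if pause.seconds > 5 else "active"

def contextual_label(pause: Pause) -> str:
    # Distinguish pauses by context, not just duration.
    if pause.seconds <= 5:
        return "active"
    if pause.screen == "review" or pause.next_action == "confirm":
        return "deliberate review"  # the pause was doing useful work
    return "possible confusion"

p = Pause(seconds=12, screen="review", next_action="confirm")
print(naive_label(p))       # -> hesitation
print(contextual_label(p))  # -> deliberate review
```

The same 12-second pause gets two different labels. Optimize away the first and you may be optimizing away the second.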

There’s a growing body of research in behavioral economics showing that perceived control increases satisfaction even when objective efficiency decreases. In one study, participants preferred systems that allowed them optional review steps, even if skipping those steps would have been faster.

Speed is not always the same as comfort.

I worked with a B2B SaaS team last year that proudly reduced onboarding time from 14 minutes to 6. A huge win.

But three months later, churn had increased among the same segment whose onboarding had been shortened.

In interviews, customers said variations of the same thing:

“I got in faster, but I didn’t feel grounded in how it worked.”

We had optimized the entry point. We had unintentionally weakened orientation.

The system wasn’t broken.

But the experience had shifted in a way the dashboard couldn’t see.

When Research Tells Us What We Want to Hear

Another conversation circulating this week: how user research sometimes confirms our assumptions rather than challenges them.

After 20+ years in UX, I can say this gently but clearly: research does not automatically protect us from self-deception.

In fact, research can become another optimization layer.

We design studies to:

  • Validate feature desirability
  • Confirm pricing tolerance
  • Refine messaging

But we rarely design studies to ask:

  • What might this break?
  • What will this make harder?
  • What are we misclassifying as "idle," "friction," or "drop-off"?

There’s a reason 70% of digital transformation initiatives fail to meet their stated goals (McKinsey). It’s rarely because teams weren’t optimizing hard enough.

It’s often because they optimized components without understanding system dynamics.

And systems include human emotion, trust, habit, and meaning.

In sessions, I look for moments where participants subtly correct the product.

They rename features out loud. They create workarounds. They open other tabs “just in case.”

Those behaviors are the experiential equivalent of background processes.

They tell us something about how the system is actually functioning.

If we optimize without noticing them, we risk introducing our own quiet bugs.

The SEO Story We Keep Repeating

I’ve also been watching a wave of SaaS founders talking about SEO — indexing thousands of pages, then deleting them; hiring agencies; building content stacks.

Traffic is optimized. Structure is optimized. Visibility is optimized.

And then someone writes: “I built 9,000 pages and deleted them all.”

This is the same pattern at a different layer.

We optimize discoverability before we’ve stabilized meaning.

From a psychological perspective, there’s a powerful reinforcement loop here:

  • Traffic increases quickly
  • Dashboards show growth
  • Stakeholders feel momentum

But meaning compounds slowly.

Retention, brand trust, and product coherence lag behind surface metrics. According to ProfitWell, a 5% increase in retention can increase profits by 25–95%. Yet most early-stage teams spend disproportionately more energy on acquisition than retention.
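The compounding is visible even in a toy lifetime-value model (my own back-of-envelope sketch, not ProfitWell's methodology): under constant monthly retention r, expected customer lifetime is the geometric series 1 / (1 − r), so a five-point retention gain doesn't add five percent of value.

```python
def expected_lifetime_months(monthly_retention: float) -> float:
    # Expected lifetime under constant retention:
    # 1 + r + r^2 + ... = 1 / (1 - r)
    return 1.0 / (1.0 - monthly_retention)

def lifetime_value(monthly_revenue: float, monthly_retention: float) -> float:
    return monthly_revenue * expected_lifetime_months(monthly_retention)

base = lifetime_value(100, 0.90)      # 10 expected months -> $1,000
improved = lifetime_value(100, 0.95)  # 20 expected months -> $2,000
print(f"{improved / base - 1:.0%} LTV lift from a 5-point retention gain")
```

Moving retention from 90% to 95% doubles lifetime value in this model, while the acquisition dashboard shows nothing at all.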

Why?

Because acquisition is legible. Retention is relational.

One gives you charts. The other gives you conversations.

Optimization favors what is easy to visualize.

But systems break where relationships thin out.

The Psychological Cost of Over-Optimization

Here’s what I’ve come to believe after years of sitting in research rooms:

When we over-optimize products, users feel it.

Not as a technical bug. But as a subtle erosion of trust.

It sounds like:

  • “It feels like it’s trying to push me.”
  • “Why is it assuming that?”
  • “I don’t know what just happened.”

None of these show up cleanly in funnel analytics.

But they show up in tone.

In longitudinal research, I’ve seen something consistent: people tolerate friction more easily than they tolerate misalignment.

They will forgive:

  • A few extra clicks
  • Slight delays
  • Imperfect layouts

They struggle with:

  • Hidden automation
  • Premature assumptions
  • Decisions made on their behalf without explanation

When we optimize aggressively for speed or scale, we sometimes reduce the space where alignment can form.

And alignment is slower than optimization.

It requires:

  1. Letting users orient themselves
  2. Preserving optional control
  3. Making system behavior legible
  4. Testing edge cases, not just happy paths

In technical systems, engineers test edge cases because that’s where failures emerge.

In product systems, edge cases are often human:

  • The cautious buyer
  • The distracted parent
  • The overwhelmed founder
  • The skeptical enterprise stakeholder

If we optimize for the median user and ignore these edges, we build something statistically elegant and emotionally brittle.

A Different Question to Ask

Optimization isn’t the enemy.

But the question we attach to it matters.

Instead of asking:

  • How do we reduce this friction?
  • How do we increase this metric?

I’ve started asking teams a different question:

If we succeed at optimizing this, what might we quietly destabilize?

It shifts the room.

Because now we’re thinking systemically.

We begin to look for:

  • Behavioral side effects
  • Emotional trade-offs
  • Downstream complexity
  • Hidden assumptions

And sometimes the answer is reassuring.

Other times, it reveals that what we labeled as “idle” — that pause, that extra click, that manual review step — was actually structural support.

In architecture, removing a beam can make a room feel more open.

Until the ceiling cracks.

Staying Honest in an Optimization Culture

We work in an environment that rewards measurable improvement.

That won’t change.

But as researchers and designers, we can introduce a counterweight:

  • Pair every optimization metric with a trust metric
  • Study what people do after the improved step
  • Look for increased workaround behavior
  • Track qualitative sentiment shifts over time
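The first counterweight can even be enforced mechanically. A sketch of one way to do it, with invented metric names: declare each optimization metric alongside its trust counterpart, and flag any report that ships one without the other.

```python
# Hypothetical pairings: each optimization metric is declared
# with the trust metric that must be reported beside it.
METRIC_PAIRS = {
    "onboarding_minutes": "30_day_retention",
    "clicks_to_checkout": "support_tickets_per_order",
    "pages_indexed": "returning_visitor_rate",
}

def unpaired_metrics(report: dict) -> list[str]:
    """Return warnings for optimization metrics reported alone."""
    warnings = []
    for opt_metric, trust_metric in METRIC_PAIRS.items():
        if opt_metric in report and trust_metric not in report:
            warnings.append(f"{opt_metric} reported without {trust_metric}")
    return warnings

print(unpaired_metrics({"onboarding_minutes": 6.0}))
# -> ['onboarding_minutes reported without 30_day_retention']
```

The point is not the code; it is that the pairing becomes a default, not a favor someone remembers to do.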

Most importantly, sit with users long enough to notice when something feels slightly off.

Those moments are rarely dramatic.

They’re small hesitations. A softened tone. A quiet “huh.”

But that’s often where the real system is speaking.

The Linux bug wasn’t caused by incompetence. It was caused by a well-intentioned optimization interacting with complexity.

Our products live inside even more complex systems — human ones.

Which means humility isn’t optional.

It’s structural integrity.

The next time a metric improves cleanly, I hope we pause just long enough to ask what else might have shifted.

Not because optimization is wrong.

But because systems — especially human systems — rarely reward narrow thinking forever.

And the most resilient products I’ve studied over the years weren’t the most aggressively optimized.

They were the ones that understood what not to simplify.

That’s a different kind of discipline.

Quieter. Harder to celebrate.

But much harder to break.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research · Product Design · UX Research · Product Strategy · Systems Thinking

Ready to transform your feedback process?

Join product teams using Round Two to collect, analyze, and act on user feedback.