Consistency Isn’t Trust: What Our Systems Teach People Over Time

In a moment of hesitation during a usability test, I saw something our design systems rarely capture: the difference between consistency and confidence — and what our products quietly teach people over time.

Maya Chen
6 min read

The Moment I Noticed the Hesitation

In a usability session a few weeks ago, I watched a participant hover their cursor over a button they’d already used three times.

It was the same label. The same color. The same placement. And yet, they paused.

“I think this is right,” they said quietly, more to themselves than to us.

Nothing was wrong in the way our dashboards would later report. Task completion was fast. Errors were low. The design system was doing exactly what it was meant to do: enforcing consistency at scale. But in that pause — that small, human hesitation — I felt a tension I’ve been noticing more and more in our field.

We’re talking constantly about systems right now. Atomic design. Frameworks. Scalable components. Strategic UX. And on the surface, these conversations make sense. Products are more complex. Teams are larger. The cost of inconsistency is real.

But underneath those discussions, I’m sensing a quieter question trying to surface:

What do our systems actually teach people about how safe it is to act?

When Consistency Becomes a Substitute for Understanding

Atomic design is having a moment again, often framed as a solution to UX fragmentation. Break interfaces into atoms, molecules, organisms. Build upward. Maintain order.

In practice, this has helped teams reduce visual chaos. According to a 2023 Sparkbox survey, teams with mature design systems report 25–40% faster design-to-development cycles. That matters.

But speed and consistency don’t automatically create confidence.

In research sessions, I watch users learn patterns quickly — and then, just as quickly, learn where those patterns betray them. A button that looks primary but behaves differently in one flow. A form field that accepts free text here but enforces strict formatting elsewhere.

The issue isn’t inconsistency alone. It’s predictability of consequence.

People aren’t asking, “Is this component consistent?”

They’re asking, often subconsciously:

  • If I click this, will I regret it?
  • Will I be able to undo it?
  • Will I feel foolish if it does something unexpected?

This is where recent discussions about the UX of regret resonate deeply. Funnel metrics can tell us where people drop off. They can’t tell us how often users proceed while bracing for impact.

Hesitation is often the first signal of eroding trust — long before abandonment shows up in the data.

Quiet Seasons and the Work That Actually Endures

I was struck by the recent framing of the so-called “crypto winter” as a product season. Strip away the hype cycles, and what remained was infrastructure work. UX clean-up. Builder discipline.

That pattern isn’t unique to crypto.

In calmer periods — fewer launches, fewer rebrands, fewer “big bets” — teams tend to notice things like:

  • How many edge cases we quietly patched over
  • Where design systems solved alignment but masked confusion
  • Which flows rely on user forgiveness instead of clarity

One fintech team I worked with used their slower quarter to rewatch old research recordings. Not syntheses. Not highlights. Raw sessions.

They noticed something subtle: users were completing money transfers successfully, but narrating them defensively.

“I hope this went through.”

“I’m pretty sure that worked.”

Completion rates were above 90%. Confidence was not.

Research from the Baymard Institute suggests that nearly 18% of checkout abandonments happen because users don’t trust what will happen after the next click — not because they’re confused about how to proceed, but because they’re unsure of the outcome.

That distinction matters. And it’s rarely visible in component libraries or flow diagrams.

Teaching Teams to See the Human Signal

Another conversation resurfacing lately is about how we interview users — especially in groups. On paper, group interviews look efficient. In reality, they often smooth out the very friction we need to notice.

People self-edit in front of peers. They normalize uncertainty. They laugh off hesitation.

In one group session I observed, a participant admitted — almost as a joke — that they “just click things until it works.” Everyone nodded. No one lingered.

But that behavior is not neutral. It’s adaptive.

When interfaces repeatedly ask users to guess, people learn to minimize emotional investment. They stop forming expectations. They stop trusting signals.

This is where system thinking has to expand beyond components and into psychology.

Systems Teach Behavior

Every repeated interaction teaches users something:

  1. Whether the product will protect them from mistakes
  2. Whether signals are reliable across contexts
  3. Whether pausing is allowed — or punished

Design systems often excel at the first layer: visual consistency. They struggle with the latter two.

Because those live in decisions like:

  • When we allow undo versus confirmation
  • How we communicate irreversible actions
  • Whether error states explain why, not just what

These aren’t atomic decisions. They’re relational ones.

What I’m Learning to Watch For Now

Lately, in research sessions, I’m paying less attention to where users succeed — and more to how they move while succeeding.

Some patterns I’ve learned to take seriously:

  • Verbal hedging: “I think,” “maybe,” “I guess this is right”
  • Redundant checking: opening the same screen twice to confirm
  • Premature exits: closing modals before reading, then reopening

None of these show up as failures.

But collectively, they tell a story about cognitive load and emotional risk.

From a practical standpoint, this has changed how I advocate inside teams:

  • I push for research clips that include hesitation, not just breakdowns
  • I frame consistency conversations around consequence, not components
  • I ask designers to map where regret is possible, not just where actions occur

A product doesn’t earn trust by being consistent. It earns trust by being consistently understandable when things go wrong.

Designing for the Person Who’s Not Sure

There’s a reason so many people in UX research come from — or are drawn to — psychology. We spend our days sitting with uncertainty. Watching people try to articulate half-formed thoughts. Respecting pauses.

As our tools, systems, and organizations scale, the risk isn’t that we’ll forget usability principles.

It’s that we’ll forget what it feels like to be unsure.

The participant hovering over that button wasn’t confused. They were cautious. They’d learned, somewhere along the way, that similar-looking actions sometimes carried very different consequences.

Our system had trained them well.

As designers and researchers, we’re always teaching. The question is whether we’re teaching people to explore — or to brace themselves.

The conversations happening right now — about atomic design, about quiet product seasons, about regret and confidence — all point to the same deeper responsibility.

Not just to make things work.

But to make it clear, over time, when it’s safe to trust them.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research · Product Design · UX Research · Design Systems · Human Behavior
