You Fixed the Interface. The Work Still Didn’t Get Easier.
Why passing usability tests isn’t the same as making work easier — and what today’s product conversations reveal about the gap we keep missing.
The Moment Everyone Nods — and Something Slips By
Last week, I sat in on a usability review where the room felt unusually calm. The latest design iteration tested well. Fewer errors. Faster task completion. Higher satisfaction scores across the board.
Someone said, almost with relief, “I think we’ve nailed it.” Heads nodded. The conversation started drifting toward timelines and rollout plans.
But I couldn’t shake a small detail from one of the sessions. A participant completed the task successfully — quickly, even — and then muttered, almost to themselves: “I still wouldn’t want to do this every day.”
No one asked a follow-up. The metric said success. The UI worked. The test passed.
That gap — between something working and something actually fitting into a person’s life — is at the center of a lot of conversations I’m seeing right now in the product design and research community. And it’s not a tooling problem. It’s a judgment problem.
We’re getting very good at fixing interfaces. We’re still struggling to understand work.
When Usability Stands In for Understanding
One pattern keeps repeating in recent discussions: we talk about usability testing as if it were the same thing as understanding users.
It isn’t.
Usability testing answers questions like:
- Can someone complete this task?
- Where do they hesitate?
- What errors do they make?
These are important questions. They protect us from obvious failure. They help us remove unnecessary friction.
But user understanding asks a different set of questions:
- Why does this task exist in their world?
- What tradeoffs does it force them to make?
- What emotional or cognitive cost does it carry over time?
On one enterprise product I worked on, we ran extensive usability tests on a new workflow. Task completion rates were above 90%. Error rates dropped by nearly 40% compared to the previous version.
Three months after launch, adoption stalled.
When we went back into the field, we learned the issue wasn’t usability at all. The workflow was technically smooth — but it required users to front-load decisions they were used to making gradually. It compressed judgment into a single moment. People could do it. They just didn’t want to.
A usable product can still demand the wrong kind of work.
This is where many teams — especially fast-moving ones — get misled. The interface improves, but the experience doesn’t.
Feedback Is Not the Same as Evidence
Another trend I’m noticing: a renewed emphasis on “listening to users,” often framed around feedback loops, comment widgets, or community posts.
This matters. But it’s also where things quietly go wrong.
User feedback is abundant. Understanding is scarce.
I recently advised a small team building a free online tools platform. They were doing many things right: shipping quickly, responding to comments, iterating based on requests. Within weeks, they had hundreds of pieces of feedback.
The challenge wasn’t volume. It was interpretation.
Most of the feedback clustered around surface-level requests:
- “Can you add an export button?”
- “This should integrate with X.”
- “Make this one step shorter.”
They implemented several of these. Engagement ticked up briefly — then flattened.
When we mapped feedback against actual usage patterns, something interesting emerged. A small percentage of users accounted for the majority of requests, while a much larger group used the tool sporadically and never commented at all.
This isn’t unusual. Nielsen Norman Group’s research on participation inequality found that roughly 1% of users contribute the vast majority of comments and requests, about 9% contribute occasionally, and the remaining 90% stay silent.
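For teams that want to try this kind of mapping themselves, here’s a minimal sketch of the idea in Python. Everything in it is hypothetical (the user IDs, the log shapes, the thresholds); it simply cross-references who comments against who actually shows up in usage.

```python
# Minimal sketch: cross-reference a feedback log with a usage log to see
# whether the people filing requests are the people doing the daily work.
# All data below is made up for illustration.
from collections import Counter

# Hypothetical event logs: one user_id per feedback item / per session.
feedback_log = ["u1", "u1", "u2", "u1", "u3", "u2", "u1"]
usage_log = ["u1", "u2", "u4", "u5", "u6", "u4", "u7", "u5", "u4", "u6"]

feedback_by_user = Counter(feedback_log)   # requests per user
sessions_by_user = Counter(usage_log)      # sessions per user

vocal = set(feedback_by_user)              # users who comment
silent = set(sessions_by_user) - vocal     # users who only use

print(f"{len(vocal)} vocal users filed {sum(feedback_by_user.values())} requests")
print(f"{len(silent)} users appear in usage but never commented")
for user in sorted(vocal):
    print(user, "requests:", feedback_by_user[user],
          "sessions:", sessions_by_user.get(user, 0))
```

Even a crude cut like this tends to surface the skew described above: the people shaping the roadmap and the people carrying the daily workload are often not the same people.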
The risk is subtle but real:
- Feedback reflects who is loud, not who is struggling
- Requests reflect solutions, not underlying needs
- Volume creates confidence without clarity
Listening harder doesn’t help if you’re listening at the wrong level.
Our job as product leaders isn’t to collect more opinions. It’s to make sense of incomplete signals.
Friction Isn’t the Enemy — Misplaced Friction Is
Several conversations right now are focused on frictionless experiences: automated access, invisible systems, fewer steps, faster flows.
I understand the appeal. Reducing unnecessary effort is good work.
But friction is not inherently bad. In fact, some friction is doing important labor — helping people orient, decide, and feel in control.
I saw this clearly on a B2B product where leadership pushed for a “one-click” setup flow. The original onboarding took about 12 minutes. The new version took under 3 minutes.
Success metrics looked great initially:
- Setup completion increased by 25%
- Drop-off decreased significantly
Support tickets, however, spiked two weeks later.
Users had moved through onboarding so quickly that they hadn’t formed a mental model of the system. When something went wrong — and something always does — they didn’t know where to look or what mattered.
We had removed friction that was quietly doing educational work.
A McKinsey study on digital adoption found that tools optimized purely for speed often see higher long-term support costs, because users lack confidence and context.
The lesson wasn’t “add friction back everywhere.” It was this:
- Identify which moments are about efficiency
- Identify which moments are about understanding
- Design them differently
When we treat all friction as failure, we flatten the experience — and people feel it, even if they can’t articulate why.
The Research Questions We’re Not Asking
Across many of these discussions, from usability testing vs. user understanding to feedback loops and frictionless systems, I see a deeper pattern.
We are excellent at asking questions that confirm progress. We are less practiced at asking questions that might complicate our plans.
In my own work, the most valuable research moments often come from questions that feel slightly uncomfortable:
- If this feature disappeared tomorrow, who would actually notice?
- What work does this product create that no one planned for?
- Where are users compensating for our design without telling us?
One of the most revealing exercises I’ve used with teams is mapping the “shadow work” around a product — the spreadsheets, Slack messages, reminders, and workarounds people maintain to make the tool usable in real life.
In one organization, we discovered that despite a robust dashboard, managers were still maintaining parallel tracking documents. Not because the dashboard was bad — but because it didn’t support the conversations they needed to have with their teams.
The product was optimized for reporting. The work required sensemaking.
No usability test would have caught that.
What This Means for Product Decisions
If there’s a practical takeaway from all of this, it’s not a new method or framework. It’s a shift in how we judge progress.
Here’s what I’ve learned to look for instead of — or alongside — traditional success signals:
- Ease over time, not just ease in a session
- Confidence, not just completion
- Fewer compensating behaviors, not just fewer clicks
- Clearer conversations, not just cleaner screens
As product managers and designers, we sit at the intersection of constraints, incentives, and human behavior. Our job isn’t to eliminate complexity. It’s to decide where complexity belongs.
That requires slowing down at moments when the data says “go.” It requires staying curious when the UI looks done. It requires caring about the parts of the experience that don’t show up neatly on a roadmap.
The Quiet Standard We’re Really Being Held To
When people say, “This product just feels heavy,” they’re rarely talking about the interface alone.
They’re talking about how the product fits — or doesn’t — into their thinking, their rhythms, their responsibilities.
The conversations I’m seeing right now tell me something important: our tools are improving faster than our collective ability to judge their impact.
Fixing the UI is visible work. Understanding the user’s world is slower, quieter, and harder to celebrate.
But it’s also where durable products come from.
The goal isn’t to pass the test. It’s to make the work genuinely easier — not just faster, but lighter.
And that’s a standard no metric can fully capture. It has to be practiced, noticed, and protected — by people who care enough to look past the nodding heads and ask one more question.
Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.