The Feedback We’re Designing For Is Quietly Changing

As products grow more powerful, the feedback people give us is getting quieter — and more human. What that means for how we listen, design, and lead.

Jade Liang
8 min read

The Moment That Made Me Pause

Last Tuesday, I was on a call with a customer who’s been with us for almost three years. They’re thoughtful, articulate, and usually very clear about what’s working and what isn’t. But this time, when I asked a familiar question — “How does this new feature feel in your day-to-day?” — there was a long pause.

Not the distracted kind. The searching kind.

Finally, they said, “I don’t know how to answer that anymore. It does what it’s supposed to. I just don’t know if it’s helping me.”

That sentence has been echoing for me as I’ve followed the conversations happening across product design and research this week. Emerging technologies. Accessibility as quality. Research models breaking under their own weight. On the surface, these feel like separate threads. But from where I sit — in Customer Success, listening to people explain their work in the gaps between metrics — they’re all pointing to the same tension.

The feedback we’re getting is changing because the relationship between people and products is changing.

And many of our systems for listening haven’t caught up.

From Builders to Stewards — and What That Changes About Listening

There’s been a lot of thoughtful writing lately about product leaders becoming stewards of impact, not just builders of features. I agree with that framing — but I think it misses a quieter implication.

When your role shifts from building to stewarding, feedback stops being validation and starts being responsibility.

In practice, that means a few uncomfortable things:

  • You can’t rely solely on what’s easiest to measure
  • You have to listen for second-order effects, not just first reactions
  • And you have to take seriously the voices that are hardest to hear

I see this most clearly when teams ship AI-powered features. The early feedback often looks great: faster task completion, fewer clicks, higher engagement. But a few weeks later, the tone of customer conversations changes.

People say things like:

  • “I’m not sure why it did that, but I worked around it.”
  • “It’s faster, but I feel less confident.”
  • “I don’t know when to trust it — or myself.”

These aren’t feature requests. They’re signals of shifting agency.

Stewardship isn’t about predicting every consequence. It’s about building the capacity to notice when the relationship is changing.

From a Customer Success perspective, this is where feedback collection either deepens understanding — or collapses into noise. If your systems only ask “Did this work?” you’ll miss the more important question: “What did this change for you?”

Accessibility Isn’t a Checklist — It’s a Signal

One of the most encouraging trends I’ve seen is the reframing of accessibility as a product quality signal, not a compliance requirement. But I want to push that a bit further.

Accessibility is also a feedback quality signal.

Here’s what I mean.

When a product truly works for people using assistive technologies, those users often give some of the most precise, actionable feedback I’ve ever encountered. Not because they’re more critical — but because friction is impossible to ignore.

In one case, a customer who relies on a screen reader flagged an issue in our onboarding flow that sighted users had been unconsciously compensating for. Our metrics showed a healthy completion rate. Our surveys showed “satisfied.” But their experience revealed:

  • A cognitive loop caused by unclear hierarchy
  • Repeated context switching that increased mental load
  • And a trust gap created by inconsistent feedback from the system

Fixing that issue didn’t just improve accessibility. It improved clarity for everyone.

There’s data to back this up. A 2023 study by Forrester found that inclusive design improvements led to usability gains for up to 80% of users, not just those with disabilities. Accessibility surfaces edge cases early — and edge cases are where products usually break later.

From a feedback perspective, teams that treat accessibility as foundational tend to:

  • Ask more specific questions
  • Pay attention to how people explain effort, not just outcomes
  • And value qualitative nuance over tidy averages

That’s not a moral stance. It’s a listening advantage.

Why So Many Teams Are Rethinking Research — and What They’re Missing

Another thread running through recent discussions is the sense that traditional research models are collapsing. One widely shared stat claims that 72% of product teams are rethinking their user interview approach heading into 2026.

I understand why.

Research is expensive. Timelines are tight. Tools promise speed and scale. And yet, despite more data than ever, teams often feel less certain.

This is where I see a pattern repeating — especially when survey data becomes the primary input for decisions.

The Hidden Trap of Clean Answers

Surveys aren’t bad. But they’re often treated as neutral when they’re anything but.

I’ve watched teams confidently act on survey results that said things like:

  • “Users want more customization”
  • “People find the interface intuitive”
  • “The feature meets expectations”

Only to discover later that:

  • Customization meant one specific workflow
  • “Intuitive” meant familiar from another tool
  • And “meets expectations” meant good enough to avoid switching

According to a 2022 Pew Research study, nearly 40% of survey respondents admit to choosing answers that feel easiest or most socially acceptable, especially when questions are abstract or repetitive.

That’s not dishonesty. It’s human behavior.

When research becomes faster than sensemaking, we don’t get clarity — we get confidence without understanding.

Customer conversations often reveal the gap later, when someone says, “I said it was fine because I didn’t know how to explain what wasn’t.”

What I’ve Learned About Collecting Feedback That Actually Helps

After years of sitting between product teams and customers, I’ve come to believe that how you collect feedback matters more than how much you collect.

Here are a few practices that have consistently led to better decisions — not just more data.

1. Ask About Effort Before Outcome

Instead of starting with satisfaction or success, start with effort.

  • What felt harder than you expected?
  • Where did you slow down?
  • What did you have to figure out on your own?

Effort reveals design debt long before churn does.

2. Create Space for “I’m Not Sure”

Some of the most important insights arrive wrapped in uncertainty.

Make it safe — explicitly — for people to say:

  • “I don’t know yet”
  • “It works, but…”
  • “I can’t tell if this is better”

Those responses often signal transitions in trust or understanding.

3. Treat Accessibility Feedback as Early Warning

Don’t silo accessibility issues. Track them alongside core usability signals.

Patterns here often predict broader problems:

  • Confusing hierarchy
  • Inconsistent system feedback
  • Hidden assumptions about user context

4. Close the Loop — Even When the Answer Is No

Stewardship shows up in follow-through.

When people see that their feedback led to:

  • A change
  • A clear decision
  • Or even a thoughtful rejection

Trust deepens. And future feedback gets better.

The Deeper Shift I Think We’re Actually Experiencing

When I zoom out from all these conversations — emerging tech, accessibility, research models, AI — I don’t see a crisis of tools. I see a crisis of interpretation.

Our products are doing more. Faster. With less visible effort. And as that happens, the signals people give us become quieter, more ambiguous, and more relational.

They’re not just telling us what worked.

They’re telling us:

  • Whether they still feel competent
  • Whether they trust the system — and themselves
  • Whether the product fits into their life, not just their workflow

That’s harder to capture in a dashboard. But it’s easier to hear when you slow down enough to listen.

As a Customer Success Lead, I’ve learned that the most valuable feedback rarely arrives labeled as such. It shows up in pauses. In hedged language. In stories people tell when they think you’re genuinely curious.

The future of product quality won’t be decided by how quickly we ship — but by how carefully we listen when people don’t quite know what to say.

If we can design our feedback systems to honor that uncertainty — to treat it as insight, not friction — we won’t just build better products.

We’ll build better relationships.

And in an age of increasingly powerful technology, that might be the most important work we do.

Jade Liang
Customer Success Lead

Jade leads all Customer Success initiatives at Round Two. She is passionate about understanding people's needs and how feedback collection tools like Round Two can help generate more useful insights.

TOPICS

User Research · Product Design · Customer Success · Product Management · Accessibility
