Who We Think We’re Designing For — and Who Shows Up Instead


Across today’s product conversations, a quiet pattern is emerging: we know more about users than ever, yet feel less certain about who we’re really designing for. This piece explores the human gap between data, assumptions, and the people who actually show up.

Maya Chen
6 min read

The Moment That Keeps Repeating

Last week, during a usability session for a financial planning tool, a participant laughed — a small, surprised laugh — when I asked who they thought the product was for.

“I guess… people who are better at this than me,” they said. Then they added, quietly, “People who don’t get tired.”

We’d recruited exactly to spec. Age, income, job role — all aligned with the persona deck. On paper, this person was the user. But in that moment, there was a visible gap between who the product imagined and who was actually sitting in front of us.

I’ve been noticing that gap everywhere lately — in Medium essays about audience research, in debates about being more “data-driven,” in conversations about whether UX is even the right frame anymore. Different topics, same underlying tension.

We’re surrounded by more signals about users than ever. And yet, many teams feel less certain about who they’re really building for.

When “Knowing the User” Becomes a Checklist

A lot of the current discourse circles around doing more research: better audience definitions, sharper segments, richer dashboards. None of this is wrong. But I worry about how often research becomes a substitute for relationship.

In one organization I worked with last year, the research repository was immaculate. Hundreds of tagged insights. Clear behavioral segments. Regular surveys with statistically significant samples. And still, product decisions stalled.

When I asked why, a PM said something that stuck with me:

“We know everything about our users. We just don’t know which version of them to believe.”

That’s the quiet problem underneath many of these conversations. Not lack of data — but lack of confidence in judgment.

Some telling patterns I’m seeing across teams:

  • Audience research framed as certainty rather than orientation
  • Personas treated as alignment tools instead of hypotheses
  • Metrics used to end debates rather than deepen them

According to a 2023 Nielsen Norman Group report, teams that rely primarily on quantitative analytics are 34% more likely to report misalignment between product strategy and user needs over time. Not because the data is wrong — but because it’s incomplete.

People are variable. Contextual. Sometimes contradictory. When our tools promise clean answers, we start distrusting the messiness that actually matters.

The Subtle Shift in How We Treat Data

One of the most interesting threads I’ve seen lately is about a “subtle habit” that changes how product managers see data. I think that habit is this: treating data as something to sit with, not something to act on immediately.

In research sessions, I see this play out in micro-moments.

A participant hesitates before clicking.

A PM glances at the task timer.

The number says: they’re slow. The human signal says: they’re unsure — and that uncertainty matters.

Behavioral psychology has long shown us that hesitation is often where meaning lives. Daniel Kahneman’s work on System 1 and System 2 thinking reminds us that pauses often signal cognitive load — moments when people are forced out of intuitive flow and into effortful reasoning.

Yet our dashboards rarely capture:

  • The emotional cost of that effort
  • The erosion of confidence over repeated small frictions
  • The stories users tell themselves when they feel “behind” the product

A concrete example: In a B2B SaaS study I ran two years ago, task completion rates were above 90%. By traditional standards, the design was a success. But qualitative interviews revealed something else — users were completing tasks by memorizing steps, not by understanding the system.

Three months later, support tickets spiked. Not because the product changed — but because users hit a scenario that broke their memorized path.

The data wasn’t lying. It just wasn’t listening long enough.

Choice, Capacity, and the Users We Assume

Another thread running through recent writing is about choice — when it empowers, and when it quietly overwhelms.

I think this connects directly to how we imagine our users’ capacity.

Many products today are designed for what I’d call the idealized attentive user:

  • They read carefully
  • They remember past decisions
  • They have time to compare options
  • They feel confident navigating complexity

But real users show up tired. Distracted. Sometimes anxious. Often switching contexts every few minutes.

Research by Gloria Mark and colleagues (some of it conducted with Microsoft Research) found that knowledge workers switch tasks every few minutes on average, and that it can take over 20 minutes to fully regain focus after an interruption. When we design dense interfaces or expansive choice sets, we’re often asking for a kind of attention people simply don’t have.

This is where I see UX, product management, and even AI debates converging.

The question isn’t whether we can offer more features, more automation, more intelligence.

It’s whether we’re honest about:

  1. The cognitive load we’re introducing
  2. The emotional stories users tell themselves when they struggle
  3. Who benefits when the system works — and who feels left behind when it doesn’t

In that financial planning session, the participant wasn’t confused by the interface. They were intimidated by what it implied about them.

“I feel like I should already know this,” they said.

No metric captured that. But it shaped everything.

Practicing a More Human Kind of Clarity

So what do we do with all of this? Not more frameworks. Not louder opinions. But a slightly different posture toward our work.

Some practices that have helped me — quietly, consistently — over the years:

  • Name the user you’re excluding, not just the one you’re serving
  • Treat personas as questions, not answers
  • Spend time with moments of friction, even when success metrics look good
  • Ask what a design assumes about attention, confidence, and energy

One team I worked with began adding a single slide to every product review: “What this decision asks of the user.” Not benefits. Not outcomes. Demands.

It changed the conversation.

Not because it slowed them down — but because it reintroduced judgment as a shared responsibility.

Coming Back to the Person in the Room

I keep thinking about that participant who thought the product was for “people who don’t get tired.”

They didn’t churn that day. They completed every task. If we’d stopped at the metrics, we would’ve called it a win.

But a week later, in a follow-up interview, they said something else:

“I don’t think it’s for me long-term. It feels like it wants me to become someone else.”

That’s the risk when we design from abstractions instead of relationships. We build things that work — but only for a version of the user that doesn’t fully exist.

The conversations happening right now — about research, data, UX’s future — all point to the same quiet truth:

Understanding users isn’t about accumulating more evidence. It’s about staying close to the human cost of our decisions.

And that work doesn’t scale neatly. It requires care. Attention. And the willingness to sit with ambiguity a little longer than is comfortable.

But when we do, the user stops being a segment.

They become a person again.

And the work gets clearer — not because it’s simpler, but because it’s finally honest.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research · Product Design · UX Research · Product Management · Design Thinking
