When Research Becomes Performance, Not Care


Research hasn’t disappeared—but its role has quietly shifted. A reflection on empathy theatre, AI-sanitized insight, and what it really takes for user research to change decisions.

Maya Chen
7 min read

The Moment Everyone Nods—and Nothing Changes

In a recent synthesis meeting, a founder leaned back in their chair and said, almost proudly, “This confirms what we already suspected.” Around the table, people nodded. The deck was clean. Quotes were highlighted. The research had landed.

What stayed with me wasn’t the content of the insight, but the relief in the room. Research hadn’t challenged anything. It hadn’t complicated the roadmap. It had performed its role, affirmed the plan, and exited quietly.

I’ve been noticing versions of this moment everywhere lately—in Medium posts about “empathy theatre,” in Reddit threads asking when insights stop influencing decisions, in private conversations where researchers confess they’re starting to trust AI summaries more than messy human conversations. The tension underneath all of it feels the same: we still talk about empathy, but we increasingly design systems that don’t actually have room for it.

This isn’t about bad intentions. Most teams I work with genuinely care. But somewhere between shipping pressure, AI acceleration, and the professionalization of research, something subtle has shifted. Research is still happening. It’s just no longer doing the work we say it’s for.

Empathy Theatre and the Safety of Agreement

The phrase “empathy theatre” has been circulating for a while, but it’s gaining sharper edges now. It names a pattern many of us recognize instinctively: research conducted to be seen, not to be felt.

In practice, it looks like this:

  • Research questions framed to validate a direction, not to interrogate it
  • Carefully selected clips that make users sound reasonable, aligned, and grateful
  • Insights that are technically true, but strategically harmless

The danger isn’t that the research is wrong. It’s that it’s safe.

One team I worked with last year ran twelve user interviews before a major launch. The findings were solid: users understood the value proposition, liked the interface, and could complete key tasks. But buried in the transcripts was a quieter pattern—participants repeatedly hedged when asked if they’d switch from their current tool. “Maybe,” they said. “If my team agreed.” “If it didn’t slow things down.”

Those hesitations never made it into the readout. They were harder to quantify, harder to defend, and harder to act on. So the team shipped. Adoption stalled.

Later, when we revisited the data, a PM said something that still echoes for me:

“I think we were more comfortable proving people liked it than sitting with the risk that they might not choose it.”

That’s empathy theatre at its core. It’s not fake concern—it’s managed concern. Concern that stays within the boundaries of what the organization is ready to hear.

Why AI Feels More Trustworthy Than Users Right Now

Another thread running through these conversations is our growing reliance on AI-generated insight. I’ve heard researchers admit—sometimes sheepishly, sometimes with relief—that they trust AI summaries more than raw interviews. At least the AI is consistent. At least it doesn’t contradict itself mid-sentence.

There’s a psychological comfort here that’s worth naming.

Human data is destabilizing. People change their minds. They say one thing and do another. They pause, backtrack, contradict themselves. As a researcher, I’ve learned to see those moments as gold. As an organization, we often experience them as noise.

AI smooths that noise away.

Recent studies suggest that while LLMs can accurately summarize qualitative data, they tend to reduce variance and ambiguity, emphasizing dominant themes over edge cases. One 2024 analysis found that AI-generated syntheses underrepresented minority viewpoints by up to 30% compared to human-led thematic analysis. That’s not malicious—it’s statistical.

But those edge cases are often where design decisions actually live.
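The flattening effect is easy to see in miniature. Here is a toy sketch (with entirely hypothetical interview codes) of a naive frequency-weighted summarizer: any theme mentioned by fewer than 30% of participants simply disappears from the summary, no matter how important it might be to the design decision.

```python
from collections import Counter

# Hypothetical coded excerpts from ten interviews. The minority themes
# are exactly the ones the enterprise-analytics story below hinges on.
codes = [
    "faster reporting", "faster reporting", "faster reporting",
    "faster reporting", "faster reporting", "faster reporting",
    "faster reporting",
    "fear of being questioned", "fear of being questioned",
    "trust in numbers",
]

def summarize(codes, threshold=0.3):
    """Keep only themes mentioned by at least `threshold` of participants."""
    counts = Counter(codes)
    total = len(codes)
    return [theme for theme, n in counts.items() if n / total >= threshold]

print(summarize(codes))  # -> ['faster reporting']
```

No real synthesis tool is this crude, but the dynamic is the same: anything that optimizes for dominant signal will, by construction, discard the hesitant 20%.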

In one enterprise analytics product I advised on, the AI summary confidently stated that users wanted “faster reporting.” True—but incomplete. The interviews revealed something more specific and more human: analysts were anxious about being questioned in meetings. Speed mattered because confidence mattered. Without that context, the team optimized load times and missed the deeper opportunity to design for credibility and trust.

AI didn’t fail here. We failed by asking it to replace judgment rather than support it.

When Insight Loses Its Power

A question that surfaced repeatedly in recent discussions is deceptively simple: When does insight stop influencing product decisions?

After years in this field, my answer is: when insight threatens identity.

Not just company identity, but personal identity.

  • The founder who built their self-worth around being intuitive
  • The PM who’s rewarded for decisiveness, not doubt
  • The designer whose taste has been validated by past success

Research that aligns with these identities is welcomed. Research that destabilizes them is politely acknowledged and quietly sidelined.

This explains a pattern I see again and again: teams agree with the research in principle, then proceed as if they hadn’t heard it.

It’s rarely about prioritization alone. It’s about incentives, yes—but also about emotional cost. Changing direction means admitting that earlier decisions, late nights, and strong opinions might need revision. That’s not a rational hurdle; it’s a human one.

There’s data to support this discomfort. Behavioral psychology research shows that people are up to 2x more likely to discount evidence that contradicts their prior commitments, even when that evidence is high quality. In product teams, those commitments are often public, social, and tied to reputation.

So insights don’t fail because they’re weak. They fail because they ask too much of the system receiving them.

What Care Looks Like in Practice

If empathy theatre is the performance, what does the real thing look like?

In my experience, it’s quieter—and often less impressive.

Care shows up in small, sometimes uncomfortable choices:

  • Letting an unresolved tension stay unresolved in a readout
  • Playing a clip where a user sounds unsure, not eloquent
  • Naming what the research doesn’t answer, even when leadership wants closure

One of the most meaningful shifts I’ve seen came from a team that changed how they framed synthesis meetings. Instead of leading with “key insights,” they started with “open questions we’re now responsible for.” It subtly moved the goal from consensus to stewardship.

Another team embedded a simple but radical practice: before finalizing a decision, someone had to articulate how the research made that decision harder. If it didn’t, they revisited the data.

These aren’t frameworks. They’re cultural signals.

And they matter, especially now, as many researchers are questioning their future in this field. I’ve spoken with peers considering moves into clinical psychology, not because they dislike research, but because they’re tired of watching care get abstracted away. That should worry us.

If we want research to matter again, we have to protect its most inconvenient qualities:

  1. Its slowness – Understanding people takes time, even when tools get faster.
  2. Its ambiguity – Clarity earned too quickly is often borrowed from assumptions.
  3. Its emotional weight – If insights don’t cost us something, they’re probably not deep enough.

Coming Back to the People in the Room

I keep thinking about that synthesis meeting—the nodding, the relief, the quiet sense of closure. On paper, it was a success. But research isn’t here to help us feel done. It’s here to help us feel responsible.

Responsible to the people who took time to talk to us. Responsible to the complexity of their lives. Responsible to the decisions we’ll make on their behalf.

The conversations happening right now tell me the community feels this gap, even if we’re naming it in different ways. We’re tired of pretending that empathy is something you can schedule, automate, or perform on cue.

Real empathy is disruptive. It changes things. Sometimes uncomfortably.

And maybe that’s the deeper insight emerging from all this noise: research didn’t become useless. We just stopped building organizations that know how to be changed by it.

If we can relearn that—patiently, imperfectly—we might find our way back to the kind of work that feels honest again. Not impressive. Not frictionless. Just human.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research, Product Design, UX Research, Product Management, Design Thinking

