
Where the Evidence Lives When No One Is Watching

Across error logs, safeguarding debates, and frustration with empathy theatre, a deeper pattern is emerging: evidence keeps showing up, but accountability often doesn’t. This is about where real signals live—and what it means to take responsibility for them.

Jordan Taylor
7 min read

The Quiet Signals We Keep Stepping Over

Last week, I sat in on a product review where a team confidently walked through their latest usability findings. Clean quotes. Clear themes. A tidy deck. Halfway through, an engineer interrupted and asked a simple question: “Did anyone look at the error logs from the same period?”

The room went quiet—not defensive, just uncertain. Someone eventually said, “That felt out of scope for this round.” And that was that. We moved on.

I’ve been thinking about that moment because it keeps showing up in different forms across the product design and research conversations I’ve been following. Whether it’s the push to review system error logs, the re‑emergence of safeguarding in post‑COVID fieldwork, or the ongoing discomfort with what some are calling empathy theatre, there’s a shared tension underneath it all.

We are producing more research artifacts than ever. But we’re still uneasy about where real evidence lives—and what responsibility comes with acknowledging it.

Evidence Isn’t Always Where We’ve Been Trained to Look

UX has taught generations of practitioners to look for insight in structured places: interviews, usability tests, surveys, journey maps. Those methods matter. I’ve built products on them. But lately, the most meaningful signals I see aren’t coming from those polished spaces.

They’re coming from the margins:

  • A spike in backend errors after a “successful” onboarding redesign
  • Support tickets written at 2 a.m. that never make it into a research repo
  • Safeguarding concerns raised quietly by a field researcher after an in‑person visit

One recent piece making the rounds argued that UX practitioners should regularly review system error logs. On the surface, that sounds technical—maybe even adjacent to our craft. But the deeper implication is uncomfortable: some of the most honest user feedback happens when users aren’t trying to help us at all.

Error logs don’t perform empathy. They don’t soften language. They just record where reality breaks.

There’s data to back this up. A study from the University of Maryland found that over 50% of critical usability issues in complex systems never surface in moderated usability testing, but do appear in system logs and support data within weeks of launch. That’s not a failure of research—it’s a reminder that methods have blind spots.

For product managers and designers, the question isn’t whether interviews or logs are “better.” It’s whether we’re willing to integrate evidence that doesn’t arrive neatly labeled as research.
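
To make that concrete, here’s a minimal sketch of what “integrating” log evidence can look like in practice. Everything in it is hypothetical: the file name, the field names, and the release date are placeholders for whatever your logging stack actually emits.

```python
# A minimal sketch, not a production pipeline. Assumes a hypothetical
# newline-delimited JSON error log ("errors.jsonl") with "timestamp",
# "route", and "level" fields -- adapt to whatever your stack records.
import json
from collections import Counter
from datetime import datetime, timedelta

RELEASE_DATE = datetime(2024, 3, 1)  # hypothetical redesign ship date
WINDOW = timedelta(days=14)          # compare two weeks before vs. after

before, after = Counter(), Counter()
with open("errors.jsonl") as f:
    for line in f:
        event = json.loads(line)
        if event.get("level") != "error":
            continue
        ts = datetime.fromisoformat(event["timestamp"])
        if RELEASE_DATE - WINDOW <= ts < RELEASE_DATE:
            before[event["route"]] += 1
        elif RELEASE_DATE <= ts < RELEASE_DATE + WINDOW:
            after[event["route"]] += 1

# Surface routes whose error volume jumped after the "successful" launch.
for route in sorted(after, key=lambda r: after[r] - before.get(r, 0), reverse=True):
    delta = after[route] - before.get(route, 0)
    if delta > 0:
        print(f"{route}: {before.get(route, 0)} -> {after[route]} (+{delta})")
```

The point isn’t the script; it’s the habit. A diff like this takes an afternoon to wire up and puts unprompted evidence next to the prompted kind.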

If your product is hurting people in quiet ways, the evidence won’t ask for permission to exist.

When Research Becomes a Performance (And Why Teams Feel It)

Another trend I keep seeing is frustration with what’s been called empathy theatre—the ritual of user research performed more to signal values than to inform decisions.

I’ve seen this firsthand, especially in early‑stage startups. Founders proudly describe how “user‑centric” they are, while research insights rarely change the roadmap. Interviews are conducted, quotes are shared, but decisions are already made.

This isn’t usually malicious. It’s structural.

In one startup I advised, the team interviewed 12 customers over two weeks. They surfaced clear evidence that their core assumption about who the product was for was wrong. When they actually acted on it, the company unlocked a multi‑million‑dollar enterprise contract within a quarter.

That story circulates as a win for customer interviews—and it is. But the quieter lesson is this: research only creates value when it has the power to invalidate plans.

Empathy theatre happens when research is allowed to decorate decisions, not challenge them.

You can usually spot it by these signals:

  • Research timelines that conveniently end right before major commitments
  • Insights framed as “interesting” rather than consequential
  • No clear owner responsible for what happens when evidence conflicts with strategy

The cost isn’t just wasted time. Teams feel the disconnect. Researchers feel it when their work doesn’t travel. Designers feel it when usability issues resurface post‑launch. Engineers feel it when avoidable errors hit production.

Over time, people stop bringing uncomfortable evidence forward.

Safeguarding Is a Product Decision, Not a Research Footnote

Post‑COVID, more teams are returning to in‑person research—especially in health, education, and social care contexts. Alongside that shift, there’s been a renewed call to revisit safeguarding plans.

I’m glad this conversation is happening, because safeguarding often gets treated as a compliance checkbox rather than a design constraint.

One researcher recently shared how their pre‑pandemic safeguarding template failed entirely in a post‑COVID environment. Participants were more vulnerable. Settings were more volatile. Risks weren’t hypothetical anymore.

This matters for product teams because safeguarding failures aren’t just ethical issues—they’re product failures.

When we put people in situations where they feel exposed, unsafe, or emotionally unsupported, we skew the data and harm trust. According to the UK’s Health Research Authority, over 30% of research incidents reported in 2023 involved inadequate risk assessment for participant wellbeing, many tied to outdated protocols.

From a product strategy perspective, safeguarding shapes what evidence you can trust.

Here’s what I’ve learned working with teams navigating this:

  1. Safeguarding decisions should sit alongside research design, not underneath it.
  2. If a method puts participants at risk, the insight isn’t worth it. Full stop.
  3. Teams need a clear escalation path when something feels off—without penalizing the person who raises it.

These aren’t soft considerations. They directly affect signal quality. People don’t give honest feedback when they’re managing fear or discomfort.

The Common Thread: Accountability for What We Learn

Looking across these conversations—error logs, safeguarding, empathy theatre—I see a shared thread: accountability.

Who is responsible for evidence once it exists?

In many organizations, the answer is unclear. Researchers collect insights. Designers translate them. Product managers weigh trade‑offs. Somewhere along the line, responsibility diffuses.

But evidence doesn’t disappear just because no one owns it.

System logs continue to record failures. Participants carry their research experiences with them. Customers remember when feedback didn’t matter.

The most resilient product teams I’ve worked with treat evidence less like an artifact and more like a relationship.

They do a few things consistently:

  • They broaden what counts as user feedback, including operational data and edge cases
  • They assign clear decision ownership when evidence challenges assumptions
  • They revisit past signals, not just the latest research cycle

One Fortune 500 team I supported built a simple practice: every quarterly roadmap review started with a 30‑minute walkthrough of unresolved user pain—pulled from logs, support tickets, and prior research. No solutions allowed. Just evidence.
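
A lightweight version of that practice can even be scripted. The sketch below is illustrative rather than their actual tooling: it assumes three hypothetical CSV exports, one per source, sharing a few common columns, and merges them into a single walkthrough list with the oldest pain first.

```python
# An illustrative sketch, not the team's actual tooling. Assumes three
# hypothetical CSV exports -- error_log.csv, support_tickets.csv, and
# research_findings.csv -- each with "opened", "summary", "status" columns.
import csv
from datetime import date

SOURCES = ["error_log.csv", "support_tickets.csv", "research_findings.csv"]

unresolved = []
for path in SOURCES:
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].lower() != "resolved":
                unresolved.append({
                    "source": path,
                    "opened": date.fromisoformat(row["opened"]),
                    "summary": row["summary"],
                })

# Oldest pain first: the items most likely to have been quietly stepped over.
for item in sorted(unresolved, key=lambda r: r["opened"]):
    age = (date.today() - item["opened"]).days
    print(f"[{age:>4}d] ({item['source']}) {item['summary']}")
```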

It wasn’t glamorous. But within six months, their post‑release defect rate dropped by 18%, and customer satisfaction scores stabilized after years of volatility.

The work wasn’t faster. It was more honest.

Designing for the Signals That Don’t Ask for Attention

As product leaders, we like clarity. Roadmaps. Metrics. Dashboards. But many of the signals that matter most arrive quietly, without ceremony.

They show up as:

  • A repeated error no one’s prioritized
  • A researcher hesitating to run a session because something feels unsafe
  • A customer interview that contradicts the narrative everyone prefers

These moments test judgment more than process.

My own framework—earned the hard way—comes down to three questions:

  1. What evidence exists that we’re not actively looking at?
  2. Who bears the cost if we ignore it?
  3. What decision would this evidence force if we took it seriously?

If you can’t answer the third question, you’re probably doing empathy theatre.

If you can answer it—but avoid it—you’re making a strategic choice, whether you admit it or not.

Closing the Gap Between Care and Consequence

I don’t think the product design and research community lacks empathy. If anything, we care deeply. But care without consequence turns into performance.

The deeper shift I’m seeing—and hoping for—is toward earned confidence. Confidence that comes from engaging with evidence even when it’s inconvenient. From protecting participants even when timelines are tight. From letting data actually change our minds.

When no one is watching, our systems still tell the truth. Our users still experience the product. Our decisions still land somewhere real.

The question isn’t whether we have enough research.

It’s whether we’re willing to be accountable for what we already know.

And that, more than any method or framework, is what builds products—and teams—that people trust.

Jordan Taylor
Product Strategy Consultant

Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.

TOPICS

User Research · Product Design · UX Research · Product Management · Decision Making
