When Everyone Is Performing: The Quiet Skill Product Teams Are Losing

Across interviews, user feedback, and AI products, something subtle is happening: people are performing instead of telling the truth. What that’s costing our judgment—and how to listen differently.

Jordan Taylor
7 min read

The Moment I Noticed the Pattern

Last week, I sat in on two conversations that should have felt completely different.

The first was a mock product management interview. The candidate was sharp, articulate, and clearly well-prepared. They walked through a prioritization framework I’ve seen dozens of times. Clean structure. Confident delivery. No wasted words. If you were scoring the interview, they were doing everything right.

The second was a feedback call with one of a startup’s first fifty users. This user also sounded confident. They praised the product, called it “intuitive,” and said they’d recommend it to others. The team left the call energized.

But in both cases, I had the same uncomfortable feeling: nothing real had actually been said.

That’s the pattern I keep seeing across product design and research conversations right now. Interviews, user feedback, even internal strategy discussions are becoming performances. Well-intentioned. Polished. And quietly uninformative.

The deeper question isn’t whether people are lying. It’s whether the environments we’ve created still make it safe—or even possible—to tell the truth.


The Rise of Rehearsed Competence

If you spend any time in product communities right now, you’ll notice how much attention is being paid to preparation.

  • Complete interview guides
  • Frameworks for every question
  • Scripts for user interviews
  • Templates for feedback synthesis

On the surface, this feels like progress. We’re helping people navigate complexity. We’re lowering barriers to entry. We’re making expectations clearer.

But there’s a tradeoff we rarely acknowledge: as preparation increases, signal often decreases.

In interviews, candidates aren’t just demonstrating how they think—they’re demonstrating how well they’ve learned what good looks like. In early user feedback, people aren’t describing their real experience—they’re responding to subtle social cues about what’s helpful, polite, or impressive.

I’ve been on both sides of this.

As a product manager, I’ve hired candidates who could flawlessly apply a prioritization matrix but struggled when priorities collided in messy, human ways. As a consultant, I’ve advised teams who celebrated glowing early feedback—only to watch activation stall once the product hit a less forgiving audience.

Competence that’s rehearsed often looks indistinguishable from competence that’s real—until reality pushes back.

This matters because product work doesn’t fail in controlled environments. It fails in the gray zones we didn’t rehearse for.


Why the First 100 Users Aren’t Telling You the Truth

One of the most shared pieces of advice lately is that your first users will lie to you. I don’t think that’s quite right.

Most early users aren’t lying. They’re performing their role in the relationship you’ve implicitly set up.

Early adopters know they’re early. They often:

  • Want you to succeed
  • Don’t want to sound uninformed
  • Assume rough edges are temporary
  • Translate confusion into encouragement

In a 2023 study by the Nielsen Norman Group, researchers found that users are significantly more likely to report satisfaction in early-stage usability tests than their behavior later suggests, especially when they believe the team is still “figuring things out.” The politeness bias is real.

I saw this firsthand with a B2B tool I worked on two years ago. Our first cohort of users rated onboarding an average of 4.6 out of 5. The team felt confident.

Then we looked at the data:

  • Only 38% completed setup without external help
  • Support tickets spiked in week two
  • Feature adoption plateaued after initial exploration

When we followed up with interviews framed around specific moments (“What happened right before you reached out to support?”), the story changed. Users admitted they were confused—but didn’t think that feedback would be useful early on.

The issue wasn’t dishonesty. It was misplaced care.

As product teams, we often reward affirmation more than friction. And people notice.


Interviews, AI, and the Performance Trap

This same dynamic is now showing up in how teams talk about AI-assisted products.

Many dashboards are crowded with AI features that demo beautifully but disappear into the background of real workflows. Not because AI isn’t valuable—but because we’ve optimized for presentation rather than integration.

There’s a reason some of the most thoughtful voices are saying AI should be invisible. When intelligence becomes a selling point instead of a support system, users start interacting with it differently. They test it. They perform for it. They try to use it “correctly.”

That changes the data you collect.

In one enterprise rollout I advised on, the team proudly shared that 72% of users had tried the AI assistant within the first month. Usage looked great.

But when we dug deeper:

  • Fewer than 25% used it more than twice
  • Most queries were exploratory, not task-driven
  • Power users actively avoided it for critical work
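If you want to run this kind of check on your own product, here's a minimal sketch of the arithmetic. The field names and labels are hypothetical (the rollout's actual data isn't shared here), and classifying queries as "task" versus "exploratory" is its own judgment call:

```python
from collections import Counter

# Hypothetical event log: one (user_id, query_kind) record per assistant use.
# The "task"/"exploratory" labels stand in for your own query classification.
events = [
    ("u1", "exploratory"), ("u1", "exploratory"),
    ("u2", "task"), ("u3", "exploratory"),
    ("u3", "exploratory"), ("u3", "task"),
    ("u4", "exploratory"),
]

uses_per_user = Counter(user for user, _ in events)
tried = len(uses_per_user)

# Share of triers who came back more than twice: curiosity vs. commitment.
repeat_users = sum(1 for n in uses_per_user.values() if n > 2)
repeat_rate = repeat_users / tried

# Share of queries that were exploratory rather than task-driven.
exploratory_share = sum(1 for _, kind in events if kind == "exploratory") / len(events)

print(f"{repeat_rate:.0%} of triers used it more than twice")
print(f"{exploratory_share:.0%} of queries were exploratory")
```

The point isn't the code; it's that "72% tried it" and "25% kept using it" come from the same log, and only one of those numbers tells you about commitment.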

The assistant wasn’t failing technically. It was failing socially. It asked users to engage with it rather than letting them work through it.

When products invite performance, they get curiosity—not commitment.

This is the same mistake we make in interviews and research sessions: confusing articulate interaction with durable value.


What Listening Looks Like When You Stop Scoring

So what do we do instead?

The answer isn’t to abandon structure or preparation. It’s to change what we’re optimizing for.

Over time, I’ve found that the most useful insights come from moments that feel inefficient:

  • Long pauses
  • Side stories
  • Contradictions
  • Mild discomfort

In user interviews, this means asking fewer questions and staying with the ones that land awkwardly. In hiring, it means probing past the framework into lived tradeoffs. In product strategy, it means resisting the urge to translate everything into a neat narrative too quickly.

Here are a few practices that have consistently helped me—and the teams I work with—move past performance:

  1. Anchor conversations in recent, specific moments
    Not “How do you usually prioritize?” but “Tell me about the last time two priorities conflicted.” Memory is a better truth-teller than opinion.

  2. Reward uncertainty
    When someone says, “I’m not sure,” don’t rush to help. That’s often where real thinking starts.

  3. Separate validation from learning
    Make it explicit when you’re exploring, not evaluating. People speak differently when they don’t feel scored.

  4. Watch for effort, not elegance
    Smooth answers are rarely where the work is. Effortful explanations usually are.

None of this is revolutionary. But it requires patience—and patience is in short supply right now.


The Skill We’re Actually Hiring and Designing For

There’s a quiet irony in all of this.

At the same time we’re saying judgment is the most important skill in product work, we’re creating systems that reward performative certainty. We train people to sound confident, to present clean stories, to minimize mess.

But real product judgment is forged in ambiguity. It shows up when:

  • The data is incomplete
  • Users contradict themselves
  • Tradeoffs have no clean answer
  • Progress feels slower than it should

According to a 2024 internal survey I conducted across three mid-sized product organizations, over 60% of PMs said their biggest decisions were made with “moderate to low confidence,” yet fewer than 20% felt comfortable expressing that uncertainty publicly.

That gap is costly.

When teams feel pressure to perform, they optimize for defensibility instead of understanding. They choose features that are easy to justify, metrics that are easy to report, and narratives that are easy to align around.

And slowly, the work drifts away from the people it’s supposed to serve.


Coming Back to Care

The reason this trend worries me isn’t methodological. It’s human.

Most people I know in product genuinely care about doing good work. They want to build things that help. They want to make thoughtful decisions. They want to listen.

But the environments we’ve built—interviews, feedback loops, AI demos, strategy reviews—are increasingly optimized for performance over presence.

Care sounds different from competence. It’s quieter. Less polished. More tentative.

If there’s one thing I hope we reclaim, it’s the willingness to slow down conversations enough that something unscripted can emerge. That might mean fewer frameworks on the first pass. Fewer dashboards in the early days. Fewer conclusions drawn before the discomfort has had time to teach us something.

Because the most important signals rarely announce themselves. They show up when no one is trying to impress.

And those are the moments worth designing—and listening—for.

Jordan Taylor
Product Strategy Consultant

Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.

TOPICS

Product Management, User Research, Product Strategy, UX Research, Decision Making
