The Unasked Question: What Today’s Product Conversations Reveal About Intent

Across AI, SaaS, and product leadership, a pattern is emerging: people aren’t confused by our tools — they’re unsure of our intent. What silence is really telling us.

Alex Rivera
8 min read

The Moment I Started Listening for What Wasn’t Said

Last week, I sat in on a student research session that should have been routine. The task was simple: use an AI assistant to explain a concept from their coursework. The interface was familiar. The capability was there. And yet, after reading the prompt, the student hesitated.

Not a glitch. Not confusion about how to click. A pause.

They eventually typed something cautious and oddly formal, like they were writing to a stranger who might judge them. When we asked why they didn’t just ask what they actually wanted to know, they shrugged and said, “I didn’t know how much I should say.”

That sentence has been echoing for me as I’ve watched the last few days of product conversations unfold. Posts about students not talking to AI. About people-first leadership. About metrics that arrive too late. About SaaS economics that no longer add up. On the surface, these are separate threads. But underneath them is a shared tension we don’t name enough:

People aren’t struggling with our tools. They’re unsure of our intent.

As designers and researchers, we spend years refining usability. But what I’m seeing right now is less about whether something works and more about whether it feels safe, worth it, or even appropriate to engage.

Silence Is Not a Usability Problem

The recent piece on why students don’t talk to AI struck a nerve because it mirrors something many of us have seen firsthand. The systems are capable. The affordances are clear. And still, engagement is shallow.

It’s tempting to diagnose this as an onboarding issue. Or a literacy gap. Or even generational hesitation. But when you sit with people long enough, a different pattern emerges.

Students aren’t silent because they don’t know how to talk to AI.

They’re silent because they don’t know:

  • Who is on the other side of the conversation
  • What will happen to what they say
  • Whether asking a “bad” question will cost them later

In one university study last year, only 38% of students reported feeling comfortable using generative AI for exploratory learning, even though over 70% had access and basic familiarity. The gap wasn’t capability. It was confidence.

From a design perspective, this is crucial. We often treat conversation as a feature: text in, text out. But conversation is a relationship. And relationships require clarity of intent.

When intent is fuzzy, people default to caution. They minimize. They perform. Or they disengage entirely.

This isn’t unique to AI. We see it in early-stage products all the time:

  • Users who sign up but never quite lean in
  • Teams who hit feature adoption but not trust
  • Customers who comply with flows but don’t volunteer insight

The absence of feedback isn’t neutral. It’s a signal.

People-First Is Not a Value Statement — It’s a Practice

Another trend making the rounds argues that people-first products start with people-first leaders. I agree — but I think we underestimate how literal that needs to be.

Being people-first doesn’t start with empathy decks or mission statements. It starts with how decisions are explained and made visible.

I’ve watched leadership teams genuinely care about users while still building systems that obscure intent:

  • Metrics dashboards that show outcomes but not reasoning
  • Roadmaps that explain what’s coming, not why
  • Research readouts that summarize findings without context or uncertainty

Internally, this trains teams to optimize for compliance instead of understanding. Externally, it teaches users that the system has goals — they’re just not invited into them.

One healthcare startup I worked with learned this the hard way. They were shipping quickly, measuring everything, and hitting their targets. But patient engagement plateaued.

When we finally slowed down and ran a consistent interview rhythm — 20 conversations over three months, costing less than $500 total — a pattern emerged. Patients didn’t distrust the product. They distrusted the silence around it.

They didn’t know:

  • Why certain questions were asked
  • How their data informed care decisions
  • What changed because they participated

Once the team started closing that loop — explicitly showing intent and outcome — engagement rose by 22% without changing a single core feature.

People-first leadership isn’t about being nice. It’s about being legible.

When Metrics Arrive After Meaning Has Left

Several conversations this week circled around leading vs. lagging metrics. It’s a familiar critique: we measure too late, then wonder why decisions feel reactive.

But I think the deeper issue is what we choose to measure.

Lagging metrics are seductive because they’re clean. Revenue. Retention. Task completion. They tell us what happened. But they tell us almost nothing about how people felt while it was happening.

In design systems work, we talk a lot about consistency. But consistency without context can actually increase cognitive load. Users learn the patterns, but not the purpose.

Here’s what often gets missed:

  • A user can complete a task and still feel uneasy
  • A customer can renew and still feel trapped
  • A student can use AI and still feel exposed

By the time churn shows up, the emotional decision has already been made.

Some teams are experimenting with what I’d call intent metrics:

  1. Do users volunteer information without being prompted?
  2. Do they return with unstructured questions?
  3. Do they reference past interactions as shared context?

These are messy signals. They don’t fit neatly into dashboards. But they’re leading indicators of trust.

One SaaS team I advise noticed that while usage was flat, the length of user-submitted questions was shrinking. People were asking just enough to get by. That was the real warning sign — months before revenue dipped.
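A signal like this is easy to instrument. Here is a minimal sketch, in Python, of tracking average question length per period as a leading trust indicator. The function name and the sample data are hypothetical illustrations, not from any team mentioned above:

```python
from datetime import date
from statistics import mean

def avg_question_length(questions: list[str]) -> float:
    """Mean word count of user-submitted questions in a period."""
    if not questions:
        return 0.0
    return mean(len(q.split()) for q in questions)

# Hypothetical monthly samples of user-submitted questions.
monthly_questions = {
    date(2024, 1, 1): ["How do I export a report with custom filters?",
                       "Can I compare this quarter against last year?"],
    date(2024, 2, 1): ["How do I export?", "Where are filters?"],
}

trend = {month: avg_question_length(qs)
         for month, qs in monthly_questions.items()}

# A shrinking average suggests users are asking just enough to get by,
# often well before it shows up in usage or revenue metrics.
for month, length in sorted(trend.items()):
    print(month.isoformat(), round(length, 1))
```

The point isn’t the arithmetic; it’s that a messy, qualitative signal can still be trended over time and reviewed alongside the clean lagging metrics.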

The Economics of Not Being Invited In

The brutal truth posts about SaaS economics are uncomfortable because they surface something many founders feel but don’t articulate: the old growth stories don’t work anymore.

But I don’t think the problem is saturation alone. It’s relational debt.

We’ve built an industry around renting tools without earning trust. Subscriptions scale faster than relationships — until they don’t.

When people don’t feel invited into the intent of a product, they treat it transactionally. They minimize usage. They compare constantly. They leave when the math stops working.

This is where features like trace labels or purpose annotations — often dismissed as “enterprise-only” — become interesting. Not because of their functionality, but because of what they acknowledge:

Every connection in a system exists for a reason. Someone made a choice.

Making that visible changes how people engage. It turns systems from black boxes into conversations.

In one requirements tool rollout I observed, teams that used trace labels saw fewer handoff errors — but more importantly, they reported higher confidence in decisions. They understood not just what was linked, but why.

That same principle applies to user-facing products. When intent is explicit, people invest more of themselves.

Designing Invitations, Not Just Interfaces

As a designer, this is where my attention keeps landing lately. On invitations.

An interface can be usable without being inviting. A flow can be efficient without being welcoming. And a product can succeed in the short term without ever earning real engagement.

Designing an invitation means asking:

  • What are we asking of this person, emotionally?
  • What risk are they taking by engaging?
  • How clearly do we explain our side of the bargain?

Practically, this shows up in small but meaningful ways:

  • Explaining why a question is asked, not just that it’s required
  • Showing how past input shaped the present experience
  • Leaving space for uncertainty instead of forcing confidence

In AI products especially, this matters. Task-free intelligence tests and benchmark scores tell us what models can do. They tell us nothing about whether people feel comfortable thinking out loud with them.

The students I mentioned at the beginning didn’t need a better prompt template. They needed reassurance that curiosity wouldn’t be penalized.

What This Means for Our Work

Across all these conversations — education, leadership, metrics, economics — I see the same quiet shift. People are less willing to give the benefit of the doubt.

Not because they’re cynical. Because they’re tired.

Tired of systems that take without explaining. Of products that optimize without listening. Of being called a “user” when they’re really a participant.

As product designers and researchers, we’re in a position to notice this early. To listen for hesitation. For formality. For silence.

Those moments aren’t friction to be smoothed away. They’re questions being asked without words.

If we can learn to answer them — clearly, humbly, and in context — we won’t just build better products.

We’ll build relationships people actually want to show up for.

And in a landscape where everything is measurable, that might be the advantage that still can’t be copied.

Alex Rivera
Product Design Lead

Alex leads product design with a focus on creating experiences that feel intuitive and human. He's passionate about the craft of design and the details that make products feel right.

TOPICS

User Research · Product Design · UX Research · Product Management · Design Thinking
