What We’re Validating Now Isn’t the Product — It’s the Relationship

As AI reshapes MVPs and smooths over friction, early success can hide quiet uncertainty. What if what we’re validating now isn’t the product — but the relationship?

Maya Chen
8 min read

The moment that made me uneasy

In a fintech research session a few weeks ago, a participant finished onboarding in under three minutes. No errors. No questions. The AI assistant nudged them through each step, auto-filled most fields, and confirmed everything with a cheerful confidence.

When I asked how they felt about it, they smiled politely and said, “Seems fine.” Then they paused. A long one. Finally: “I’m not sure what it’s doing on my behalf, though. I guess I’ll find out later.”

We marked the task as a success. The metrics certainly did. But I left that session with a knot in my stomach. Because what we’d really validated wasn’t usability or value — it was how willing someone was to defer understanding.

That tension has been surfacing across the conversations I’ve been watching this week: AI redefining MVPs, renewed explainers on what UX research even is, quiet warnings about trust breaking before metrics do. On the surface, they look like separate threads. But underneath, they’re circling the same question:

What are we actually testing when products act for people before they understand them?

When MVPs become promises, not experiments

For years, MVPs were about exposure. You shipped something small and imperfect so people could react to it — sometimes generously, sometimes harshly. The product showed its seams, and users told you where it didn’t hold.

AI has changed that dynamic.

In many AI-first products, especially in fintech, the MVP no longer looks minimal. It looks confident. Automated decisions, polished language, reassuring summaries. The system often performs well enough out of the gate that teams interpret early adoption as validation.

But what’s being validated has shifted.

Instead of asking:

  • Does this solve a real problem?
  • Is the value clear enough that people will tolerate rough edges?

We’re implicitly asking:

  • Will people trust this without fully understanding it?
  • How much ambiguity will they accept if the outcome feels convenient?

A 2024 McKinsey study found that 65% of consumers say they’re comfortable with AI making financial recommendations, but only 28% say they understand how those recommendations are generated. That gap isn’t just a knowledge problem — it’s a relationship problem.

From a research perspective, this is tricky. Traditional MVP signals — activation, task completion, even short-term retention — can look strong while people are quietly withholding confidence. They’re not opting in so much as going along.

Practical insight

When evaluating AI-driven MVPs, I’ve started treating early success metrics as stress tests for trust, not proof of value. If the product performs an action:

  • Ask what the user thinks they did versus what the system did
  • Listen for language like “I guess,” “it probably,” or “I assume”

Those phrases often mark deferred judgment — not confidence.
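As a rough illustration, hedge phrases like these can be flagged automatically when reviewing session transcripts. This is a minimal sketch of my own, not a tool we used in the sessions above; the phrase list and function name are assumptions for illustration.

```python
import re

# Hedge phrases that often mark deferred judgment rather than confidence.
# Illustrative, not exhaustive -- tune this list to your own transcripts.
HEDGES = ["i guess", "it probably", "i assume"]

def flag_hedges(transcript: str) -> list[str]:
    """Return the sentences in a transcript that contain a hedge phrase."""
    sentences = re.split(r"(?<=[.?!])\s+", transcript)
    return [s for s in sentences if any(h in s.lower() for h in HEDGES)]

notes = ("Seems fine. I'm not sure what it's doing on my behalf, though. "
         "I guess I'll find out later.")
for sentence in flag_hedges(notes):
    print(sentence)  # surfaces the deferred-judgment moment for review
```

A pass like this won’t replace listening, but it can help you find the moments worth replaying.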

Recognition over recall — and the cost of cognitive relief

Another thread gaining traction is the principle of recognition over recall, especially in AI interfaces. The idea is solid: don’t make people remember commands or workflows when the system can surface options contextually.

But in practice, many AI products have interpreted this as: don’t make people think at all.

In one study we ran last year across 12 participants using an AI-powered budgeting tool, everyone completed their tasks faster than with the legacy version. Average task time dropped by 42%. Satisfaction scores ticked up.

Yet when we asked participants to explain what changed in their budget after the AI’s adjustments, only 4 out of 12 could accurately describe it.

Recognition had replaced recall — but it had also replaced comprehension.

This matters because financial tools aren’t just utilities; they’re sensemaking devices. People use them to understand their own behavior. When the interface does all the recognizing, people lose the opportunity to build a mental model.

Cognitive relief feels good in the moment. Cognitive absence shows up later, when something goes wrong.

Where this shows up in research sessions

You can often spot this in subtle ways:

  • Participants nodding along with explanations but failing to paraphrase them
  • High confidence ratings paired with vague descriptions of outcomes
  • Questions that start with “If I wanted to…” instead of “When I…”

These aren’t failures of intelligence or attention. They’re signals that the product has moved faster than understanding.

Trust doesn’t break loudly — it thins

One of the most resonant ideas circulating right now is that trust breaks before metrics do. I’ve seen this play out repeatedly, especially with AI-first products.

Trust erosion rarely looks like abandonment at first. It looks like:

  • Turning off notifications
  • Double-checking outputs elsewhere
  • Avoiding advanced features
  • Hesitating before confirming actions

In a longitudinal study we conducted with a small-business banking platform, usage remained stable for three months after the introduction of an AI cash-flow predictor. But qualitative check-ins revealed a different story.

Participants said things like:

“I still use it, but I don’t rely on it.”

“I check my spreadsheet after, just in case.”

By month four, feature engagement dropped 18%, even though overall logins stayed flat.

The system hadn’t failed. The relationship had thinned.

This is why purely quantitative validation can be misleading right now. AI smooths over friction so effectively that it masks the early signals of doubt.

Practical insight

To catch trust erosion early:

  • Pair success metrics with confidence probes (“How certain do you feel about this outcome?”)
  • Revisit the same users over time; trust is temporal, not transactional
  • Treat redundancy (people checking elsewhere) as a signal, not a nuisance
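One way to operationalize this in analytics: track the ratio of feature engagement to overall logins over time, so a “logins flat, reliance falling” pattern like the 18% drop above surfaces early. A minimal sketch with invented numbers; the metric names and threshold are my assumptions, not any platform’s actual instrumentation.

```python
# Monthly (logins, feature_sessions) pairs -- invented numbers that loosely
# echo the pattern described above: flat logins, thinning reliance.
monthly = {
    "m1": (1000, 400),
    "m2": (1005, 395),
    "m3": (998, 390),
    "m4": (1002, 320),  # feature engagement drops while logins stay flat
}

def reliance_ratio(logins: int, feature_sessions: int) -> float:
    """Share of logins that touch the AI feature -- a rough reliance proxy."""
    return feature_sessions / logins

baseline = reliance_ratio(*monthly["m1"])
for month, (logins, feats) in monthly.items():
    ratio = reliance_ratio(logins, feats)
    drop = (baseline - ratio) / baseline
    flag = "  <- investigate" if drop > 0.10 else ""  # 10% threshold is arbitrary
    print(f"{month}: reliance={ratio:.2f} (drop {drop:.0%} vs m1){flag}")
```

A dashboard showing only logins would miss month four entirely; the ratio makes the thinning visible.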

The quiet redefinition of UX research

Amid all this, I’ve noticed a renewed interest in basic questions: What is UX research? What is it for?

I don’t think that’s accidental.

As products act more autonomously, research risks becoming a confirmation layer — verifying that people didn’t object, rather than understanding how they’re making sense of what’s happening.

In sessions lately, I’ve been paying closer attention to moments that don’t fit neatly into findings decks:

  • The laugh that comes with uncertainty
  • The story that starts unrelated but reveals a coping strategy
  • The offhand comment after the task is “done”

These moments are where people renegotiate their role with the product.

One participant told me, “It’s like having a very capable intern. I trust it with drafts, not decisions.” That single sentence explained more about adoption behavior than any satisfaction score.

What this means for our practice

If UX research is to stay relevant in an AI-shaped landscape, we need to:

  1. Study sensemaking, not just behavior — ask how people explain outcomes to themselves
  2. Value longitudinal insight — understanding evolves after the first success
  3. Protect space for confusion — clarity emerges when we don’t rush past uncertainty

This isn’t about slowing teams down. It’s about making sure speed doesn’t outrun understanding.

What we’re really learning in these conversations

Across all these trends — AI MVPs, trust, recognition over recall, renewed research fundamentals — I see a shared undercurrent:

We’re no longer just designing products. We’re designing delegation.

Every time a system acts for someone, it asks a quiet question: Will you let me? Early validation often measures compliance, not consent.

The work ahead isn’t to strip AI of its power or polish. It’s to ensure that as products become more capable, people don’t become more peripheral.

That means validating not just that something works, but that people:

  • Know what it’s doing
  • Feel they could intervene if needed
  • Understand the consequences of letting it proceed

These are relational qualities. They don’t show up cleanly in dashboards. They show up in pauses, hedges, and follow-up behaviors.

Closing reflections

I keep thinking back to that participant who said, “I guess I’ll find out later.” That sentence wasn’t resistance. It was patience — a willingness to see how the relationship unfolds.

As researchers and designers, we’re being offered the same choice. We can accept early success as proof and move on. Or we can stay with the discomfort of partial understanding and ask harder questions about what people are agreeing to.

The conversations happening right now suggest many of us feel that tension, even if we’re naming it differently. My hope is that we treat it not as a problem to solve quickly, but as a signal worth sitting with.

Because in the end, the most important thing we’re validating isn’t the intelligence of our systems.

It’s the care with which we invite people to trust them.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research, Product Design, UX Research, Product Management, Design Thinking
