
When Products Ask Us to Prove Ourselves

Across AI tools, KYC flows, and dense interfaces, users are going quiet at the same moment: when products ask them to prove who they are. That silence isn’t apathy—it’s care.

Maya Chen
6 min read

The Moment People Go Quiet

In a recent research session with undergraduates, I watched a familiar shift happen. We were testing a learning tool with an embedded AI assistant. The students navigated confidently at first—skimming prompts, clicking through examples. Then the assistant invited them to "ask anything."

Almost everyone paused.

One student laughed softly and said, "I don’t know what it wants from me." Another closed the panel entirely. These weren’t novices. They used AI tools daily. But when the system asked them to speak—to initiate, to reveal intent—something tightened. The room got quieter. The energy changed.

I’ve been noticing this quiet across the product conversations I follow: students hesitant to talk to AI, teams debating heavier KYC flows, designers arguing whether we should show everything again or simplify further. On the surface, these feel like separate debates. Underneath, they share a deeper tension: the moment a product asks someone to prove who they are, or what they want, carries a real psychological cost.

As researchers, we’re good at measuring friction. We’re less practiced at sitting with what that friction means to the people experiencing it.

Identity Is a Request, Not a Field

In UX, we often treat identity as a form to complete. Name, password, verification code. In product conversations, it shows up as KYC, authentication, or personalization. But in sessions, identity behaves more like a negotiation.

When we ask users to verify themselves, we’re not just preventing fraud. We’re asking for exposure.

One fintech study I worked on last year tested a revised KYC flow intended to feel more transparent. We added explanations for why each document was needed and how long verification would take. Objectively, the flow was better. Subjectively, something else surfaced. Participants described the process as "being evaluated" and "hoping I don’t do something wrong."

That emotional framing matters. Industry data puts drop-off at KYC steps anywhere between 20% and 40%, depending on complexity. We often attribute that loss to impatience. In interviews, people told a different story: uncertainty about how their information would be judged, stored, or used against them later.

This reframes the work. The core question isn’t “How do we verify faster?” It’s:

  • Have we acknowledged the vulnerability we’re asking for?
  • Have we made the criteria legible, not just the steps?
  • Have we shown what happens if someone hesitates or makes a mistake?

When identity becomes a test, silence is a rational response.

People don’t resist verification because they’re careless. They resist because they’re being careful.
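
To make that concrete, here’s a minimal sketch, in TypeScript with entirely hypothetical names, of what it might look like to treat a verification step as something that carries its own rationale, judging criteria, and failure path, rather than just a field to fill:

```typescript
// A hypothetical shape for one KYC step. The point is that rationale,
// criteria, and the failure path become first-class content, reviewed
// alongside the flow itself. None of this is a real API.
interface VerificationStep {
  id: string;
  request: string;      // what we're asking for
  rationale: string;    // why we're asking: acknowledges the exposure
  criteria: string[];   // how the submission will be judged, made legible
  onFailure: {
    consequence: string; // what actually happens if it doesn't pass
    recovery: string;    // how to get help or try again
  };
}

const proofOfAddress: VerificationStep = {
  id: "proof-of-address",
  request: "A recent utility bill or bank statement",
  rationale: "Regulation requires us to confirm where you live. We don't share this.",
  criteria: [
    "Dated within the last three months",
    "Shows your full name and current address",
  ],
  onFailure: {
    consequence: "Nothing is lost. Your application stays open.",
    recovery: "Upload a different document, or message support from this screen.",
  },
};
```

Nothing here is clever. But when the failure path is a required field, hesitation and mistakes get designed for up front instead of discovered later in support tickets.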

Why “Talk to the AI” Feels Risky

The same dynamic shows up in student reluctance to engage with AI tools. Recent surveys suggest that while over 70% of college students have tried generative AI, far fewer use it conversationally or reflectively. Many stick to copy‑paste prompts or one‑off tasks.

In sessions, students explained why:

  • "I don’t want to ask something stupid."
  • "What if it remembers this?"
  • "I’m not sure what’s allowed."

These aren’t usability issues. They’re social and cognitive risks. Talking to an AI feels like performing competence without knowing the rules of the room.

One student compared it to office hours: "You only go when you’re sure you have a good question." That analogy stuck with me. We’ve designed AI as always‑available, but we haven’t designed for the anxiety of being seen—even by a system.

From a design perspective, this explains why feature adoption stalls even when capability is high. From a human perspective, it’s completely understandable.

Practical shifts I’ve seen help:

  • Model imperfect questions. Showing messy, half‑formed prompts reduces performance pressure.
  • Make forgetting visible. Explicitly communicate what isn’t stored or remembered.
  • Offer low‑stakes entry points. Reactions, sliders, or examples that don’t require authorship.

None of these are technical breakthroughs. They’re signals of safety.
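
As one illustration, here’s a small TypeScript sketch of how the first two shifts might shape an assistant’s opening state. The copy and names are invented for this post, not drawn from any real product:

```typescript
// Illustrative only: an assistant intro that models imperfect questions
// and makes "forgetting" visible. Names and copy are hypothetical.
interface AssistantIntro {
  starterPrompts: string[]; // deliberately messy, half-formed examples
  dismissCopy: string;      // explicit permission to ignore them
  retentionNotice: string;  // states plainly what is not stored
}

const intro: AssistantIntro = {
  starterPrompts: [
    "explain this, but I'm confused about the middle part",
    "not sure where to start with the reading... help?",
    "is this a dumb question: what's the difference between the two models?",
  ],
  dismissCopy: "You can ignore these and ask anything, in any shape.",
  retentionNotice: "This chat isn't saved after you close it, and no one else sees it.",
};
```

The messy prompts do the real work: they lower the bar for what a “good enough” question looks like.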

The Return of “Show Everything” — and Why It’s Not Enough

Another conversation gaining momentum argues that minimalism has gone too far—that users want dense interfaces again. I agree with the diagnosis, but not always the prescription.

Information density doesn’t automatically create trust. In some contexts, it increases the burden of judgment.

I’m reminded of a QA discussion circulating recently: Why do we still ask users to enter their password twice? Testing shows it doesn’t meaningfully reduce errors. But it persists because, to the system, it feels like diligence.

For users, it often feels like suspicion.

When we “show everything” without helping people interpret it, we’re shifting responsibility onto them. That’s fine when stakes are low. It’s exhausting when they’re not.

Research on cognitive load consistently shows that error rates increase when users must self‑validate under uncertainty, even with more information available. The problem isn’t visibility—it’s unsupported judgment.

Better patterns I’ve seen:

  1. Progressive disclosure with rationale — not just hiding, but explaining why.
  2. System confidence cues — showing when the product is certain, so users don’t have to be.
  3. Explicit permission to proceed imperfectly — acknowledging that mistakes are expected and recoverable.

These patterns respect autonomy without demanding vigilance.
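
To show what I mean, here’s a hedged TypeScript sketch of the first two patterns: a disclosure section that carries its own rationale, plus a confidence cue the system states so the user doesn’t have to infer it. Everything here is illustrative, not a real component API:

```typescript
// Sketch of progressive disclosure with rationale and a system
// confidence cue. All names are hypothetical.
type Confidence = "high" | "medium" | "low";

interface DisclosureSection {
  label: string;
  whyHidden: string;            // shown next to the expand control
  systemConfidence: Confidence; // what the product itself is sure of
  recoverable: boolean;         // whether mistakes here can be undone
}

const advancedSettings: DisclosureSection = {
  label: "Advanced settings",
  whyHidden: "Most people never need these. We've applied safe defaults.",
  systemConfidence: "high",
  recoverable: true,
};

// Pair the rationale with explicit permission to proceed imperfectly.
function expandHint(section: DisclosureSection): string {
  const safety = section.recoverable
    ? "You can change anything here later."
    : "Review carefully; this can't be undone.";
  return `${section.whyHidden} ${safety}`;
}
```

The design choice worth noticing: the rationale and the recovery promise travel together, so expanding a section never feels like accepting new risk.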

Designing for Dignity in High‑Friction Moments

Across KYC flows, AI conversations, and dense interfaces, the same principle applies: high‑friction moments deserve more care, not just more clarity.

Care shows up in small decisions:

  • The copy that says "You can come back to this later"—and means it.
  • The empty state that reassures instead of instructs.
  • The metric review where we ask, “Who left quietly, and why?”

One case that stays with me involved a healthcare portal redesign. Drop‑offs during identity verification were high. Instead of optimizing steps, the team added a single screen explaining what would happen if verification failed—and how to get help. Completion rates improved modestly. Support calls dropped significantly. More importantly, interviewees described feeling "less judged."

That’s not a KPI we track often. But it’s one people remember.

Good products don’t just reduce effort. They reduce the fear of being wrong.

Coming Back to the Quiet

When people hesitate, close panels, or abandon flows, it’s tempting to see indecision. What I see more often is discernment.

The current conversations in our field—about AI adoption, verification, information density—are circling the same truth: users are navigating social risk, not just usability. They’re deciding when it’s safe to speak, to identify themselves, to trust that the system will respond with care.

As researchers and designers, our job isn’t to eliminate that risk entirely. It’s to acknowledge it, design with it, and never mistake silence for apathy.

Back in that student session, we changed one thing before the next round. Instead of asking them to “ask anything,” we offered a few imperfect starting points and said, "You can ignore these." The conversations opened up. Not because the AI got smarter—but because the room felt safer.

That’s the work I hope we keep doing. Not just building systems that work, but systems that let people show up without having to prove themselves first.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research, Product Design, UX Research, Design Thinking, Human-Centered Design
