The Gap Between What We Buy and What We Use

Across UX debates, AI adoption, and automated research, a quiet pattern is emerging: products fail not at launch, but when they stop keeping their promises in daily use.

Jordan Taylor
7 min read

I've been sitting in on a lot of product conversations lately—design critiques, research readouts, executive check-ins—and there’s a moment that keeps repeating.

Someone says, “The UX is solid.” Heads nod. The roadmap moves on.

But a few weeks later, usage is flat. Or worse, quietly declining. The product hasn’t failed loudly. It’s just… not being chosen.

That gap—between what a company buys into and what people actually use—feels like the real story behind many of the UX conversations happening right now. Especially as AI products accelerate, research gets automated, and “experience” is described more often than it’s examined.

What I’m seeing isn’t a lack of care or skill. It’s something subtler: we’re mistaking conceptual clarity for lived usefulness. And the cost shows up long after launch.

UX Isn’t a Definition Problem. It’s a Commitment Problem.

Skim even a single day of writing in our community and UX is everywhere. UX as a bridge. UX as feeling. UX beyond the screen. UX as strategy.

None of that is wrong. But I’m increasingly convinced the problem isn’t that we don’t know what UX is. It’s that we don’t agree on what we’re committing to when we say we care about it.

In practice, UX often gets treated as:

  • A quality the product has at launch
  • A standard met through usability testing
  • A layer applied once functionality is defined

But for users, UX is experienced as something else entirely:

A series of promises the product makes—and then keeps or breaks over time.

This is why so many well-designed products struggle after procurement or launch, especially in enterprise and AI-heavy environments.

According to Gartner, nearly 30% of enterprise software licenses go unused in a given year. McKinsey reports that over 70% of AI initiatives stall before reaching scale. These aren’t engineering failures. They’re experience failures that unfold slowly, quietly, and rationally from the user’s point of view.

People don’t abandon products because they’re confusing in minute one. They leave because, by week three, the product has taught them it won’t reliably support the way they actually work.

The Hidden Distance Between Executives and Users

One of the most telling trends I saw this week was a piece on the “hidden crisis” in AI adoption—why users abandon what executives buy.

I’ve lived this dynamic from the inside.

A leadership team sees a demo that’s coherent, impressive, forward-looking. The product fits a strategic narrative: efficiency, leverage, scale. The buying decision is rational.

Then the product lands with the team expected to use it.

What they experience is different:

  • The AI assistant that almost understands their domain
  • The workflow that saves time on good days and costs time on bad ones
  • The system that requires careful prompting when they’re already overloaded

No one storms out. No one files a dramatic complaint. They just slowly return to their old tools.

This is where, as a product strategist, I've learned to be very precise. Adoption isn't a referendum on vision. It's feedback on daily cost.

If using the product requires users to:

  • Translate their thinking into the system’s mental model
  • Monitor outputs more closely than advertised
  • Explain away failures to stakeholders

…then the experience debt compounds fast.

This is why UX can’t just be “how it feels.” It’s how much work it asks people to do to make it succeed.

When Research Gets Easier, Understanding Gets Riskier

Another thread gaining traction is AI-driven user research—voice agents conducting interviews, tools summarizing insights instantly, pattern detection at scale.

I tried one of these tools recently. The experience was polished. The questions were technically sound. The transcript was clean.

And yet, something important was missing.

The agent asked what I did. It didn’t notice when I hesitated.

It captured answers, but not stakes.

This is the tradeoff we’re not talking about enough. As research becomes more efficient, it becomes easier to miss the moments that actually shape good decisions.

In my own work, the most consequential insights rarely come from the answer itself. They come from:

  • The pause before someone answers
  • The story they tell instead of the one you asked for
  • The workaround they mention casually at the end

Those moments are hard to automate because they require judgment, not just collection.

This doesn’t mean AI research tools are bad. It means we have to be honest about what they’re good at:

  • Scaling directional understanding
  • Identifying recurring surface-level patterns
  • Reducing synthesis overhead

What they can’t replace is the human ability to sense when something doesn’t quite add up—and to stay with that discomfort long enough to learn from it.

The Difference Between What Users Say and What They Protect

One article this week framed the familiar tension well: users often ask for features that sound good but don’t reflect what they actually need.

I’d take that a step further.

In my experience, users will happily say they want many things. But they will fiercely protect only a few:

  • Their time
  • Their credibility
  • Their sense of control

When a product threatens any of those, no amount of feature parity saves it.

I once worked with a team building an internal platform for analysts. Research feedback was positive. Feature requests were plentiful. On paper, the roadmap was validated.

But adoption lagged.

When we finally sat with a few analysts for a full day—not an interview, just observation—we saw it. The product forced them to double-check outputs before sharing work. That extra verification step wasn’t visible in surveys. But it quietly threatened their credibility.

So they avoided the tool.

This is why I’m wary of treating user needs as a list to be uncovered. Needs reveal themselves through behavior, not articulation.

A few questions I’ve found more useful than “What do you want?”:

  1. What would you be nervous to rely on this for?
  2. When this fails, who does it fail in front of?
  3. What do you double-check—even when you trust the system?

These questions surface the real experience boundaries. The places where trust is provisional, not given.

Designing for Use, Not Approval

Across all these conversations—UX definitions, AI adoption, research tools—I see a shared risk.

We’re optimizing for approval moments instead of use moments.

Approval moments are when:

  • A stakeholder signs off
  • A usability score clears a threshold
  • A demo lands cleanly

Use moments are quieter:

  • When someone is tired and still chooses your product
  • When something goes wrong and they don’t panic
  • When the tool fades into the background instead of demanding attention

As product leaders, our leverage lives in designing for those quieter moments.

Practically, that has changed how I approach decisions:

  • I push teams to test longer arcs, not just first-use flows
  • I care less about whether users understand the feature and more about whether they trust it under pressure
  • I treat adoption data as a lagging indicator of experience debt

None of this fits neatly into a canvas or a metric. But it’s where product-market fit actually earns its keep.

What This All Asks of Us

The deeper implication of these trends isn’t methodological. It’s ethical.

If UX is a promise, then shipping is not the end of the work. It’s the moment the promise becomes testable.

AI will keep accelerating. Research will keep scaling. Tools will keep getting better at answering questions quickly.

Our responsibility is to stay accountable to the people living with the consequences of those answers.

That means designing products that don’t just make sense—but hold up. Products that respect users not as endpoints in a funnel, but as professionals, parents, teammates, humans trying to get through a day.

The gap between what we buy and what we use is where trust is won or lost.

And closing that gap isn’t about better definitions of UX.

It’s about keeping our promises, even when no one is watching.

Jordan Taylor
Product Strategy Consultant

Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.

TOPICS

User Research · Product Design · UX Research · Product Management · Decision-Making
