
When Products Get Smarter, People Don’t Get Calmer

As AI features multiply, products are quietly changing their emotional contract with users. What I’m seeing in research sessions suggests intelligence without empathy often shows up as pressure — especially under stress.

Maya Chen
7 min read

The moment the room changed

Last Tuesday, I was watching a remote usability session with a procurement manager named Luis. He’d been using the same B2B platform for six years. Confident. Fast. Almost bored. Then a new AI panel slid in from the right.

It started suggesting next steps.

Luis stopped mid-sentence. His shoulders rose. He laughed — the quick, polite kind — and said, “Oh. It’s thinking now.”

Nothing was broken. The suggestions were technically good. But the energy in the room shifted. What had been fluent became careful. He started rereading things he normally skimmed. His mouse slowed. Ten minutes later he told us, quietly, that he’d probably turn the feature off “once this is out of beta.”

That moment has stuck with me as I’ve watched the latest round of conversations ripple through the product community — debates about whether every product needs AI features, think pieces about empathy as a competitive advantage, warnings that judgment has gotten heavier even as workflows get faster. On the surface, these feel like separate threads. Underneath, they’re circling the same tension.

As our products get smarter, we are designing into more emotionally charged moments — often without realizing it.

And intelligence, when it shows up at the wrong time or in the wrong tone, doesn’t feel helpful. It feels like pressure.

Intelligence changes the emotional contract

When teams talk about adding AI — chatbots, predictive analytics, content generation — the conversation usually starts with capability. What can it do now? What could it automate? The underlying assumption is that more intelligence equals more value.

But in research, I’m seeing something subtler.

Intelligence changes the emotional contract between a person and a product. It shifts the relationship from tool → collaborator. And collaboration, even with software, comes with social and psychological weight.

A few patterns I’ve noticed across studies this year:

  • Users become more self-conscious when systems appear to “judge” or “anticipate” them.
  • Mistakes feel heavier when an intelligent system witnesses them.
  • Silence or hesitation increases when people aren’t sure why a system is recommending something.

This isn’t theoretical. In a 2024 survey by the Nielsen Norman Group, 42% of users reported feeling less confident in their own decisions when AI suggestions were present, even when those suggestions improved task success. Success went up. Comfort went down.

That tradeoff matters.

Because confidence isn’t just a feeling — it’s a prerequisite for action. Especially in high-stakes or stressful contexts.

Stress is where products reveal their values

There’s a line I keep coming back to from a recent article: “Most products look great when users are calm.”

Stress is the real test.

In healthcare tools, stress shows up when time is short. In finance products, it appears when money is on the line. In workplace software, it surfaces when accountability is unclear or consequences feel personal.

In those moments, users don’t want more options. They want orientation.

In one study I ran with an operations team last fall, we introduced an AI-generated “recommended action” during incident response. On paper, it reduced resolution time by 18%. Leadership loved it.

But when we interviewed responders afterward, a different story emerged.

“I spent the first minute just trying to decide whether I was allowed to disagree with it.”

That minute wasn’t captured in the metrics. But emotionally, it was everything.

Designing for stress means understanding:

  • Cognitive load spikes before task load does
  • People default to risk-avoidance when they feel observed
  • Ambiguity feels more threatening when a system appears confident

AI often amplifies all three.

Empathy isn’t a feature — it’s restraint

There’s been a resurgence of writing about empathy as the “secret” behind great products. I understand the impulse. Empathy feels like a counterweight to automation. A reminder that humans still matter.

But empathy isn’t something you bolt on after the roadmap is done.

In practice, empathy shows up as restraint.

It’s the decision not to surface a recommendation until someone asks.

It’s choosing language that leaves room for disagreement.

It’s acknowledging uncertainty instead of masking it with confidence.

One of the most effective AI interactions I’ve seen recently was in a legal research tool. Instead of saying, “Here’s the best precedent,” it said:

“Here are three patterns I’m noticing. Tell me which direction you’re considering.”

Usage was slower. Satisfaction scores were higher. Trust increased over time.

This aligns with what behavioral psychology has shown for decades: people are more comfortable with guidance when they feel agency is preserved. A 2023 Stanford study found that decision-support systems framed as “options” rather than “answers” led to 27% higher long-term adoption.
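To make that difference concrete, here’s a minimal sketch of “options, not answers” framing. This is hypothetical TypeScript of my own, not the legal tool’s actual implementation; the Suggestion shape and the function names are invented for illustration.

```typescript
// Hypothetical sketch: the same suggestions, framed two ways.
// Nothing here comes from a real product; names are illustrative only.

interface Suggestion {
  pattern: string;     // what the system noticed
  confidence: number;  // 0..1, surfaced instead of hidden
}

// "Answer" framing: asserts a single best result and invites compliance.
function frameAsAnswer(suggestions: Suggestion[]): string {
  const top = [...suggestions].sort((a, b) => b.confidence - a.confidence)[0];
  return `Here's the best precedent: ${top.pattern}.`;
}

// "Options" framing: presents patterns, admits uncertainty, and hands
// the decision back to the person.
function frameAsOptions(suggestions: Suggestion[]): string {
  const lines = suggestions.map(
    (s, i) => `${i + 1}. ${s.pattern} (confidence: ${Math.round(s.confidence * 100)}%)`
  );
  return [
    "Here are some patterns I'm noticing:",
    ...lines,
    "Tell me which direction you're considering.",
  ].join("\n");
}

const patterns: Suggestion[] = [
  { pattern: "Courts weigh factor A heavily", confidence: 0.72 },
  { pattern: "Recent rulings split on factor B", confidence: 0.55 },
];
console.log(frameAsAnswer(patterns));
console.log(frameAsOptions(patterns));
```

The underlying suggestions are identical in both functions; only the posture changes. That restraint in copy, not better model output, is what the satisfaction numbers seem to reward.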

Empathy here isn’t about being nice. It’s about respecting the user’s role in the decision.

Judgment is the new bottleneck

There’s a quiet undercurrent running through many of these discussions: If AI can do so much, what’s left for us?

From what I’m seeing, what’s left is judgment — and it’s heavier than ever.

Design tools are faster. Research synthesis is accelerated. Prototypes multiply overnight. But someone still has to decide:

  • When intelligence should appear
  • How certain it should sound
  • Who bears responsibility when it’s wrong

Those aren’t technical questions. They’re human ones.

In one product review recently, a team proudly showed how their system could auto-resolve 70% of support tickets. Then a researcher asked, “What happens in the 30%?”

Silence.

The answer wasn’t that they didn’t know. It was that they hadn’t sat with the human cost of those failures yet.

Judgment means staying present for that discomfort. Not outsourcing it to metrics.

What I’m learning to look for now

As I follow these conversations — and sit in more sessions like the one with Luis — I’m adjusting what I pay attention to.

Here are a few signals I’ve learned not to ignore:

  1. Hesitation after automation. If users slow down after a “smart” feature appears, something emotional is happening.
  2. Polite compliance. When people follow recommendations without enthusiasm, they may be protecting themselves.
  3. Requests to turn things off. Opt-outs aren’t rejection. They’re information about timing and trust.
  4. Language shifts under stress. Listen for “I guess,” “probably,” and “it wants me to…”; these are clues about agency (a rough way to scan for them appears below).

None of these show up cleanly on dashboards. All of them shape whether a product becomes part of someone’s working life or something they tolerate.
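The fourth signal does leave a rough trace, though. Here’s a minimal sketch of scanning a session transcript for agency-hedging language; the phrase list, function name, and transcript format are all assumptions of mine, and nothing like this replaces actually watching the sessions.

```typescript
// Hypothetical sketch: counting agency-hedging phrases in a transcript.
// The phrase list is illustrative, not a validated instrument.

const HEDGING_PHRASES = ["i guess", "probably", "it wants me to"];

function hedgingCounts(transcript: string): Map<string, number> {
  const text = transcript.toLowerCase();
  const counts = new Map<string, number>();
  for (const phrase of HEDGING_PHRASES) {
    // Count non-overlapping occurrences of each phrase.
    const hits = text.split(phrase).length - 1;
    if (hits > 0) counts.set(phrase, hits);
  }
  return counts;
}

// Compare the same participant before and after the "smart" feature appears.
console.log(hedgingCounts("I'll filter by region, then sort by spend."));
// => Map(0) {}
console.log(hedgingCounts("I guess I'll click whatever it wants me to click."));
// => Map(2) { "i guess" => 1, "it wants me to" => 1 }
```

A rising count proves nothing on its own, but it tells you which sessions are worth rewatching.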

Coming back to the room

I keep thinking about Luis and that small laugh when the AI panel appeared. It wasn’t delight or fear. It was recognition.

He knew, instinctively, that the relationship had changed.

As product designers and researchers, we’re being asked to design not just interactions, but relationships under pressure. Intelligence raises the stakes. Empathy sets the tone. Judgment decides the outcome.

The question isn’t whether our products should get smarter.

It’s whether they know when to be quiet.

And whether we’re willing to sit with users long enough — especially when they’re stressed — to learn the difference.

That work is slower. Harder to headline. Less impressive in demos.

But it’s where trust actually forms.

And trust, once broken by a system that tried too hard to be helpful, is very hard to automate back.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research · Product Design · UX Research · AI in UX · Design Thinking

