
What Our Tools Are Teaching Us to Care About

As building gets easier, our tools are quietly teaching us what to care about. A reflection on incentives, judgment, and the human cost of constant progress.

Maya Chen
7 min read

Last week I watched a designer build a small internal tool for her team. Not a startup. Not a side project meant to ship. Just something to make a recurring task less painful. She was using one of the new generation of AI-assisted builders—what people are loosely calling “vibe coding.”

About twenty minutes in, she stopped and said, half to herself, “It keeps rewarding me for finishing things, not for figuring out if they should exist.”

That sentence stayed with me. Not because it was clever, but because it named something I’ve been feeling across a lot of conversations lately—on Hacker News threads about Guix vs. Nix, in Medium essays about reinforcement learning for product teams, in quiet Slack DMs from researchers wondering what their job is becoming. We’re not just changing how products are built. We’re changing what gets reinforced along the way.

As a researcher, I’m trained to pay attention to those offhand comments. They’re often where the real data lives.

Incentives Are Design, Even When We Pretend They’re Not

One of the louder ideas circulating right now is reward thinking: borrowing concepts from reinforcement learning to design better metrics and incentives for teams. The framing is usually technical—states, actions, rewards—but the heart of it is deeply human.

People do what their environment quietly thanks them for.
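If the borrowed vocabulary feels abstract, the dynamic fits in a few lines of code. Here is a minimal sketch of the idea as a toy bandit problem; the action names and reward values are mine, invented purely for illustration, not drawn from any real tool or team. An agent that mostly repeats whatever has paid off, placed in an environment that pays only for completion, learns to complete.

```python
import random

# Toy actions; the names and payoffs are hypothetical, for illustration only.
ACTIONS = ["ship_something", "ask_if_it_should_exist"]
REWARDS = {"ship_something": 1.0, "ask_if_it_should_exist": 0.0}

def run(episodes=1000, epsilon=0.1):
    """Epsilon-greedy bandit: mostly exploit what has paid off, rarely explore."""
    value = {a: 0.0 for a in ACTIONS}   # running average reward per action
    count = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)       # explore
        else:
            action = max(ACTIONS, key=value.get)  # exploit the known winner
        count[action] += 1
        # Incremental average: the environment's "quiet thanks" accumulates.
        value[action] += (REWARDS[action] - value[action]) / count[action]
    return count

print(run())  # roughly {'ship_something': 950, 'ask_if_it_should_exist': 50}
```

Swap the reward values and the behavior flips just as reliably. The loop has no opinion about which habit it is reinforcing.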

I’ve seen this play out in research orgs for years. When teams are rewarded for velocity, they ship studies fast. When they’re rewarded for confidence, they produce clean narratives. When they’re rewarded for learning, something rarer happens: they linger in ambiguity.

A 2023 internal analysis at a large SaaS company (shared with me during a consulting engagement) showed that teams whose performance reviews emphasized learning milestones over delivery milestones ran 27% fewer studies—but those studies were referenced nearly twice as often in later product decisions. Fewer outputs. More impact.

What’s changing now is that these incentives are being baked directly into our tools:

  • Builders that celebrate completion states
  • Analytics dashboards that highlight movement, not meaning
  • AI copilots that optimize for fluency, not understanding

None of this is malicious. But it is formative. Tools don’t just enable behavior; they normalize it.

When Building Gets Easier, Stopping Gets Harder

The conversations about the “death of the traditional MVP” and the rise of vibe coding are often framed as liberation. And in many ways, they are. Designers are building. Researchers are prototyping. PMs are shipping things that used to take entire teams.

I love this. I really do.

But there’s a moment I keep seeing in studies with these tools. Someone spins something up quickly. It works. It gets a small hit of validation. And then—almost imperceptibly—the question shifts from “Is this the right thing?” to “How do we improve it?”

That shift matters.

In one recent diary study with solo builders (n=18), 14 participants reported continuing a project primarily because “it already existed.” Not because users needed it. Not because it fit a strategy. Because it felt wasteful to stop. The sunk-cost effect, compressed into days instead of months.

Ease accelerates momentum. Momentum resists reflection.

This is where the reinforcement learning analogy becomes uncomfortably precise. When every small success is immediately rewarded—by a tool, by an audience, by your own sense of progress—you create a feedback loop that’s very good at local optimization and very bad at long-term judgment.
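To see why that is a local-optimization trap and not just a metaphor, here is another deliberately tiny sketch, with a toy landscape and made-up numbers. A greedy climber that only accepts moves that pay off immediately settles on the nearest modest peak and never crosses the dip toward the higher one.

```python
from math import exp

def quality(x):
    # Toy landscape: a modest peak near x=2 and a much higher one near x=8.
    return exp(-(x - 2) ** 2) + 2.0 * exp(-(x - 8) ** 2)

def greedy_climb(x, step=0.1, iters=200):
    # Accept a move only if it improves things *right now*:
    # every small success is rewarded, dips are never tolerated.
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if quality(candidate) > quality(x):
                x = candidate
    return x

x = greedy_climb(0.0)
print(round(x, 1))  # ~2.0: stuck on the nearby peak, blind to the one at 8
```

Reaching the better peak requires stepping through territory that scores worse for a while, which is exactly what a tight reward loop teaches you not to do.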

Systems Think in Defaults, People Think in Exceptions

One of the quieter threads trending this week was a Hacker News post about first impressions of Guix from a Nix user. On the surface, it’s a niche discussion about functional package managers. Underneath, it’s a conversation about how systems encode philosophy.

Guix and Nix make different things easy, different things hard. They don’t force behavior—but they invite it.

I’m increasingly convinced that product teams are facing a similar tension. Our systems—roadmaps, metrics, AI tools, deployment pipelines—think in defaults. Humans think in exceptions. In edge cases. In stories.

During a research session last quarter, a PM was observing quietly as a participant struggled with a workflow that technically “worked.” The metrics said it was fine. The system had no error.

Afterward, the PM said, “I knew it was okay because the dashboard was green. But watching her, it didn’t feel okay.”

That gap—the space between systemic success and human experience—is where judgment lives. And judgment is what’s getting least reinforced right now.

What Gets Lost When Everyone Can Build

There’s a wonderful Medium piece circulating about a UX designer building apps on their own, and the lessons they learned along the way. Many of those lessons are joyful: empathy for developers, respect for constraints, the thrill of making something real.

There’s another lesson, less celebratory, that comes up in research debriefs.

When everyone can build:

  • Saying no feels personal
  • Stopping feels like failure
  • Critique feels like slowing someone down

I’ve watched teams hesitate to question a prototype because it was clearly someone’s evenings and weekends. I’ve watched researchers soften findings because they knew how fast something was built.

Care shifts from the user to the maker.

This isn’t about blaming individuals. It’s about recognizing that lowering the cost of creation raises the emotional cost of critique. That’s a tradeoff we rarely name.

Practicing a Different Kind of Reward

So what do we do with this? We can’t—and shouldn’t—roll back the tools. But we can design new kinds of reinforcement around them.

Here are a few practices I’ve seen help, grounded in real teams:

  1. Reward subtraction, not just addition
    One fintech team I worked with ended sprint reviews by highlighting one thing they chose not to pursue and why. Over six months, their backlog size shrank by 18%, and engagement scores went up.

  2. Make reflection a first-class artifact
    Not a retro slide no one reads, but a living document: what we believed, what surprised us, what still feels unresolved. Teams that did this referenced past research 1.6x more often in planning meetings.

  3. Slow down the win
    When a prototype tests well, wait a week before greenlighting iteration. Let the initial reward fade. See what still feels true.

  4. Separate building from deciding
    Encourage building as exploration, not commitment. Explicitly mark some things as rehearsals, not promises.

These aren’t process hacks. They’re attempts to rebalance what our environment thanks us for.

Coming Back to the People in the Room

I keep thinking about that designer and her half-joking comment. Tools rewarding completion. The quiet pressure to keep going.

In research, we’re taught to notice pauses, hesitations, moments where language breaks down. I think the same skill applies here. We need to notice where our systems rush past human uncertainty—and decide whether to follow.

Progress is easy to measure. Care is not.

But care leaves traces. In the questions we don’t rush. In the features we don’t ship. In the space we make for someone to say, “I’m not sure yet.”

The future of product work isn’t just faster building or smarter incentives. It’s whether we can design environments that still reward judgment, restraint, and empathy—especially when they’re the hardest to see.

That’s the work I want us to keep doing. Not because it scales beautifully, but because it keeps us human.

Maya Chen
Senior UX Researcher

Maya has spent over a decade understanding how people interact with technology. She believes the best products come from deep curiosity about human behavior, not just data points.

TOPICS

User Research, Product Design, UX Research, Product Management, Design Thinking

