Speed Is a Feature. Understanding Is a Practice.
AI can synthesize ten interviews in minutes. But speed doesn’t equal understanding. A reflection on judgment, research, and designing responsibly in an accelerated world.
Last week, I watched two product teams celebrate very different wins.
One had built a Claude agent that synthesized ten customer interviews in under five minutes. The Slack thread was full of rocket emojis and screenshots of beautifully structured summaries. "We just saved a day of work," someone wrote.
The other team had embedded a lightweight research loop directly into a live redesign for a platform with over 3 million users. No fireworks. Just a quiet note in the sprint recap: "Changed the primary CTA after observing hesitation in first-time sessions. Conversion up 4.2% this week."
Both teams were solving real problems. Both were pushing the craft forward. But the contrast stayed with me. We are getting incredibly good at speeding up the artifacts of research — transcripts, summaries, dashboards. The harder question is whether we’re also getting better at the practice of understanding.
As a design lead, I’m not anti-automation. I love a well-built system. I care deeply about removing friction from workflows. But I also know this: the work of design isn’t just about processing information. It’s about developing judgment. And judgment doesn’t come from speed alone.
The Compression of Insight
Let’s start with what’s genuinely exciting.
Tools that can synthesize ten interviews in minutes are not trivial. According to a 2024 survey by the Nielsen Norman Group, research teams report spending up to 40% of their project time on synthesis and reporting. That’s real cost — not just in hours, but in momentum. When insights sit in folders waiting to be summarized, they decay.
Automation can:
- Identify recurring themes across transcripts
- Surface sentiment patterns
- Highlight contradictions or outliers
- Generate first-pass summaries that make sharing easier
These are meaningful gains. They reduce the administrative weight of research. They make knowledge more portable.
But synthesis is not the same as sense-making.
A language model can cluster statements about "confusion during onboarding." It can’t feel the tension in a participant’s voice when they say, "I guess I’d just give up." It doesn’t experience the long pause before someone clicks the wrong button — again.
In design, the texture of insight matters. Not just what was said, but how it was said. Not just frequency, but intensity. Not just patterns, but stakes.
When we compress research into faster summaries, we have to be careful not to compress the meaning along with it.
The Rise of Real-Time Research
What struck me about the team embedding research into a live redesign wasn’t the 4.2% conversion lift. It was how they structured their decision-making.
They didn’t rely solely on A/B testing. They paired behavioral data with rapid, lightweight observation:
- 20-minute usability sessions with first-time users each week
- Heatmaps and session replays to identify hesitation points
- Micro-surveys triggered after key actions
This wasn’t "big research." It was continuous attention.
We’ve spent years treating research as a phase: discover, synthesize, hand off. Increasingly, I’m seeing teams treat it as infrastructure — something that runs alongside shipping, not before it.
And here’s the interesting tension: AI accelerates the analysis of inputs, but it doesn’t replace the need for intentional research design.
If anything, it raises the bar.
When it becomes easy to process large volumes of data, the differentiator is no longer "who can analyze fastest." It’s:
- Who asked the right questions?
- Who recruited the right participants?
- Who noticed the signal that wasn’t obvious?
Speed makes volume possible. Judgment makes volume useful.
The Bot Is the Easy Part
Several conversations this week circled around a similar idea: building the AI-powered research bot is straightforward. The hard part is what happens after.
How do insights move?
I’ve worked on teams where beautifully synthesized research sat untouched in Notion. Not because it wasn’t valuable. Because no one owned the translation from "interesting" to "actionable."
Knowledge mobilization — actually getting insight into design decisions — requires:
- Clear ownership: Who is responsible for turning this into a change?
- Timing: When does this input influence roadmap decisions?
- Context: What trade-offs are we willing to make based on it?
In one project, we used AI to cluster over 600 pieces of user feedback from support tickets. The themes were solid. But the real breakthrough came when we printed the top 20 quotes and taped them to the design wall.
Something changed when engineers walked by and read:
"I feel stupid every time I try to use this."
No dashboard metric carried that same weight.
The bot made the pattern visible. The human moment made it matter.
As designers, we need both. But we shouldn’t confuse the two.
When Infrastructure Meets Intimacy
Another thread this week touched on healthcare standards like FHIR (Fast Healthcare Interoperability Resources) — invisible infrastructure shaping real patient experiences. It reminded me that product decisions often operate at two levels simultaneously:
- Structural: APIs, data models, performance, compliance
- Experiential: Trust, clarity, emotional safety
AI tooling lives mostly in the structural layer. It optimizes workflows, accelerates processes, reduces manual effort.
But the outcomes we care about — confidence, trust, relief — live in the experiential layer.
Here’s where I think we need to be more explicit as a community:
Automation should protect time for deeper thinking. Not replace it.
If AI synthesis saves a day, what do we do with that day?
- Run an extra usability session?
- Refine interaction details that reduce cognitive load?
- Revisit edge cases for accessibility?
Or do we simply start the next project sooner?
As someone who obsesses over interaction details — the spacing of form fields, the language of error states, the affordance of a primary button — I worry that we’ll use speed to scale output instead of depth.
And depth is where differentiation lives.
The New Skill: Designing With Acceleration
We’re entering a phase where acceleration is assumed. AI summaries, automated clustering, real-time dashboards — these will become table stakes.
The emerging craft isn’t "how to use AI." It’s how to design responsibly in a world where everything moves faster.
From what I’m seeing, that craft has three parts.
1. Curate the Input
When analysis is cheap, input quality matters more.
Be rigorous about:
- Who you recruit (are they actually your target users?)
- How you frame questions (are you leading?)
- What context you capture (environment, constraints, emotion)
Garbage in, beautifully synthesized garbage out.
2. Stay Close to Raw Material
Even if an agent summarizes ten interviews, watch at least two of them yourself, even in part.
Read full transcripts occasionally.
Listen for tone.
There is a qualitative difference between "Users struggle with navigation" and hearing someone say, with a sigh, "I just don’t know where to look."
One is information. The other is empathy.
3. Slow Down at the Decision Point
Ironically, the faster our inputs, the more intentional our decisions must be.
Before acting on AI-generated themes, ask:
- What assumptions are we making about causality?
- What might be underrepresented in this dataset?
- What would disconfirm this insight?
This isn’t about distrust. It’s about craft.
Design has always required holding ambiguity. Faster synthesis doesn’t eliminate ambiguity — it just packages it more neatly.
What We’re Actually Building
There was a Hacker News thread this week about a UUID v4 collision — an event so statistically unlikely it feels almost theoretical. And yet, it happened.
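Just how unlikely is "unlikely"? A quick back-of-the-envelope sketch, using the standard birthday-problem approximation for a version 4 UUID's 122 random bits (the figures here are the approximation's output, not a claim about any specific incident):

```python
import math

# A version 4 UUID has 128 bits, of which 6 are fixed (version + variant),
# leaving 122 bits of randomness.
RANDOM_BITS = 122
SPACE = 2 ** RANDOM_BITS

def collision_probability(n: int) -> float:
    """Birthday-bound approximation: P(collision) ~ 1 - exp(-n(n-1) / (2 * SPACE))."""
    # math.expm1 keeps precision for the tiny exponents involved here.
    return -math.expm1(-n * (n - 1) / (2 * SPACE))

# Even at a trillion UUIDs, the odds remain astronomically small.
for n in (10**6, 10**9, 10**12):
    print(f"{n:>15,} UUIDs -> collision probability ~ {collision_probability(n):.2e}")
```

At a billion generated IDs the probability is on the order of 10^-19 — which is exactly why a reported collision reads as almost theoretical, and why the exceptions are worth paying attention to when they do appear.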
It’s a good metaphor for this moment.
We design systems assuming probabilities. We automate based on patterns. We optimize for what happens most of the time.
But users live in the exceptions.
The edge case where the form fails. The rare configuration that breaks the flow. The moment when someone is tired, anxious, or in a hurry.
AI helps us see the common patterns faster. It does not absolve us from caring about the outliers.
And often, it’s in the outliers that the most humane design decisions are made.
I don’t think the future of product work is slower. It won’t be. The economic and competitive pressures are real. Teams that can synthesize faster and experiment more efficiently will ship more.
But I do think the future of good product work will depend on a quiet discipline: protecting space for understanding in a culture of acceleration.
Because speed is a feature. It improves workflows. It reduces friction. It unlocks scale.
Understanding is a practice. It requires attention, humility, and time — even if that time is now more intentionally chosen.
As designers, we don’t just build interfaces or systems. We shape how organizations pay attention.
If we let automation dictate the tempo entirely, we risk losing the very thing that makes our work matter.
But if we use acceleration to clear space for better questions, sharper judgment, and more human decisions — then this moment becomes less about replacing thinking, and more about refining it.
That’s the craft I’m interested in.
Not how fast we can summarize ten interviews.
But how wisely we act on what they’re trying to tell us.
Alex leads product design with a focus on creating experiences that feel intuitive and human. He's passionate about the craft of design and the details that make products feel right.