Designing the Controls, Not the Magic: What Today’s AI Debates Are Really Pointing Toward
As AI reshapes products, the real work isn’t making them smarter—it’s making their decisions visible, steerable, and humane.
The Moment the Demo Fell Apart
Last week, I watched a PM demo an AI-powered workflow to a room full of designers and researchers. The first two minutes were smooth—almost cinematic. A single prompt. A confident click. The system responded with a plan that looked, at first glance, impressively complete.
Then someone asked a simple question: “Why did it choose that?”
The room went quiet. The PM hovered over the interface, scanning for something—anything—that might explain the decision. There was nothing to point to. No trace of inputs, no visible constraints, no way to see what had been assumed versus what had been inferred. The magic trick worked, until it didn’t. And in that moment, you could feel trust drain out of the room.
That moment has stayed with me because it echoes so many of the conversations circulating right now. Agents replacing apps. SaaS being declared dead (again). Research automated into the background. Underneath the headlines, I’m seeing a quieter pattern emerge—one that has less to do with what we’re building, and more to do with what we’re asking people to take on faith.
From Conversations to Control Surfaces
One of the most thoughtful threads I’ve seen this week framed the next wave of agent UX not as chat, but as cockpits. Interfaces built around controls, constraints, and evidence. Places where people steer outcomes instead of politely asking for them.
That framing matters.
Chat-based interfaces are seductive because they feel human. But they also flatten complexity. Everything—intent, uncertainty, system state—gets squeezed into a conversational turn. As a designer, I’ve learned to be suspicious of anything that looks simple while hiding consequential decisions underneath.
When we design agents as cockpits, we’re making a different promise:
- You can see what the system is paying attention to
- You can adjust the boundaries it operates within
- You can inspect the evidence behind an outcome
This isn’t about power users versus novices. It’s about respecting that agency requires legibility.
There’s data backing this up. A 2024 Nielsen Norman Group study on AI-assisted tools found that users were 35% more likely to trust and reuse systems that exposed intermediate reasoning or adjustable parameters, even when outcomes were identical. Trust didn’t come from better answers—it came from better visibility.
As someone who’s spent years working on design systems, I find this familiar. Components aren’t just reusable UI; they’re agreements. About behavior. About states. About what happens when things go wrong. Cockpits are the same idea applied to intelligence.
When intelligence becomes a material, the interface is the mold.
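What might that mold look like in practice? Here’s a minimal sketch of the contract a cockpit-style interface could ask an agent to honor. Every name here is hypothetical, not any real framework’s API; the point is the shape, not the fields.

```typescript
// Hypothetical sketch of the contract a cockpit-style UI might ask an
// agent to honor. All names are illustrative, not a real framework API.

// What the system looked at, declared up front rather than buried in logs.
interface Signal {
  source: string;                            // e.g. "user prompt", "CRM record"
  kind: "provided" | "inferred" | "assumed";
  summary: string;                           // human-readable description
}

// A boundary the agent operates within, visible and adjustable.
interface Constraint {
  description: string;                       // e.g. "no external comms before legal review"
  editable: boolean;                         // can the user tighten or loosen it?
}

// Every outcome ships with its own audit trail.
interface AgentOutcome {
  plan: string;                              // what the agent proposes to do
  signals: Signal[];                         // "what was it paying attention to?"
  constraints: Constraint[];                 // "what boundaries applied?"
  rationale: string[];                       // "why did it choose that?"
}

// The demo-room question, answerable instead of awkward:
const outcome: AgentOutcome = {
  plan: "Draft a three-phase rollout for the Q3 launch",
  signals: [
    { source: "project brief", kind: "provided", summary: "Q3 launch date" },
    { source: "past launches", kind: "inferred", summary: "three phases worked before" },
  ],
  constraints: [
    { description: "no external comms before legal review", editable: false },
  ],
  rationale: ["Three phases match the cadence inferred from past launches"],
};
```

With a shape like this, “Why did it choose that?” stops being a silence and becomes a lookup.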
The “SaaS Is Dead” Chorus—and What It’s Missing
Every few months, someone declares SaaS finished. AI is killing it. Replacing it. Eating it whole.
I don’t buy it. But I do think something is being named—clumsily.
What’s actually under pressure isn’t SaaS as a business model. It’s SaaS as a collection of opaque workflows held together by habit. Long-running products with years of UX debt, brittle permission models, and assumptions that only make sense if a human is clicking every button.
AI doesn’t kill those products. It exposes them.
I’ve seen this firsthand. Last year, we tried layering automation onto a mature B2B platform with a decade of accumulated interaction patterns. The agent technically worked—but it constantly tripped over edge cases created by old UI decisions:
- Hidden defaults no one remembered agreeing to
- Inconsistent terminology across features
- Critical actions buried three layers deep because “that’s how it’s always been”
The result wasn’t speed. It was anxiety.
According to Pendo’s 2025 Product Benchmarks, products with high UX debt see up to 2× the failure rate when introducing AI-driven features, largely due to misaligned mental models and unclear system state. That’s not an AI problem. That’s a design one.
This is where the cockpit idea becomes more than metaphor. Products that survive this transition will be the ones that:
- Surface system state clearly (what’s active, what’s inferred, what’s locked), as sketched after this list
- Treat constraints as first-class UI rather than hidden logic
- Invest in ongoing UX maintenance, not one-time redesigns
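To make the first of those concrete, here’s one way to model it. This is a sketch under assumed names, not any product’s real data model: every piece of state carries its provenance, so the UI can’t render a value without also being able to say how it got there.

```typescript
// Hypothetical sketch (invented names, not a real product's model):
// every piece of state knows where it came from, so the UI can render
// "active", "inferred", and "locked" values differently.

type Provenance =
  | { kind: "set-by-user" }              // actively chosen by a person
  | { kind: "inferred"; basis: string }  // the system guessed; say from what
  | { kind: "locked"; reason: string };  // policy or permissions hold it fixed

interface StateField<T> {
  value: T;
  provenance: Provenance;
}

// A workflow setting the agent filled in on the user's behalf.
const sendTime: StateField<string> = {
  value: "09:00",
  provenance: { kind: "inferred", basis: "past three sends were at 9am" },
};

// A renderer that cannot hide how a value got there.
function label(field: StateField<unknown>): string {
  const p = field.provenance;
  switch (p.kind) {
    case "set-by-user": return "You chose this";
    case "inferred":    return `Suggested (based on: ${p.basis})`;
    case "locked":      return `Locked: ${p.reason}`;
  }
}

console.log(label(sendTime)); // "Suggested (based on: past three sends were at 9am)"
```

The design choice is small but consequential: the provenance travels with the value, so a lazy UI can’t accidentally present a guess as a decision.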
SaaS isn’t dead. But the era of getting away with invisible complexity is ending.
Automation Is Fast. Understanding Is Still Fragile.
Another trend making the rounds: automated customer discovery. Research while you sleep. Fifteen hacks to replace interviews with workflows.
I understand the appeal. I really do. Research is time-consuming. It’s emotionally demanding. It doesn’t always fit neatly into sprint cycles.
But here’s the tension I keep coming back to: automation is excellent at collecting signals—and terrible at knowing which ones matter.
A real example. Earlier in my career, we ran an automated analysis across thousands of support tickets. The data was clear: a particular feature generated the highest volume of complaints. It jumped straight to the top of the backlog.
Then we talked to people.
What we learned in interviews was uncomfortable. The complaints weren’t about the feature itself—they were about the fear of using it incorrectly. People blamed the feature because it was the last visible step, not the root cause.
No dashboard told us that. A human did.
MIT’s CSAIL published a paper in late 2025 showing that teams relying solely on automated insight generation were 27% more likely to misattribute causality in complex user behaviors. The data wasn’t wrong. The interpretation was incomplete.
As designers and researchers, our job isn’t to defend manual work for its own sake. It’s to recognize where judgment can’t be outsourced.
Practical wisdom I’ve learned the hard way:
- Use automation to find patterns, not to finalize conclusions
- Treat synthesized insights as hypotheses, not answers (see the sketch after this list)
- Preserve spaces where someone has to sit with ambiguity
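One lightweight way to encode the second point, sketched here with made-up names and numbers: let pipelines produce only hypotheses, and make promotion to a finding require a named human. The support-ticket story above, replayed through this shape:

```typescript
// Hypothetical sketch (made-up names): pipelines may only produce
// hypotheses; becoming a finding requires a named human reviewer.

interface AutomatedInsight {
  pattern: string;          // what the pipeline detected
  volume: number;           // how loud the signal is
  status: "hypothesis";     // pipelines can never emit "finding"
}

interface Finding {
  pattern: string;
  interpretation: string;   // what it actually means, per a human
  reviewedBy: string;       // someone has to stand behind it
  status: "finding";
}

// Promotion is the only path from signal to conclusion.
function promote(insight: AutomatedInsight, interpretation: string, reviewer: string): Finding {
  return {
    pattern: insight.pattern,
    interpretation,
    reviewedBy: reviewer,
    status: "finding",
  };
}

// The support-ticket story, replayed through this shape:
const raw: AutomatedInsight = {
  pattern: "Feature X drives the highest complaint volume",
  volume: 4200, // illustrative number
  status: "hypothesis",
};

const finding = promote(
  raw,
  "Users fear misusing Feature X; it is the last visible step, not the root cause",
  "research lead",
);

console.log(finding.interpretation);
```

None of this stops the automation from running overnight. It just refuses to let the pipeline have the last word.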
Understanding doesn’t scale linearly. It deepens unevenly.
Evidence, Accessibility, and the People in the System
One thread that keeps resurfacing—often quietly—is accessibility. Not as a checklist, but as a system requirement.
AI interfaces make this unavoidable. When decisions are inferred, when actions are taken on someone’s behalf, the cost of exclusion rises. If you can’t see the evidence, can’t adjust the controls, can’t understand what’s happening—you’re not just disadvantaged, you’re disempowered.
Designing cockpits forces a reckoning here. Evidence has to be perceivable. Controls have to be operable. Constraints have to be understandable.
This isn’t theoretical. The World Health Organization estimates that over 1 billion people live with some form of disability. When we design opaque systems, we’re not just creating bad UX—we’re deciding who gets to participate.
I’ve seen teams treat accessibility as something to “circle back to” after the AI model ships. That’s backwards. Once behavior is automated, retrofitting understanding is far harder than designing it in from the start.
Accessibility isn’t an enhancement to intelligence. It’s how intelligence becomes usable.
What I’m Carrying Forward
Across all these conversations—agents, SaaS, automation, UX debt—I keep landing on the same underlying shift. We’re moving from designing features to designing relationships of control and trust.
That work is slower. More detailed. Less glamorous than demos that magically work. But it’s also the work that lasts.
If you’re building in this space, here’s what I’d gently encourage:
- Design the evidence before the outcome
- Make constraints visible, not implicit
- Treat UX debt as ongoing care, not cleanup
- Protect spaces for human judgment
I think back to that demo room, to the silence after the question that couldn’t be answered. The problem wasn’t that the system made a decision. It was that no one could stand behind it.
Products don’t earn trust by being impressive. They earn it by being understandable. By letting people see the seams, adjust the controls, and feel—genuinely—that they are still part of the loop.
That’s the work I see emerging beneath the noise. And it’s work worth doing carefully.
Alex leads product design with a focus on creating experiences that feel intuitive and human. He's passionate about the craft of design and the details that make products feel right.