Flat Lines, Loud Signals: What Today’s Product Debates Are Really Asking Us to Notice
As AI reshapes how products are built and evaluated, familiar metrics are going flat while user signals grow louder. What do we trust when progress stops looking like progress?
The Moment the Dashboard Stops Making Sense
Last week, I was in a product review where no one spoke for a full thirty seconds.
The dashboard was up. Revenue flat. Activation flat. Usage flat. The kind of slide that usually triggers a reflexive response: we need to do something. But this time, someone broke the silence with a sentence that landed heavier than any metric on the screen:
“Our users keep telling us this is the first tool that actually fits how they work.”
No one argued. Because it was true. Support tickets were thoughtful, not frantic. Demos were turning into long conversations, not rushed walkthroughs. Churn hadn’t spiked. Nothing was wrong—and yet nothing was moving.
I’ve been seeing versions of this moment everywhere lately. In threads about AI “killing” B2B SaaS. In quiet debates about whether metrics still mean what we think they mean. In thoughtful pieces asking how to tell what users actually need when they don’t say it cleanly.
The deeper tension isn’t about AI, or features, or even growth. It’s about what we trust when the usual signals stop behaving the way they used to.
When Progress Stops Looking Like Progress
A lot of current anxiety in product circles starts with a simple observation: the old curves aren’t curving anymore.
In B2B SaaS especially, we’re used to reading progress through a familiar rhythm:
- Ship something meaningful
- See a measurable bump
- Decide what to double down on
But that rhythm is breaking down. Not because teams are worse—but because the environment has changed.
Two data points that keep coming up in my work:
- According to OpenView’s 2024 SaaS benchmarks, median net revenue retention for B2B SaaS has dropped below 100% for the first time in a decade. Not collapsing, just no longer compounding: on net, revenue from existing customers now shrinks slightly year over year instead of growing on its own.
- At the same time, Gartner reports that over 70% of B2B buyers now use AI-assisted tools during evaluation, meaning their expectations are being shaped before they ever touch your product.
Flat metrics, in this context, don’t necessarily mean stagnation. They often mean signal lag.
When tools get easier to try, harder to differentiate, and faster to copy, the value users feel doesn’t always convert immediately into the behaviors our dashboards are tuned to detect.
That doesn’t make metrics irrelevant. It makes them incomplete.
The Risk of Overcorrecting
The teams I worry about most right now aren’t the ones with flat metrics.
They’re the ones who panic in response to them.
I’ve watched capable teams:
- Layer on AI features because “that’s where the market is going”
- Chase short-term activation spikes that erode long-term trust
- Redesign interfaces to look more impressive while becoming less comprehensible
All in service of getting the line to move again.
The uncomfortable truth: not all progress is immediately legible. Especially when users are still figuring out how a product fits into their real work.
The Quiet Gap Between Feedback and Evidence
One of the most honest questions circulating right now is deceptively simple: What do you do when users are happy but the numbers don’t move?
I’ve sat with that question as a product manager and as a consultant. It’s rarely answered by choosing sides.
User feedback isn’t truth. Metrics aren’t truth. They’re both proxies—and each has blind spots.
Here’s a pattern I’ve noticed across multiple B2B products in the last year:
- Early users articulate value in relational language (“It feels like this understands us”)
- Metrics track transactional behavior (logins, clicks, expansions)
- The translation between the two takes longer than our planning cycles allow
One example that’s stuck with me: a workflow tool for operations teams that introduced an AI-assisted planning layer. Usage didn’t spike. In fact, average session time decreased.
At first glance, that looked like disengagement.
But qualitative research showed something else: teams were making fewer manual adjustments. They trusted the output. They were spending less time wrestling with the system.
It took three months before retention reflected that trust. Six months before referrals picked up. Nearly a year before expansion revenue followed.
If the team had optimized for immediate engagement, they would have broken the very thing users valued.
What This Teaches Us About “Real” Need
A lot of recent writing asks how to uncover what users actually need, not just what they request. The missing piece is that needs often surface as reduced effort, not increased activity.
That creates a measurement problem.
When a product genuinely fits:
- People stop thinking about it
- Work feels smoother, not louder
- The absence of friction becomes the value
Our tools are still better at measuring presence than absence.
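If you want to make “measuring absence” concrete, one option is to instrument reduced effort directly. Below is a minimal sketch in Python of the kind of indicator I mean: manual adjustments per completed task, tracked week over week. Everything in it is hypothetical, including the event names and the log shape; the point is the shape of the signal, a ratio that falls as trust grows, not any particular schema.

```python
from collections import Counter

# Hypothetical event log pulled from product analytics.
# "manual_adjustment": the user overrode or reworked the system's output.
# "task_completed":    the user finished a unit of real work.
events = [
    ("2024-W01", "task_completed"), ("2024-W01", "manual_adjustment"),
    ("2024-W01", "manual_adjustment"), ("2024-W02", "task_completed"),
    ("2024-W02", "manual_adjustment"), ("2024-W03", "task_completed"),
    ("2024-W03", "task_completed"),
]

def friction_by_week(events):
    """Manual adjustments per completed task, week over week.

    A falling ratio suggests growing trust in the system's output,
    even when raw usage (sessions, clicks) stays flat or declines.
    """
    adjustments, completions = Counter(), Counter()
    for week, kind in events:
        if kind == "manual_adjustment":
            adjustments[week] += 1
        elif kind == "task_completed":
            completions[week] += 1
    # Only weeks with completed tasks produce a meaningful ratio.
    return {
        week: adjustments[week] / completions[week]
        for week in sorted(completions)
    }

for week, ratio in friction_by_week(events).items():
    print(f"{week}: {ratio:.2f} adjustments per completed task")
# 2024-W01: 2.00, 2024-W02: 1.00, 2024-W03: 0.00
```

A ratio like this won’t replace activation charts, but it gives “less time wrestling with the system” a line of its own to watch.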
Designing for Context, Not Just Capability
Another thread running through current debates is frustration with static interfaces—especially in complex business software.
The critique is fair. Many systems still force users to adapt to rigid structures, even as we talk about personalization and intelligence.
But there’s a deeper design challenge hiding underneath: context isn’t just something you detect. It’s something you earn.
AI promises adaptive experiences, but adaptation without understanding can feel uncanny or even manipulative. I’ve seen products rush into “smart” behavior that users quickly disable because it acts without consent or clarity.
The teams navigating this well tend to do three things differently:
- They make system intent visible. Users can tell why something changed.
- They allow easy reversal. Intelligence that can’t be undone isn’t helpful—it’s controlling.
- They respect moments of uncertainty. Not every interaction needs optimization.
This matters because trust compounds quietly. A 2023 Salesforce study found that 94% of business users are more loyal to products that are transparent about how automation works, even if those products are less feature-rich.
In a market obsessed with capability, legibility is becoming a differentiator.
What Judgment Looks Like When There’s No Obvious Answer
All of these conversations—about AI, flat metrics, user need, static UI—converge on a single skill that’s suddenly in short supply: judgment under ambiguity.
Not decisiveness. Not confidence. Judgment.
Judgment is what allows a team to say:
- “These numbers are telling us something, but not everything.”
- “This feedback is real, even if it doesn’t map cleanly to our goals yet.”
- “We might need to wait—and watch—before acting.”
That’s uncomfortable in environments trained to reward motion.
From experience, here are a few practices that help teams hold that tension without freezing:
- Separate learning goals from performance goals. Not every cycle needs to move the business metric.
- Track leading indicators of trust. Depth of conversations, quality of referrals, reduction in workarounds.
- Name what you’re choosing not to optimize—yet. It keeps the decision intentional.
None of this is clean. That’s the point.
Staying With the Work When the Signals Are Mixed
I don’t think AI is killing B2B SaaS. I think it’s exposing how much of our confidence was borrowed from predictable feedback loops.
When those loops break, what’s left is the harder work: paying attention to people, not just patterns.
The teams that will endure this moment aren’t the loudest or the fastest. They’re the ones willing to sit with flat lines and ask better questions. To resist premature certainty. To design products that feel cared for, not just impressive.
Back in that review meeting, the team didn’t leave with a bold pivot or a flashy roadmap change. They left with a quieter commitment: to keep listening, to refine what was already working, and to give the product time to teach them how it wanted to grow.
That’s not a story dashboards celebrate.
But it’s often how meaningful products are built.
Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.