Just Because We Can Measure It Doesn’t Mean It Matters
We can measure brainwaves, optimize milliseconds, and keep dashboards green. But the signals that truly predict trust, loyalty, and long-term product health are often quieter — and harder to quantify.
A few days ago, I was on a call with a customer who sounded tired.
Not frustrated. Not angry. Just… tired.
“We’re hitting all our targets,” she said. “Adoption is up. Usage is up. But something feels off. People aren’t as excited anymore.”
On paper, her dashboard was beautiful. Activation rate up 12%. Feature engagement up 18% quarter over quarter. Support tickets down.
And yet renewal conversations were getting harder.
That same week, I read about a smart sleep mask broadcasting users’ brainwaves to an open server. I read about machine learning metrics that stay bright green while satisfaction quietly declines. I read about audiophiles who can’t reliably distinguish between copper cable and… mud.
Different stories. Same undercurrent.
We are living in a moment where we can measure almost anything. Transmit anything. Optimize anything.
The harder question is: do we know which signals actually matter?
The Seduction of Visible Signals
As product teams, we are drawn to what we can see.
Green dashboards. Real-time analytics. Heatmaps. Model accuracy scores. API latency. Engagement graphs that curve in the right direction.
They feel objective. Safe. Defensible.
In one widely cited Nielsen Norman Group study, teams that relied primarily on quantitative metrics missed up to 50% of the critical usability issues that only surfaced through qualitative observation. The numbers weren't wrong; they were incomplete.
I’ve seen this firsthand.
At a previous company, we launched an AI-powered recommendation feature. We measured everything: click-through rate, time-on-page, conversion lift. The early data was promising — a 9% increase in engagement with suggested content.
Celebrations all around.
But in customer calls, something subtle kept surfacing:
- “It’s good… but sometimes it feels random.”
- “I don’t totally trust it yet.”
- “I still double-check manually.”
The model’s precision score was above 0.8. The engagement graph was trending upward. And a precision of 0.8 still means roughly one suggestion in five is off target, which is exactly what users experience as “random.”
But trust — the thing that determines whether someone will rely on a system long term — was fragile.
Metrics show you what people did. They don’t always tell you what they believed.
And belief is what shapes loyalty.
When “Green” Hides Drift
One of the articles circulating this week listed 12 ML metrics that can look healthy while real satisfaction quietly erodes.
That dynamic isn’t limited to AI.
In customer success, we see it in what I call “polite usage.”
Users log in. They complete workflows. They technically adopt the feature.
But their energy changes.
They stop bringing new ideas. They stop advocating internally. They stop inviting teammates in.
Nothing dramatic. Just a slow cooling.
According to Bain & Company, a 5% increase in customer retention can increase profits by 25% to 95%. That statistic gets quoted often. What’s less discussed is how subtle the early warning signs of churn are.
They rarely show up as a sudden drop in activity.
More often, they look like:
- Engagement without enthusiasm
- Adoption without advocacy
- Usage without emotional commitment
I once worked with a B2B team whose NPS held steady at 42 for three quarters. “We’re fine,” the leadership team assumed.
But when we segmented responses by tenure, a pattern emerged. New customers were enthusiastic. Customers past the one-year mark were quietly declining in satisfaction.
The average hid the drift.
By the time renewal rates started slipping, the emotional exit had already happened.
The dashboards were green. The relationship was yellow.
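Here’s how easily a blended number can hide a split like that. This is a minimal pandas sketch; every figure below is invented for illustration, not the customer’s actual survey data:

```python
import pandas as pd

# Hypothetical NPS survey responses: tenure cohort plus the 0-10 rating.
# All numbers are invented for illustration.
responses = pd.DataFrame({
    "tenure": ["<1yr"] * 5 + [">1yr"] * 5,
    "score":  [10, 9, 10, 9, 10,   8, 7, 8, 6, 7],
})

def nps(scores: pd.Series) -> int:
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    promoters = (scores >= 9).mean()
    detractors = (scores <= 6).mean()
    return round(100 * (promoters - detractors))

print("Blended NPS:", nps(responses["score"]))          # 40 -- looks healthy
print(responses.groupby("tenure")["score"].apply(nps))  # <1yr: 100, >1yr: -20
```

A blended 40 reads as steady. The cohort view shows new customers propping up a base that is quietly sliding toward detraction.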
The Difference Between Data and Discernment
The sleep mask broadcasting brainwaves struck me not because of the technical failure — though that matters deeply — but because of what it represents.
We now have the ability to capture extraordinarily intimate data: brainwaves, biometric signals, micro-interactions.
But capturing more data does not automatically create more understanding.
In fact, it can create the illusion of it.
As a Customer Success Lead, I sit in rooms where teams debate whether to track:
- Scroll depth to the pixel
- Cursor hover duration
- Sentiment shifts in support tickets
- Micro-conversion pathways within onboarding
All potentially useful.
But here’s the question I now ask more often:
If this number changes, will we know what to do differently?
If the answer is no, we are collecting signal without meaning.
There’s a cognitive bias at play here. Psychologists call it the “streetlight effect” — we look for answers where the light is brightest, not where the truth necessarily lives.
In product, the light is brightest in the dashboard.
Discernment is the quieter skill.
It’s knowing which signals are leading indicators of health and which are just activity.
It’s understanding that:
- A 2% drop in weekly active usage might matter less than a shift in tone during QBRs.
- A perfect model accuracy score might matter less than whether users feel in control.
- A beautifully optimized onboarding funnel might matter less than whether customers integrate the product into their identity and workflow.
Discernment doesn’t reject data.
It asks better questions of it.
Designing for What Actually Endures
Another conversation trending this week was about a company removing LLM-generated code after user criticism.
On the surface, that’s a technical decision.
But underneath, it’s about something more enduring: credibility.
Users don’t just evaluate what works. They evaluate how decisions are made.
Mastodon’s decentralized model is often cited as a technical architecture choice. But its real differentiator is philosophical: ownership and transparency as product values.
In both cases, what’s at stake isn’t performance.
It’s alignment.
In my experience, the products that endure pay attention to three layers of signal:
1. Behavioral Signal (What users do)
Adoption rates. Feature usage. Retention curves. These are necessary — they show traction.
2. Emotional Signal (How users feel)
Tone in support tickets. Energy in calls. The language customers use when describing you internally. This predicts advocacy.
3. Relational Signal (What users risk with you)
Are they consolidating tools around you? Bringing leadership into the conversation? Building internal workflows that depend on your product?
Relational signals are the hardest to quantify. But they’re the strongest predictors of durability.
When a customer restructures a team process around your product, they are investing political and operational capital. That matters more than a spike in weekly active users.
And you won’t see it in a standard dashboard.
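To make the three layers concrete, here is a thought experiment: what an account-health record might look like if it carried all three as first-class fields. Everything below is a hypothetical sketch, not a real schema or a Round Two feature; the field names and thresholds are invented, and the emotional and relational fields would be filled in by a human after a call, not by a tracker:

```python
from dataclasses import dataclass, field

@dataclass
class AccountHealth:
    """Hypothetical account-health record spanning all three signal layers."""

    # 1. Behavioral: what most dashboards already capture automatically.
    weekly_active_users: int
    core_feature_adoption: float   # 0.0-1.0 share of seats on core features

    # 2. Emotional: logged by a human after calls and ticket reviews.
    call_energy: str               # e.g. "enthusiastic", "polite", "flat"
    ticket_tone_notes: list = field(default_factory=list)

    # 3. Relational: what the customer is risking with you.
    leadership_in_conversations: bool = False
    workflows_built_on_product: list = field(default_factory=list)

    def polite_usage(self) -> bool:
        """Behavior looks fine while emotional and relational signals recede."""
        behavior_fine = self.core_feature_adoption >= 0.6   # arbitrary threshold
        energy_cooling = self.call_energy in ("polite", "flat")
        nothing_at_stake = (not self.leadership_in_conversations
                            and not self.workflows_built_on_product)
        return behavior_fine and energy_cooling and nothing_at_stake

# Green behavior, cooling relationship: exactly the account a dashboard misses.
acct = AccountHealth(weekly_active_users=140,
                     core_feature_adoption=0.72,
                     call_energy="polite")
print(acct.polite_usage())  # True
```

The thresholds are arbitrary. The point is that the second and third layers exist as fields at all, so an account can turn yellow before its behavioral numbers move.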
The Discipline of Asking “Why This?”
The data engineering community is celebrating open, community-driven guides. AI tools are making semantic search feel magical. React is helping apps feel instant with optimistic UI.
These are meaningful advances.
But speed, intelligence, and instrumentation are not the same as wisdom.
The discipline I see separating strong teams from overwhelmed ones is simple and uncomfortable:
They repeatedly ask, “Why this signal?”
Not just:
- Can we track it?
- Can we improve it?
But:
- Does this reflect something fundamental about our user’s success?
- Would a change here alter their real-world outcome?
- If this went to zero, would anyone’s day-to-day work meaningfully suffer?
One of our customers recently reduced their tracked KPIs from 27 to 9.
At first, it felt risky.
But those 9 were directly tied to user outcomes that mattered: time to complete core tasks, cross-team collaboration frequency, successful project handoffs.
Within two quarters, their internal decision-making sped up. Teams aligned faster. Conversations became less about defending numbers and more about solving real friction.
Focus didn’t reduce insight.
It sharpened it.
What I’m Noticing Beneath the Noise
Across these very different conversations — AI aesthetics, green metrics, exposed brainwaves, community-driven engineering — I see a shared tension:
We are incredibly good at generating signals.
We are still learning how to choose which ones deserve our attention.
As someone who spends most of her days listening to customers — not just to what they report, but to what they hesitate before saying — I’ve come to believe this:
The future advantage won’t belong to teams who measure the most.
It will belong to teams who interpret with care.
Who understand that:
- More intimate data increases responsibility, not just insight.
- Cleaner dashboards don’t guarantee stronger relationships.
- Performance without trust is fragile.
And who remember that behind every metric is a person making a small decision:
Do I rely on this? Do I recommend this? Do I build my work around this?
Those decisions don’t show up instantly.
They accumulate quietly.
And by the time the dashboard turns red, the emotional story has often been unfolding for months.
If there’s one habit I’d encourage us to build right now, it’s this:
When a metric moves — up or down — pause.
Ask what human behavior changed. Ask what belief shifted. Ask what risk someone just took — or stopped taking — with your product.
Because we can measure brainwaves. We can optimize milliseconds. We can make graphs glow green.
But the work that endures is still deeply human.
And not everything that matters fits neatly on a dashboard.
Jade leads all Customer Success initiatives at Round Two. She is passionate about understanding people’s needs and how product collection tools like Round Two can help generate more useful insights.