What Our Products Reveal About the Way We Work

Our interfaces don’t just reflect user needs—they reflect our org charts, incentives, and metrics. A look at what customer conversations reveal about how we really build.

Jade Liang
9 min read

Last week, I was on a call with a long-time customer who sounded more tired than frustrated.

They weren’t angry about a missing feature. They weren’t asking for a discount. They were trying to explain why their team had quietly stopped using one part of our platform.

“It just feels like it wasn’t built for how we actually work,” they said.

I’ve heard versions of that sentence in construction tech conversations, in AI tooling debates, in early-stage SaaS communities where founders are wondering why they have zero users after months of building. And I’ve been thinking about what ties all of these threads together.

We talk a lot about user behavior analytics, guardrails, roadmaps, metrics trees. We debate whether multimodal AI will replace screens and whether UX will define retention in 2026. But beneath all of that is something more fundamental:

Our products are reflections of how our teams think, decide, and measure success.

And users can feel it immediately.

As someone who sits between product teams and customers every day, I’ve learned this the hard way. When something feels off in the experience, it’s rarely just a design issue. It’s usually an organizational one.

Let me explain.

The Org Chart Is Always in the Interface

One of the most shared ideas this week was that "the org chart shows up in your interface." I’ve never seen that proven wrong.

In customer success, you see this play out in subtle ways:

  • A workflow that forces users to switch between three different modules because three different teams own them.
  • A reporting dashboard that answers the KPIs the business tracks internally—but not the questions customers actually need answered.
  • An AI feature with strict guardrails that reflect legal anxiety more than real-world usage patterns.

Conway’s Law, the observation that organizations design systems that mirror their own communication structures, has been around for decades, and the data backs it up. Studies from MIT’s Center for Information Systems Research have shown that companies with highly fragmented internal structures are significantly more likely to ship fragmented user experiences. The correlation isn’t abstract. It’s visible.

I once worked with a B2B customer in construction tech—a field currently having its own UX reckoning. Their site managers were technically “adopting” the software. Logins looked fine. But the field teams kept reverting to spreadsheets and WhatsApp threads.

When we dug in, the issue wasn’t missing features. It was flow.

The product mirrored the vendor’s internal structure:

  • Procurement owned onboarding.
  • Operations owned task management.
  • Finance owned reporting.

So the interface segmented everything the same way.

But on-site? A foreman doesn’t think in departments. They think in problems that need solving before 4 PM.

The friction wasn’t visual. It was cognitive. The software required users to adopt the company’s internal logic.

And most people won’t do that for long.

The Metrics We Choose Shape the Experience We Ship

Another conversation making the rounds this week: stop tracking 100 metrics. Build a metrics tree.

I couldn’t agree more—but I’d add something.

The structure of your metrics is the structure of your product’s priorities.

In customer success, we live in the tension between leading indicators and lived experience.

A product team might celebrate:

  • Increased feature adoption (+18% quarter-over-quarter)
  • Higher session time
  • More AI prompt usage per account

Meanwhile, my inbox might be filling with quieter signals:

  • “This takes longer than it used to.”
  • “We’re not sure when to use this.”
  • “We feel like we’re clicking more but getting less.”

Individually, those emails are anecdotes. Collectively, they’re a pattern.

Gartner reported that in 2023, 80% of B2B buyers said their last purchase involved “high or very high levels of complexity.” Complexity is rarely about features alone. It’s about how many mental translations a user must perform to get value.

If your metrics tree prioritizes depth of engagement over clarity of outcome, your interface will drift toward more actions, more prompts, more surface area.
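To make the idea concrete, here’s a minimal sketch of a metrics tree in code. Everything here is illustrative: the metric names are invented, and real trees usually live in a BI tool rather than a script. The structural point is what matters: every leaf metric should be reachable from a customer outcome at the root, and anything that isn’t is a candidate for deletion.

```python
# Illustrative sketch of a metrics tree. All metric names are invented
# for this example; they are not from any real dashboard.

from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    children: list["Metric"] = field(default_factory=list)

    def leaves(self) -> list[str]:
        """Return the leaf metrics that ultimately roll up to this node."""
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

# The root is a customer outcome, not an engagement stat.
tree = Metric("customer_completes_weekly_report", children=[
    Metric("time_to_first_report", children=[
        Metric("onboarding_completion_rate"),
    ]),
    Metric("report_rework_rate", children=[
        Metric("support_tickets_per_report"),
    ]),
])

# Any tracked metric not reachable from the root has no claimed
# connection to a customer outcome.
print(tree.leaves())
```

Note what this structure makes impossible: a "depth of engagement" metric can’t float free. It either rolls up to an outcome or it doesn’t belong in the tree.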

And here’s the uncomfortable part: sometimes the roadmap looks healthy precisely because the metrics are misaligned.

Which brings me to the roadmap conversation.

When the Roadmap Feels Certain but the Experience Feels Off

“There’s something comforting about a detailed roadmap.” That line has been echoing in my head.

Roadmaps reduce internal anxiety. They create the illusion of control. They reassure investors. They align departments.

But customers don’t experience your roadmap. They experience your product.

In the past year, I’ve sat with at least five customers who told me some version of this:

“We see you shipping a lot. We’re just not sure it’s helping us.”

That sentence is brutal because it’s rarely about effort. It’s about orientation.

When you analyze user behavior on Android, or instrument screen-time analytics, or deploy AI guardrails to prevent jailbreaks, you’re making decisions about what matters.

But if those decisions are made without tight feedback loops to actual workflows, you risk building protective systems and performance dashboards that optimize for the wrong layer.

I’ve seen teams spend months refining AI safety guardrails—testing edge cases, patching prompt injection vectors—while users struggled with something much simpler:

They didn’t know when to trust the output.

No jailbreak required. Just ambiguity.

In one case, adding a simple confidence indicator and usage guidance reduced support tickets about "incorrect AI outputs" by 27% in two months. The model didn’t change. The clarity did.

That insight didn’t come from telemetry alone. It came from listening to customer calls and noticing the language they used: “I don’t know if I should rely on this.”
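For illustration, the kind of confidence indicator described above can be as simple as mapping a model score to explicit guidance. The thresholds and wording below are invented assumptions, not the actual values from that engagement:

```python
# Hypothetical confidence indicator for AI output.
# Thresholds and labels here are illustrative assumptions, not the
# real values from the case described in the text.

def confidence_label(score: float) -> str:
    """Map a model confidence score in [0, 1] to user-facing guidance."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= 0.9:
        return "High confidence: safe to use with a quick skim"
    if score >= 0.6:
        return "Medium confidence: review before relying on it"
    return "Low confidence: verify against the source data"

print(confidence_label(0.95))
print(confidence_label(0.40))
```

The design choice worth noticing: the labels answer the user’s actual question ("should I rely on this?") rather than exposing a raw probability.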

The roadmap didn’t originally include that fix. Feedback reshaped it.

Zero Users Isn’t a Marketing Problem (Usually)

I’ve also been following the wave of posts from founders who’ve spent months building only to launch to silence.

I feel for them deeply.

In customer success, we sometimes inherit those situations after launch—when growth stalls and retention becomes the emergency.

There’s a pattern I’ve noticed:

Many of these products are technically impressive. Full-stack builds. Clean UI. AI integrations. Beautiful landing pages optimized for SEO.

But when we speak to early adopters, the story shifts.

They say things like:

  • “It’s interesting, but I’m not sure when I’d use it.”
  • “It solves part of my problem, not the whole thing.”
  • “It feels like it was built by someone who understands the tech, not my day.”

That last one stings.

CB Insights consistently reports that around 35% of startups fail because there is no market need. But “no need” is often shorthand for something more nuanced: the product didn’t map cleanly onto a real workflow.

When teams build primarily from internal logic—what’s possible, what’s elegant, what’s exciting—they risk missing the lived sequence of tasks, constraints, and trade-offs their users navigate.

Behavior analytics can tell you where users drop off.

Conversations tell you why the task felt misaligned in the first place.

And that difference matters.
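The "where" half is cheap to compute. A few lines over funnel counts show exactly which step loses people (the stages and numbers below are made up for the example), but nothing in the output explains the misalignment behind the drop:

```python
# Illustrative sketch: step-to-step drop-off from funnel counts.
# The stages and numbers are invented for this example.

funnel = [
    ("opened_app", 1000),
    ("started_task", 620),
    ("completed_task", 180),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

The 71% fall-off between starting and completing a task tells you where to go ask questions. The conversations tell you what you’ll find there.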

What I’ve Learned Sitting in the Middle

As a Customer Success Lead, my job is often described as “driving adoption” or “reducing churn.” But in reality, it’s translation work.

I translate:

  • What customers are trying to accomplish → into product insights.
  • What product is shipping → into practical workflows.
  • What metrics show → into stories that engineers and designers can act on.

Over time, I’ve come to believe a few things very deeply:

1. Users experience your internal trade-offs.

If legal fear outweighs usability, the product feels restrictive.

If growth pressure outweighs clarity, the product feels noisy.

If engineering elegance outweighs workflow fit, the product feels impressive but distant.

2. Analytics are maps, not terrain.

Screen time, click paths, retention curves—these are essential. But they’re abstractions.

The terrain is the real human trying to get through their day.

When analytics and conversation disagree, I’ve learned to pause.

That pause often reveals the structural issue.

3. The fastest way to improve retention is to improve alignment.

Not more features. Not louder marketing. Not tighter guardrails alone.

Alignment between:

  • What the user is trying to accomplish
  • How your organization thinks about the problem
  • What your metrics reward

When those three line up, adoption feels natural. When they don’t, it feels like pushing uphill.

And users will only push uphill for so long.

Designing With Organizational Self-Awareness

So what do we do with this insight?

The answer isn’t to flatten every org chart or abandon roadmaps or stop tracking metrics.

It’s to build organizational self-awareness into the product process.

A few practices I’ve seen make a real difference:

  1. Bring Customer Success into roadmap debates early. Not for feature requests—but for pattern recognition.
  2. Map workflows before mapping screens. Ask: what does a user’s day actually look like from 9 AM to 5 PM?
  3. Audit your metrics tree annually. Which metrics exist because they’re easy to measure? Which exist because they represent real user progress?
  4. Test guardrails against normal behavior, not just malicious edge cases. Most friction comes from everyday use.
  5. Listen for emotional language in feedback. Words like “confusing,” “heavy,” “uncertain” are signals that structure, not styling, needs attention.

None of these are revolutionary. But they require humility.

They require admitting that the product isn’t just a solution to a user problem.

It’s also a mirror.

The Deeper Question

When I step back from this week’s conversations—about UX being the real differentiator, about metrics discipline, about safety systems that actually reduce harm—I see a shared undercurrent.

We’re wrestling with the consequences of building at scale.

The tools are powerful. The velocity is high. The analytics are precise.

But precision doesn’t equal alignment.

And alignment doesn’t happen by accident.

It happens when we’re willing to look at our own structures—our incentives, our org charts, our dashboards—and ask whether they’re quietly shaping an experience that makes sense only from the inside.

Customers don’t see our internal debates.

They see the interface. They feel the friction. They decide whether to stay.

If we’re honest, the product always tells on us.

The real question is whether we’re listening closely enough to what it’s revealing—not just about our users, but about ourselves.

Jade Liang
Customer Success Lead

Jade leads Customer Success initiatives at Round Two. She is passionate about understanding user needs and how feedback collection tools like Round Two can help generate more useful insights.

TOPICS

User Research · Product Design · Customer Experience · Product Strategy · UX
