
The Direction Gap: Why More Intelligence Isn’t Solving Our Product Confusion

We have more product intelligence than ever—AI agents, synthetic users, dozens of metrics. So why does clarity feel harder? A look at the real gap underneath.

Jordan Taylor
9 min read

Last week, I sat in on two very different product conversations.

In the first, a team was debating which AI agents they should spin up to summarize user feedback, monitor competitors, and draft roadmap proposals. In the second, a founder was venting about how their dashboard had ballooned to 47 metrics—and they still couldn’t answer a simple question from their board: Are we actually winning?

Different companies. Different stages. Same undercurrent.

We have more intelligence than ever—AI agents parsing feedback, dashboards tracking every click, synthetic users simulating behavior. And yet, the conversations I’m hearing feel less certain, not more. We’re surrounded by signals. But clarity? That’s getting harder.

After years working with product teams navigating growth, plateaus, and reinvention, I’ve started to see the pattern more clearly:

We don’t have an information problem. We have a direction problem.

And more intelligence—human or artificial—doesn’t fix that by default.

When Intelligence Multiplies, So Does Avoidance

AI agents for product management are having a moment. And I understand why.

Product work is cognitively heavy. We’re synthesizing:

  • Thousands of support tickets
  • Usage data across segments
  • Competitive shifts
  • Stakeholder demands
  • Market narratives about “what’s next”

If an agent can cluster feedback, detect churn risks, or auto-generate roadmap drafts, that’s not trivial. That’s real leverage.

But here’s what I’ve noticed in practice: teams often reach for agents at the exact moment they’re struggling to make a hard call.

I worked with a B2B SaaS company last year that had plateaued at $8M ARR. They had strong retention (low churn, around 5% annually), decent acquisition, and a backlog of enterprise feature requests. Instead of deciding whether they were becoming an enterprise product or doubling down on mid-market simplicity, they built a sophisticated internal system to analyze every piece of customer feedback.

They could now:

  • Automatically tag themes
  • Quantify feature mentions
  • Track sentiment by segment
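
In spirit, that system amounted to something like this: a toy sketch with made-up feedback records and keyword matching standing in for the real clustering pipeline. The records, themes, and field names here are all hypothetical.

```python
from collections import Counter

# Hypothetical feedback records; the schema is illustrative only.
FEEDBACK = [
    {"segment": "mid-market", "text": "The SSO setup was confusing"},
    {"segment": "enterprise", "text": "We need SSO and audit logs"},
    {"segment": "enterprise", "text": "Audit logs would unblock our security review"},
    {"segment": "mid-market", "text": "Love how simple onboarding is"},
]

# Keyword-to-theme map: a crude stand-in for the clustering an AI agent would do.
THEMES = {"sso": "SSO", "audit": "Audit logs", "onboarding": "Onboarding"}

def tag_themes(text):
    """Return the set of themes whose keywords appear in the text."""
    lower = text.lower()
    return {theme for kw, theme in THEMES.items() if kw in lower}

# Quantify feature mentions overall and by segment.
mentions = Counter()
by_segment = Counter()
for item in FEEDBACK:
    for theme in tag_themes(item["text"]):
        mentions[theme] += 1
        by_segment[(item["segment"], theme)] += 1

print(mentions.most_common())
print(dict(by_segment))
```

Notice what even a much smarter version of this produces: counts and breakdowns. Nothing in the output tells you which segment to bet on.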

What they couldn’t do was answer this question: Which customer do we want to be famous for?

The system produced better summaries. It didn’t produce conviction.

Intelligence scales what you value. If you value clarity, it sharpens it. If you value optionality, it amplifies ambiguity.

The Metric Explosion (and the Direction Vacuum)

The “47 metrics and zero direction” conversation resonated for a reason.

According to Amplitude’s 2023 Product Report, the average product team tracks between 20 and 40 metrics regularly. Larger orgs track significantly more. But when asked to name their single most important metric, far fewer teams can answer clearly—and even fewer align their roadmap decisions to it.

I’ve seen this up close. A growth-stage company I advised had:

  • Activation rate
  • 7-day retention
  • 30-day retention
  • DAU/WAU ratio
  • Feature adoption rates for 12 major features
  • NPS
  • Expansion revenue
  • Churn by cohort

All valuable. All defensible.

In a quarterly planning meeting, the head of product presented a roadmap with twelve “Priority 1” initiatives. Each could be justified by a metric.

But here’s the uncomfortable truth: when every initiative ties to a different metric, you don’t have a strategy. You have coverage.

Metrics are mirrors. Strategy is a choice.

A North Star metric is powerful not because it’s mathematically elegant, but because it forces tradeoffs. If your North Star is weekly active teams, you might sacrifice short-term monetization experiments. If it’s expansion revenue, you may deprioritize edge-case usability fixes for free users.

Without that forcing function, dashboards become negotiation tools. Everyone can find a number to defend their favorite project.

And this is where AI agents get interesting—and dangerous. An agent can surface correlations you didn’t see. It can detect churn patterns faster than your analyst. But it cannot decide which tradeoff matters.

That’s still human work.

Synthetic Users and the Comfort of Simulation

Another thread gaining traction: synthetic users in research.

On paper, the appeal is obvious. Faster feedback loops. Lower costs. No recruiting headaches. Simulate personas at scale and test flows before you ever schedule a session.

As someone who deeply values research, I’m not instinctively opposed. Simulation can be useful—especially in early concept validation.

But I’ve also sat in enough real user interviews to know something uncomfortable: the most important insights often live in what doesn’t scale.

A hesitation.
A contradiction.
A story that doesn’t fit your persona neatly.

Baymard Institute’s large-scale usability research has consistently shown that even well-established ecommerce sites have a 70% average cart abandonment rate, often driven by friction points teams assumed were minor. Many of those friction points were invisible in quantitative dashboards until qualitative work surfaced them.

Synthetic users can simulate behavior based on training data. But they don’t surprise you in the same way a human does when they say, “I didn’t trust that screen, so I opened a second tab and Googled you.”

What I worry about isn’t the tool. It’s the temptation.

When direction is fuzzy, simulation feels safer than confrontation. It’s easier to test five variations in a model than to sit with the possibility that your core value proposition is misaligned.

Again, intelligence amplifies what’s already there. If you’re clear on your strategy, synthetic research can accelerate learning. If you’re not, it can help you optimize the wrong thing faster.

Developer Productivity and the Measurement Trap

I’ve also been following debates about how to design developer productivity experiments. How do we measure productivity fairly? Commits? PR cycle time? Story points? DORA metrics?

Google’s 2023 DevOps Research and Assessment (DORA) report emphasizes four key metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. These are useful proxies. But even the DORA team is clear: context matters.
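
To make the four metrics concrete, here is a minimal sketch of how a team might compute them from a deployment log. The log format and field names are hypothetical; real pipelines would pull this data from CI/CD and incident tooling.

```python
from datetime import datetime

# Hypothetical deployment log for one service; fields are illustrative,
# not taken from any real CI/CD tool.
deploys = [
    {"at": datetime(2024, 3, 1), "lead_time_hours": 20, "failed": False},
    {"at": datetime(2024, 3, 3), "lead_time_hours": 48, "failed": True, "restore_hours": 2},
    {"at": datetime(2024, 3, 5), "lead_time_hours": 12, "failed": False},
    {"at": datetime(2024, 3, 8), "lead_time_hours": 30, "failed": False},
]

# Deployment frequency: deploys per day over the observed window.
window_days = max((deploys[-1]["at"] - deploys[0]["at"]).days, 1)
deployment_frequency = len(deploys) / window_days

# Lead time for changes: average hours from commit to production.
lead_time = sum(d["lead_time_hours"] for d in deploys) / len(deploys)

# Change failure rate: share of deploys that caused a failure.
failures = [d for d in deploys if d["failed"]]
change_failure_rate = len(failures) / len(deploys)

# Time to restore service: average hours to recover from failed deploys.
time_to_restore = sum(d["restore_hours"] for d in failures) / len(failures)

print(f"freq/day={deployment_frequency:.2f}, lead={lead_time:.1f}h, "
      f"CFR={change_failure_rate:.0%}, restore={time_to_restore:.1f}h")
```

The arithmetic is trivial; the judgment is not. Whether a 25% change failure rate is alarming or acceptable depends entirely on what the product strategy says you are optimizing for.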

Here’s the pattern I’ve seen:

  1. Leadership wants more output.
  2. They add metrics to measure productivity.
  3. Engineers optimize to those metrics.
  4. The system adapts—but not necessarily in the way you intended.

Suddenly, smaller PRs are favored over meaningful architectural work. Risk avoidance creeps in. Teams ship more, but think less boldly.

The issue isn’t measurement. It’s alignment.

If the product strategy is clear—say, “We are becoming the most reliable tool in our category”—then productivity metrics can reinforce that. Faster recovery time and lower change failure rates matter deeply.

If strategy is unclear, productivity metrics drift into performative output.

The same pattern shows up across product work: when direction is strong, metrics and agents become accelerants. When direction is weak, they become distractions.

The Direction Stack: A Practical Framework

Over time, I’ve started using a simple mental model with teams. I call it the Direction Stack. Before adding more intelligence—agents, dashboards, simulations—you check the stack.

1. Identity: Who Are We For, Really?

Not in a slide deck. Not in a persona doc written two years ago.

  • If we had to fire 30% of our customers tomorrow, which ones would we protect?
  • Which user would we be proud to design around at the expense of others?

If this layer is fuzzy, everything above it wobbles.

2. Value: What Change Do We Create?

Be precise.

Are we saving time? Increasing revenue? Reducing anxiety? Enabling status?

If your product disappeared tomorrow, what specific pain would intensify? If you can’t answer that clearly, no metric will rescue you.

3. Leverage: What Drives That Value Most?

This is where your North Star lives.

It should reflect the core behavior that delivers value—for both user and business. Airbnb’s early focus on nights booked worked because it aligned marketplace health with user success. Slack’s early focus on messages sent reflected engagement and team integration.

Choose the lever that best represents the value you create.

4. Acceleration: How Do We Move Faster?

Only here do agents, dashboards, and synthetic users come into play.

Now ask:

  • Can an agent summarize feedback tied to our core user?
  • Can we simulate flows around our key value moment?
  • Can productivity metrics reinforce our strategic constraint?

When acceleration sits on top of identity, value, and leverage, it compounds clarity. When it sits alone, it multiplies noise.

The Human Weight of Direction

Here’s the part we don’t talk about enough: direction is emotionally expensive.

Choosing a North Star means disappointing someone.
Narrowing your ICP means walking away from revenue.
Prioritizing reliability over feature velocity means telling sales “not yet.”

It’s easier to say, “Let’s gather more data.”
Or, “Let’s have the agent analyze another month of feedback.”

I’ve been in those rooms. I’ve delayed hard calls myself.

But I’ve also seen what happens when a team finally commits.

At a Series B company I worked with, we spent three tense weeks debating whether we were a workflow tool for operators or a reporting tool for executives. The data supported both stories.

Eventually, the CEO said, quietly: “If we’re honest, the operators are the ones who fight to keep us. Executives just sign the checks.”

They chose operators.

Within two quarters:

  • The roadmap narrowed dramatically
  • Activation improved by 18%
  • Sales cycles shortened because positioning became clearer

The metrics didn’t magically improve because we measured better. They improved because we decided who we were for.

The AI tools they later added? They worked beautifully—because the direction was already set.

Intelligence Is a Multiplier, Not a Compass

I’m not skeptical of AI agents. I’m not anti-metrics. I’m not nostalgic for a simpler era.

We need better tools. The complexity of modern products demands them.

But I keep coming back to this:

A compass doesn’t get replaced by a faster engine.

Direction is still a leadership act. It’s a team commitment. It’s a strategic constraint you willingly accept.

Agents can summarize the map. Dashboards can highlight terrain. Synthetic users can simulate paths.

But someone has to choose where you’re going.

And that choice—clear, constrained, sometimes uncomfortable—is still the most human work in product.

The real opportunity in this moment isn’t to build smarter systems. It’s to build braver clarity.

Because when direction is strong, intelligence becomes power.

When it’s weak, intelligence becomes camouflage.

And our users can feel the difference.

Jordan Taylor
Product Strategy Consultant

Jordan helps product teams navigate complexity and make better decisions. She's fascinated by how teams balance user needs, business goals, and technical constraints.

TOPICS

Product Strategy · Product Management · AI in Product · Metrics · User Research

