
North Star Metric Design: The 3-Game Framework

Growth Strategy · Akif Kartalci · 13 min read

Tags: north star metric, SaaS metrics, growth strategy, product-led growth, startup metrics, KPIs, data-driven growth

I’ve reviewed over 60 startup dashboards in the past three years. Want to know the most common pattern I see?

A messy spreadsheet with 30+ KPIs, half of them trending green, the company still struggling to grow. Everyone’s “tracking everything” and understanding nothing.

Then there’s the opposite extreme: a founder who proudly announces their North Star Metric is “revenue.” That’s like saying your North Star in a road trip is “arriving.” It tells you nothing about whether you’re on the right road, driving the right speed, or even heading in the right direction.

The truth is, most startups don’t have a metric problem. They have a clarity problem. They haven’t figured out what game they’re playing, so they can’t figure out what to measure.

Today, I’m going to share the framework we use at Momentum Nexus to help our clients design North Star Metrics that actually work. We call it the 3-Game Framework, and it’s transformed how a dozen companies think about their growth engines.

Why North Star Metrics Fail

Before we get into the framework, let’s talk about why the typical approach to North Star Metrics breaks down.

The standard advice goes like this: “Pick a single metric that best captures the core value your product delivers to customers.” Simple, right?

Except it’s not. Here’s what happens in practice:

Problem 1: The metric is too abstract. “Engagement” means nothing actionable. Your designer interprets it as time-on-page, your PM thinks it’s feature adoption, your CEO believes it’s DAU. Everyone’s optimizing for a different thing under the same label.

Problem 2: The metric is too narrow. “Weekly active users” ignores whether those users are actually getting value. You can inflate WAU with notification spam and dark patterns. The number goes up, retention goes down, and nobody notices until it’s too late.

Problem 3: The metric doesn’t connect to revenue. This is the one that kills startups. You can have beautiful product metrics that look incredible on a dashboard and have zero correlation with whether customers pay, expand, or renew. I’ve seen companies with “great engagement” churn at 8% monthly.

Problem 4: The metric doesn’t evolve. What matters at $0 ARR is fundamentally different from what matters at $1M or $10M. A static North Star leads to stale strategy.

The 3-Game Framework solves all four of these problems by starting with a simple question: What game is your business playing?

The 3-Game Framework

Every software business, at its core, is playing one of three games. The game you’re playing determines what value means to your customers, which in turn determines what you should measure.

Game 1: The Attention Game

You’re playing the Attention Game if your business monetizes user attention.

This includes ad-supported products, content platforms, social networks, media companies, and any business where the primary revenue model depends on how much time users spend in your product.

Examples: YouTube, Spotify (free tier), Medium, most news apps, social media tools.

The core value exchange: You provide content/entertainment/information, users give you their time, and you monetize that time through ads, sponsorships, or data.

North Star direction: Your metric should capture the quality and quantity of attention your product earns.

Common North Star Metrics for Attention Game companies:

  • Total time spent per user per week (not daily - daily is too volatile)
  • Content consumption sessions per user (how often they come back)
  • Share of voice/share of feed (for social platforms)

The key nuance: raw time spent can be gamed. A better approach is to measure chosen time - time users voluntarily spend, minus any dark-pattern-driven engagement. Netflix famously tracks “viewing hours” but weights for completion rate, because someone who finishes a series is getting more value than someone who clicks through 20 titles and leaves.

How to validate your Attention Game NSM: Correlate it with ad revenue per user and retention. If your NSM goes up but ad revenue per user stays flat, you’re measuring the wrong kind of attention.
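That validation step can be run as a simple correlation check. The sketch below is illustrative, not a prescribed implementation: the weekly series, metric names, and the 0.6 threshold are all assumptions you would tune to your own data.

```python
# Sketch: validating an Attention Game NSM by correlating weekly engaged
# time per user with ad revenue per user. All numbers are invented.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient; no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Weekly engaged minutes per user and ad revenue per user ($), by week.
engaged_minutes = [42, 48, 51, 55, 61, 66]
ad_revenue = [0.31, 0.35, 0.36, 0.40, 0.44, 0.47]

r = pearson(engaged_minutes, ad_revenue)
if r < 0.6:  # illustrative cutoff: a weak link means the wrong kind of attention
    print(f"NSM poorly predicts ad revenue (r={r:.2f}) - revisit the metric")
else:
    print(f"NSM tracks ad revenue (r={r:.2f})")
```

If the correlation is weak, the fix is usually a tighter quality gate on what counts as engagement, not a new data source.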

Game 2: The Transaction Game

You’re playing the Transaction Game if your business makes money each time users complete a transaction.

This includes marketplaces, e-commerce platforms, fintech companies, and any business where revenue is directly tied to discrete user actions.

Examples: Stripe, Shopify, Airbnb, any B2B marketplace, payment processors.

The core value exchange: You enable transactions that would be harder, slower, or impossible without your platform. Each completed transaction = direct value delivered.

North Star direction: Your metric should capture the volume and velocity of valuable transactions.

Common North Star Metrics for Transaction Game companies:

  • Gross transaction volume (GTV) per active user
  • Transactions completed per week
  • Transaction success rate × volume (captures both quality and quantity)

The key nuance: total transaction volume alone can be misleading. A marketplace processing $10M/month in transactions sounds great until you realize it’s 5 whale accounts and the other 10,000 users did nothing. Per-user transaction metrics are almost always more revealing than aggregate ones.

Stripe’s famous metric - total payment volume processed - works because their business model creates near-perfect alignment between customer success and Stripe’s revenue. When a Stripe customer processes more payments, they’re growing, and Stripe earns more. That’s the gold standard for a Transaction Game NSM.

How to validate your Transaction Game NSM: Run the “remove test.” If your NSM doubled overnight, would your revenue roughly double too? If not, there’s a disconnect. For transaction businesses, this correlation should be nearly linear.
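The whale-account problem from the nuance above is easy to surface with summary statistics. A minimal sketch, with entirely invented numbers:

```python
# Aggregate GTV can look healthy while most users transact nothing.
# 5 whale accounts carry the whole marketplace in this made-up example.

monthly_gtv_by_user = [2_000_000] * 5 + [0] * 9_995  # 10,000 users total

total_gtv = sum(monthly_gtv_by_user)
mean_gtv = total_gtv / len(monthly_gtv_by_user)
median_gtv = sorted(monthly_gtv_by_user)[len(monthly_gtv_by_user) // 2]

print(f"Total GTV:   ${total_gtv:,}")      # looks great on a dashboard
print(f"Mean/user:   ${mean_gtv:,.0f}")    # still misleadingly healthy
print(f"Median/user: ${median_gtv:,}")     # reveals the imbalance
```

The median (or a per-user distribution) is what tells you whether the marketplace is broad or fragile.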

Game 3: The Productivity Game

You’re playing the Productivity Game if your business helps users accomplish tasks more efficiently.

This is where most B2B SaaS companies live. Project management tools, CRMs, analytics platforms, design tools, developer tools - anything where the value proposition is “do X faster/better/cheaper.”

Examples: Notion, Figma, HubSpot, Slack, most vertical SaaS.

The core value exchange: You save users time, reduce errors, or unlock capabilities they didn’t have before. They pay you a subscription because the ongoing value exceeds the ongoing cost.

North Star direction: Your metric should capture the breadth and depth of productive work happening on your platform.

Common North Star Metrics for Productivity Game companies:

  • Tasks/workflows completed per active user per week
  • Collaborative actions per team per week (for team-based tools)
  • Core jobs-to-be-done completion rate (for focused tools)

The key nuance: for Productivity Game companies, more time in app is often a bad sign. If your project management tool requires 3 hours a day to operate, you’re failing at productivity. The ideal is maximum output per unit of time spent.

Slack’s early North Star was “messages sent,” but they evolved it to “messages sent within 2,000-message teams” - because they discovered that teams under a certain engagement threshold didn’t retain, and the specific threshold was about 2,000 messages total. That’s a brilliant example of refining a Productivity Game metric to capture actual value delivery.
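The kind of threshold analysis behind a finding like Slack's can be sketched as bucketing teams by cumulative engagement and checking retention at each level. The data, field names, and thresholds below are made up for illustration:

```python
# Bucket teams by total messages and look for the engagement level
# where retention jumps. Sample data is invented.

teams = [
    # (total_messages, retained_after_90_days)
    (150, False), (400, False), (900, False), (1_500, False),
    (1_800, True), (2_200, True), (3_000, True), (5_000, True),
]

def retention_above(threshold, teams):
    """Retention rate among teams at or above a message threshold."""
    cohort = [retained for msgs, retained in teams if msgs >= threshold]
    return sum(cohort) / len(cohort) if cohort else 0.0

for threshold in (500, 1_000, 2_000, 3_000):
    rate = retention_above(threshold, teams)
    print(f">= {threshold:>5} messages: {rate:.0%} retained")
```

Once retention saturates past a threshold, that threshold becomes the quality gate in the NSM definition.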

How to validate your Productivity Game NSM: Survey your power users. Ask: “What would break if you stopped using our product?” Map their answers to your NSM. If the things they’d miss most aren’t reflected in what you measure, you have a gap.

Identifying Your Game

Most companies play one primary game with elements of the others. The key is to identify which game accounts for 70%+ of your value delivery.

Here’s a quick diagnostic:

Ask yourself these three questions:

  1. If users spent 2x more time in your product, would that be good or bad?

    • Good → Likely Attention Game
    • Bad → Likely Productivity Game
    • Depends on what they’re doing → Likely Transaction Game
  2. How does your revenue scale?

    • With eyeballs/impressions → Attention Game
    • With completed actions/purchases → Transaction Game
    • With seats/usage tiers → Productivity Game
  3. What does your best customer look like?

    • Spends hours daily in your product → Attention Game
    • Processes high volume through your platform → Transaction Game
    • Gets more done in less time using your tool → Productivity Game

Important: Hybrid games exist. Shopify plays both the Transaction Game (payment processing) and the Productivity Game (store management tools). In these cases, pick the game that represents your primary growth lever and design your NSM around that.

From Game to Metric: The Design Process

Once you know your game, here’s the 4-step process we use to design the actual metric:

Step 1: Map the Value Moment

A value moment is the instant a user receives the core value your product promises.

For the Attention Game, it’s the moment of genuine engagement - a user laughing at a video, learning something from an article, getting inspired by a post.

For the Transaction Game, it’s the completed transaction - a payment processed, a booking confirmed, a deal closed.

For the Productivity Game, it’s the job done - a report generated, a design exported, a workflow automated.

Exercise: List your top 5 value moments. Be specific. “User creates a report” is too vague. “User creates a client-facing report with data from 3+ integrations and exports it as PDF” is a value moment.

Step 2: Find the Leading Indicator

Your North Star Metric shouldn’t be a lagging indicator like revenue or churn. It should be a leading indicator - something that predicts future revenue and retention.

The test: If this metric moved today, would revenue follow within 30-90 days?

We’ve found that the best NSMs sit about 30-60 days ahead of revenue. Close enough to be actionable, far enough ahead to be predictive.

Here’s how that looks by game:

Game | Lagging (avoid) | Leading (target)
--- | --- | ---
Attention | Ad revenue | Weekly engaged time per user
Transaction | GMV | Transactions per active user per week
Productivity | MRR | Core workflows completed per team per week
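The 30-90 day test is checkable empirically: shift the NSM series forward by different lags and see which lag best predicts revenue. The series below are constructed so that revenue echoes the NSM four weeks later; everything here is illustrative.

```python
# Sketch of a lagged leading-indicator check. Revenue in this made-up
# example is the NSM, shifted 4 weeks and scaled.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

nsm = [10, 12, 11, 14, 16, 15, 18, 20, 19, 22, 24, 23]
revenue = [50, 50, 50, 50] + [5 * x for x in nsm[:8]]  # 4-week echo

# Try lags of 0-6 weeks; the best lag is where correlation peaks.
best_lag = max(
    range(0, 7),
    key=lambda lag: pearson(nsm[: len(nsm) - lag], revenue[lag:]),
)
print(f"NSM leads revenue by about {best_lag} week(s)")
```

A peak inside the 4-12 week window suggests the metric is in the "actionable but predictive" zone the article describes; a peak at lag zero means you are effectively measuring revenue itself.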

Step 3: Add the Quality Gate

Raw volume metrics can be gamed. The quality gate prevents optimization at the expense of real value.

Attention Game quality gate: Engagement must be voluntary. Filter out push-notification-driven sessions under 10 seconds. Measure “chosen engagement” - sessions initiated by the user that last more than a meaningful threshold.

Transaction Game quality gate: Transactions must be completed successfully. A failed payment that gets retried three times isn’t three transactions - it’s one frustrating experience. Measure successful transactions with a satisfaction or completion quality score.

Productivity Game quality gate: Work must be valuable. A user who creates 50 empty tasks in a project management tool isn’t being productive. Add a “meaningful work” filter - tasks with descriptions, due dates, or assignees, for instance.
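The three quality gates can be expressed as simple filters over event records. Field names like `source` and `duration_s` are assumptions for the sketch, not a real schema:

```python
# The three quality gates above as filters over event dicts (sample data).

attention_sessions = [
    {"source": "user", "duration_s": 240},
    {"source": "push", "duration_s": 6},   # notification-driven, too short
    {"source": "user", "duration_s": 95},
]

transactions = [
    {"status": "succeeded", "amount": 120.0},
    {"status": "failed", "amount": 120.0},  # retried failure: not value
    {"status": "succeeded", "amount": 80.0},
]

tasks = [
    {"title": "Ship v2", "description": "Cut release", "assignee": "ana"},
    {"title": "todo", "description": "", "assignee": None},  # empty shell
]

# Attention gate: user-initiated sessions above a minimum duration.
chosen_sessions = [
    s for s in attention_sessions
    if s["source"] == "user" and s["duration_s"] >= 10
]
# Transaction gate: only successfully completed transactions.
successful_txns = [t for t in transactions if t["status"] == "succeeded"]
# Productivity gate: tasks with some sign of meaningful work.
meaningful_tasks = [t for t in tasks if t["description"] or t["assignee"]]

print(len(chosen_sessions), len(successful_txns), len(meaningful_tasks))
```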

Step 4: Set the Cadence

How often you measure matters. Too frequent and you’re chasing noise. Too infrequent and you’re flying blind.

Our recommendations:

  • Attention Game: Daily monitoring, weekly reporting, monthly strategy reviews
  • Transaction Game: Real-time monitoring, daily reporting, weekly strategy reviews
  • Productivity Game: Weekly monitoring, bi-weekly reporting, monthly strategy reviews

The cadence should match the natural rhythm of your users. If your customers use your product daily, measure weekly. If they use it weekly, measure monthly. Always one cadence step slower than the usage frequency, so that normal day-to-day variation washes out of the trend.

Real-World Application: Three Case Studies

Case Study 1: B2B Analytics Platform (Productivity Game)

A client came to us with “Monthly Active Users” as their North Star. They had 8,000 MAU and were celebrating growth. But churn was 6% monthly and NRR was 85%.

We identified their game: Productivity. Users came to generate reports and extract insights. Time in app was irrelevant - a user who got their answer in 2 minutes was happier than one who spent 45 minutes digging.

New NSM: “Weekly reports generated with data from 2+ sources per active team.”

Why this worked:

  • “Weekly” matched their usage cadence
  • “Reports generated” captured the core value moment
  • “With data from 2+ sources” was the quality gate (single-source reports indicated shallow usage)
  • “Per active team” made it about collaboration, which correlated with retention

Within 6 months, teams scoring above the NSM benchmark retained at 97%. Teams below it churned at 12% monthly. The metric gave them a clear predictor and a clear intervention point.
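A metric like this one is computable from a flat event log. The sketch below uses invented field names and numbers, not the client's actual schema:

```python
# Computing "weekly reports generated with data from 2+ sources per
# active team" from a flat event log (sample data).

report_events = [
    # (team_id, sources_used_in_report)
    ("team_a", 3), ("team_a", 1), ("team_a", 2),
    ("team_b", 1), ("team_b", 1),
    ("team_c", 4),
]

# Quality gate: only reports drawing on 2+ sources count.
qualified = [(team, n) for team, n in report_events if n >= 2]

# Normalize by active teams (any team that generated a report this week).
active_teams = {team for team, _ in report_events}
nsm = len(qualified) / len(active_teams)
print(f"Qualified reports per active team this week: {nsm:.2f}")
```

Note that team_b generates reports every week but never clears the quality gate; that is exactly the shallow usage the single-source filter is designed to expose.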

Case Study 2: B2B Marketplace (Transaction Game)

A marketplace client was tracking “total listings” as their NSM. Sellers were listing products, but transactions weren’t happening. The marketplace had a supply-demand imbalance they couldn’t see because their metric only measured one side.

New NSM: “Successful transactions per active buyer per month.”

The shift to buyer-side metrics revealed that 60% of buyers searched, found nothing relevant, and left. The listing count was vanity - what mattered was whether buyers could actually complete purchases.

This led to a complete strategy pivot: instead of acquiring more sellers (which would inflate the old metric), they focused on curating existing inventory and improving search matching. Transaction rate per buyer tripled in four months.

Case Study 3: Content Platform (Attention Game)

A content platform was measuring “daily active users” and saw healthy numbers. But ad revenue was flat because users were opening the app, scanning headlines, and leaving within 90 seconds.

New NSM: “Articles read to 75% completion per user per week.”

The quality gate (75% completion) filtered out casual scrollers. The weekly cadence smoothed out daily volatility. And the per-user normalization prevented them from conflating growth with engagement.

The result: they shifted content strategy from clickbait headlines (which drove DAU) to in-depth articles (which drove completion). DAU initially dipped 15%, but ad revenue per user increased 40% because engaged readers saw more ads and had higher click-through rates.

The NSM Stack: Metric, Inputs, and Health Checks

A North Star Metric doesn’t work in isolation. You need what we call the NSM Stack:

Layer 1: The North Star Metric One metric. One number. The thing that goes on the wall and gets reviewed every week.

Layer 2: Input Metrics (3-5 max) These are the levers you can pull to move the NSM. They should be:

  • Directly controllable by specific teams
  • Independently movable (changing one shouldn’t automatically change another)
  • Collectively exhaustive (if all inputs improve, the NSM must improve)

Example for a Productivity Game NSM of “workflows completed per team per week”:

  • Activation rate (% of new users who complete first workflow in 7 days)
  • Feature adoption breadth (average integrations connected per team)
  • Weekly retention (% of teams active this week who were active last week)
  • Workflow complexity (average steps per workflow - more steps = deeper usage)

Layer 3: Health Check Metrics (guardrails) These ensure you’re not gaming the NSM at the expense of business fundamentals:

  • Revenue per user (ensures NSM growth translates to business value)
  • Customer satisfaction score (ensures product quality isn’t sacrificed)
  • Support ticket volume per user (early warning for UX problems)

The rule: If Health Check Metrics degrade while the NSM improves, something is wrong. Pause and investigate before continuing to optimize.
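That rule is mechanical enough to automate. A minimal sketch, assuming weekly snapshots of the NSM and Health Check Metrics (all names and thresholds are illustrative):

```python
# Flag health metrics that moved the wrong way while the NSM rose.

def guardrail_alerts(prev, curr, higher_is_better):
    """Return the health metrics that degraded during an NSM improvement."""
    alerts = []
    if curr["nsm"] > prev["nsm"]:
        for name, good_up in higher_is_better.items():
            degraded = (curr[name] < prev[name]) if good_up else (curr[name] > prev[name])
            if degraded:
                alerts.append(name)
    return alerts

last_week = {"nsm": 4.2, "revenue_per_user": 31.0, "csat": 4.4, "tickets_per_user": 0.08}
this_week = {"nsm": 4.8, "revenue_per_user": 30.1, "csat": 4.5, "tickets_per_user": 0.12}

flags = guardrail_alerts(
    last_week, this_week,
    {"revenue_per_user": True, "csat": True, "tickets_per_user": False},
)
print("Investigate:", flags or "all clear")
```

Here the NSM rose, but revenue per user slipped and support tickets climbed, which is precisely the "pause and investigate" signal.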

Common Mistakes (And How to Avoid Them)

After helping dozens of companies design their NSMs, I’ve catalogued the most common failure modes:

Mistake 1: Picking a Metric Your Team Can’t Influence

If your engineering team can’t ship features that move the NSM, it’s too abstract. Every team should be able to draw a clear line from their work to the metric.

Fix: Run the “so what?” test. For each team, ask: “If this team does amazing work for the next quarter, how specifically does that move the NSM?” If they can’t answer clearly, the metric needs to be more concrete.

Mistake 2: Changing Your NSM Every Quarter

Strategy takes time. If you’re switching your North Star every 3 months, you’re not iterating - you’re thrashing. You lose the longitudinal data that makes the metric valuable.

Fix: Commit to your NSM for at least 6 months. You can adjust the definition (tighten quality gates, change cadence), but the core metric should be stable.

Mistake 3: Making Revenue Your North Star

Revenue is the outcome, not the driver. It’s like a doctor using “patient survival” as their diagnostic metric. Yes, it’s the ultimate goal, but it doesn’t tell you what treatment to prescribe.

Fix: Use revenue as a Health Check Metric. Your NSM should predict revenue, not be revenue.

Mistake 4: Ignoring Segments

A single NSM can mask segment-level problems. Your overall metric might be healthy while enterprise customers are churning and SMB customers are booming. Or vice versa.

Fix: Always slice your NSM by key segments - plan tier, company size, industry, acquisition channel. The aggregate number goes on the wall; the segmented views go in the operating review.
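Slicing the NSM by segment is a one-liner in most analytics stacks; a dependency-free sketch with made-up numbers:

```python
# A healthy aggregate NSM can hide a tanking segment.

from collections import defaultdict

# (segment, weekly workflows completed per team) - invented observations
observations = [
    ("smb", 9.0), ("smb", 11.0), ("smb", 10.0),
    ("enterprise", 3.0), ("enterprise", 2.0),
]

by_segment = defaultdict(list)
for segment, value in observations:
    by_segment[segment].append(value)

aggregate = sum(v for _, v in observations) / len(observations)
print(f"Aggregate NSM: {aggregate:.1f}")  # looks acceptable on the wall
for segment, values in by_segment.items():
    print(f"  {segment}: {sum(values) / len(values):.1f}")
```

The aggregate looks fine, while the enterprise segment sits at a fraction of SMB; that divergence is what belongs in the operating review.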

Mistake 5: Not Connecting NSM to Individual Goals

If your NSM lives on a company dashboard but doesn’t show up in anyone’s individual objectives, it’s a decoration, not a metric.

Fix: Every team lead should have at least one objective directly tied to an NSM Input Metric. Make it part of the operating rhythm: weekly standups start with “Here’s how our work moved the NSM this week.”

Building Your NSM Dashboard

Once you’ve designed your metric, you need to make it visible. Here’s the dashboard structure we recommend:

Section 1: The Big Number Current NSM value, trend (up/down/flat vs. last week), and distance to quarterly target. This should be visible from across the room.

Section 2: Input Metric Breakdown Each Input Metric with its current value, trend, and owner. Color-coded: green (on track), yellow (needs attention), red (intervention required).

Section 3: Health Check Strip A simple row of Health Check Metrics with traffic-light indicators. If everything’s green, ignore it. If something’s yellow or red, investigate.

Section 4: Segment View Your NSM broken down by 2-3 key segments. Look for divergence - if one segment is pulling the aggregate up while another tanks, you have a problem to address.

Tools we recommend: Amplitude or Mixpanel for product analytics, paired with Notion or Google Sheets for the operating layer. Don’t over-engineer it. A spreadsheet that gets updated weekly beats a fancy dashboard nobody checks.

When to Evolve Your NSM

Your North Star Metric should be stable, but not permanent. Here are the three triggers for evolution:

Trigger 1: Stage Change When you cross a major growth threshold ($1M ARR, $10M ARR, etc.), your constraints change. A pre-PMF company should obsess over activation and value delivery. A post-PMF company should shift toward expansion and efficiency.

Trigger 2: Strategy Shift If you pivot your business model, add a major new product line, or fundamentally change your go-to-market, your NSM needs to reflect that.

Trigger 3: Metric-Revenue Decoupling If your NSM has been improving for 2+ quarters but revenue hasn’t followed, the correlation has broken. Time to redesign.

When you evolve your NSM, don’t throw away the old one. Move it to a Health Check Metric and layer the new NSM on top. This preserves continuity while shifting focus.

The Implementation Playbook

Here’s how to go from this article to a working NSM in your company:

Week 1: Identify Your Game Get your leadership team in a room. Run the diagnostic questions. Debate until you have consensus on which game you’re primarily playing. Document the reasoning.

Week 2: Map Value Moments and Design the Metric Follow the 4-step process. Draft 3 candidate NSMs and evaluate each against the validation criteria for your game type.

Week 3: Data Validation Pull historical data. Can you actually measure the candidate NSMs? Do they correlate with retention and revenue? Kill any candidate that fails the correlation test.

Week 4: Launch and Communicate Pick the winner. Build the dashboard. Present it to the entire company. Explain why this metric, why now, and how each team connects to it.

Ongoing: Weekly Review Cadence Every Monday, review the NSM and Input Metrics. Every month, review Health Checks and segment breakdowns. Every quarter, ask: “Is this still the right metric?”

Final Thoughts

The North Star Metric isn’t a silver bullet. It won’t fix a broken product, a misaligned team, or a market that doesn’t exist.

What it will do is give your entire organization a shared definition of success. When the designer, the engineer, the marketer, and the CEO all know what “winning” looks like, the quality of decisions improves at every level.

The 3-Game Framework simply makes the design process less arbitrary. Instead of picking a metric because it “feels right” or because some blog post said so, you’re starting from first principles: What game are we playing? What does value mean in that game? How do we measure value delivery?

Get those answers right, and the metric practically designs itself.

If you’re struggling to identify your game or design your metric, reach out to us. This is exactly the kind of strategic work we do with early-stage and growth-stage companies at Momentum Nexus, and the clarity it creates tends to ripple through every other growth decision you make.
