Lead and Lag Indicators: Stop Measuring What Already Happened
Lead and lag indicators are two types of performance metric, distinguished by the point in time they capture. Lag indicators measure outcomes that have already occurred, such as revenue, profit, or customer churn. Lead indicators measure the activities and signals that predict those outcomes before they happen, such as pipeline volume, engagement rates, or proposal conversion ratios.
Most marketing teams track lag indicators almost exclusively. They look at last month’s numbers, declare success or failure, and move on. The problem is that by the time a lag metric surfaces, the opportunity to change the result has already passed.
Key Takeaways
- Lag indicators tell you what happened. Lead indicators give you time to do something about it. You need both, but most teams over-index on lag.
- A metric is only a lead indicator if it has a demonstrable, directional relationship with the outcome you care about. Correlation is not enough.
- The most common mistake is treating activity metrics as lead indicators. Sending 500 emails is activity. The reply rate from qualified prospects is a lead indicator.
- Lead indicators lose their predictive value if teams start optimising the metric rather than the underlying behaviour it was designed to represent.
- Effective go-to-market measurement requires a small set of high-confidence lead indicators, not a dashboard full of proxies that nobody trusts.
In This Article
- Why the Distinction Matters More Than Most Teams Realise
- What Counts as a Lag Indicator?
- What Counts as a Lead Indicator?
- How to Choose Lead Indicators That Actually Predict Outcomes
- The Goodhart’s Law Problem With Lead Indicators
- Lead and Lag Indicators in a Go-To-Market Context
- Common Mistakes Teams Make When Implementing Lead Indicators
- How to Build a Lead and Lag Indicator Framework for Your Team
Why the Distinction Matters More Than Most Teams Realise
When I was running an agency that had swung from significant loss to profit over about eighteen months, the thing that changed first was not the revenue line. It was a handful of upstream signals: the quality of briefs we were winning, the margin profile of new work coming in, and the conversion rate from chemistry meeting to retained client. Those were the numbers I watched every week. The revenue followed, but it followed those signals by six to eight weeks. If I had waited for the P&L to confirm what was working, I would have been making decisions on stale information every single time.
That is the practical value of lead indicators. They compress the feedback loop. In a business where monthly reporting is the norm, a well-chosen lead indicator can give you four to six decision points inside a single quarter instead of one.
If you are building out your go-to-market measurement framework, this sits at the centre of how the whole system should be structured. The broader thinking on go-to-market and growth strategy covers how measurement connects to execution across the full commercial model.
What Counts as a Lag Indicator?
Lag indicators are outcome metrics. They confirm what has already happened and are typically the numbers that appear in board reports, investor updates, and end-of-quarter reviews.
Common examples include:
- Revenue and gross profit
- Customer acquisition cost
- Market share
- Net Promoter Score
- Customer churn rate
- Return on ad spend
None of these are bad metrics. They are essential. The problem is that they are retrospective by design. When your churn rate rises in Q3, the underlying cause, whether it was a product issue, a service failure, or a competitor move, happened weeks or months earlier. The lag metric is the evidence, not the warning.
Lag indicators are also, generally speaking, harder to influence directly. You cannot instruct your team to increase revenue. You can instruct them to increase the number of qualified demos booked, or to improve the win rate on proposals above a certain contract value. Those are lead indicators. Revenue is what happens when you get enough of them right.
What Counts as a Lead Indicator?
A lead indicator is a metric that has a predictive relationship with a future outcome. The word predictive is doing a lot of work in that sentence, and it is worth being precise about what it means.
A lead indicator is not simply an early metric. It has to actually predict something. You need evidence, even informal evidence based on pattern recognition, that when this number moves in a particular direction, a specific outcome tends to follow. Without that relationship, you are just measuring an earlier point in the process, not a predictor of the result.
Common examples include:
- Number of qualified opportunities entering the pipeline
- Proposal-to-close conversion rate
- Time to second meeting from first contact
- Product usage frequency among new customers in the first 30 days
- Share of wallet among existing accounts
- Content engagement from target accounts
The test for any candidate lead indicator is simple: if this number improves, do we have good reason to believe the outcome we care about will also improve, and within a knowable timeframe? If the answer is yes, and you can point to why, it is a lead indicator. If the answer is “probably, maybe, it feels right,” you are guessing.
I have sat in enough planning sessions to know that most teams conflate activity metrics with lead indicators. Sending a thousand outreach emails is activity. The positive reply rate from decision-makers in your target segment is a lead indicator. Both are measurable. Only one predicts anything.
How to Choose Lead Indicators That Actually Predict Outcomes
Choosing the right lead indicators is harder than it looks, and most frameworks underestimate the difficulty. There are three questions worth working through for every candidate metric.
Does it have a directional relationship with the outcome? This means that when the lead indicator moves, the outcome tends to move in a consistent, predictable direction. The relationship can be positive or negative: churn risk indicators, for example, should move in the opposite direction to retention. If you cannot articulate the direction of the relationship, the metric is probably decorative.
Is there a lag between the indicator and the outcome? A true lead indicator gives you advance warning. If the gap between the indicator moving and the outcome arriving is only a few days, the practical value is limited. The most useful lead indicators give you weeks or months to respond.
Can you influence it? A lead indicator that your team cannot move is an interesting observation, not a management tool. The point of tracking lead indicators is to create actionable decision points. If the metric is entirely outside your control, it belongs in your environmental monitoring, not your performance dashboard.
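The first two tests, direction and lag, lend themselves to a quick empirical check: correlate the lead indicator against the outcome shifted forward in time, and see where the relationship peaks. Below is a minimal sketch in plain Python with toy weekly data; the series, the six-week offset baked into the example, and the metric names are all illustrative, not a prescription for any particular statistical method.

```python
# Rough check of whether a candidate lead indicator predicts an outcome.
# Toy weekly series; in practice these come from your own reporting.

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(lead, outcome, max_lag_weeks):
    """Correlate the lead series against the outcome k weeks later, for each k."""
    results = {}
    for k in range(1, max_lag_weeks + 1):
        xs = lead[:-k]       # lead indicator readings k weeks earlier
        ys = outcome[k:]     # outcome readings k weeks later
        if len(xs) >= 3:
            results[k] = pearson(xs, ys)
    return results

# Toy data: qualified discovery calls per week, and revenue won per week.
# Revenue from week 7 onwards is constructed to track calls six weeks earlier.
calls = [10, 12, 9, 14, 16, 13, 18, 20, 17, 22, 24, 21, 26, 28]
revenue = [95, 105, 100, 98, 102, 99,
           100, 120, 90, 140, 160, 130, 180, 200]

by_lag = lagged_correlation(calls, revenue, max_lag_weeks=8)
best_lag = max(by_lag, key=by_lag.get)
print(f"strongest relationship at lag {best_lag} weeks (r = {by_lag[best_lag]:.2f})")
```

In practice you would want more history and a healthy scepticism about spurious correlation, but even this rough check is better than asserting a predictive relationship that has never actually been looked at.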
BCG’s work on scaling agile operating models touches on a related principle: the teams that move fastest are the ones with the shortest feedback loops between action and signal. Lead indicators are how you shorten that loop without waiting for outcomes to confirm what you already suspected.
The Goodhart’s Law Problem With Lead Indicators
Goodhart’s Law is the well-known principle in economics and management theory that when a measure becomes a target, it ceases to be a good measure. It applies to lead indicators with particular force.
I have seen this happen inside agencies more than once. A team agrees that number of qualified discovery calls is a strong lead indicator for pipeline health. Within two quarters, the definition of “qualified” has quietly expanded to include calls that would previously have been screened out. The metric looks healthy. The pipeline is not.
The lead indicator has been gamed, not deliberately in most cases, but because the pressure to hit a number creates unconscious drift in how the number is defined and counted. The predictive relationship breaks down precisely because the metric has become a target.
The practical response to this is twofold. First, keep the set of lead indicators small. The more metrics you track, the more surface area there is for this kind of drift. Three or four high-confidence lead indicators per function is usually enough. Second, revisit the definitions regularly. Not to change them arbitrarily, but to confirm that what is being counted today is the same thing that was being counted when you established the relationship with the outcome.
Lead and Lag Indicators in a Go-To-Market Context
Go-to-market strategy is where the lead and lag distinction becomes most commercially important. A GTM motion involves multiple functions, multiple timeframes, and multiple points where the outcome can be influenced or lost. Without lead indicators at each stage, you are essentially flying blind until the revenue number arrives.
A basic GTM measurement stack might look something like this:
Top of funnel: Lag indicator is marketing-qualified leads generated. Lead indicators include target account engagement rate, content consumption depth among ICP-matched visitors, and inbound request volume from priority segments.
Mid funnel: Lag indicator is sales-qualified opportunities created. Lead indicators include discovery call conversion rate, average deal size of active opportunities, and time from first contact to proposal.
Close: Lag indicator is revenue won. Lead indicators include proposal win rate, competitive displacement rate, and average sales cycle length against target.
Retention and expansion: Lag indicator is net revenue retention. Lead indicators include product usage frequency in the first 90 days, support ticket volume per account, and engagement with expansion-oriented communications.
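The stack above can be written down as plain data, which makes it easy to keep the set small and to sanity-check that no stage has sprawled into a dashboard of proxies. This is a sketch only; the dictionary layout and the cap of four lead indicators per stage are assumptions drawn from the article's own guidance, not a standard schema.

```python
# The GTM measurement stack from the text, encoded as plain data.
# Stage and indicator names follow the article; the structure is illustrative.

GTM_STACK = {
    "top_of_funnel": {
        "lag": "marketing-qualified leads generated",
        "lead": [
            "target account engagement rate",
            "content consumption depth among ICP-matched visitors",
            "inbound request volume from priority segments",
        ],
    },
    "mid_funnel": {
        "lag": "sales-qualified opportunities created",
        "lead": [
            "discovery call conversion rate",
            "average deal size of active opportunities",
            "time from first contact to proposal",
        ],
    },
    "close": {
        "lag": "revenue won",
        "lead": [
            "proposal win rate",
            "competitive displacement rate",
            "average sales cycle length against target",
        ],
    },
    "retention_and_expansion": {
        "lag": "net revenue retention",
        "lead": [
            "product usage frequency in the first 90 days",
            "support ticket volume per account",
            "engagement with expansion-oriented communications",
        ],
    },
}

# Sanity check: one lag indicator per stage, and a small set of leads
# (the article suggests three or four high-confidence ones per function).
for stage, metrics in GTM_STACK.items():
    assert isinstance(metrics["lag"], str)
    assert 1 <= len(metrics["lead"]) <= 4, f"{stage} tracks too many lead metrics"
```

Encoding the stack this way also gives each stage a single place where definitions live, which helps with the definitional drift discussed under Goodhart’s Law.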
Vidyard’s research into pipeline and revenue potential for GTM teams points to a consistent gap between the pipeline signals teams track and the revenue outcomes they actually achieve. Part of that gap is measurement: teams are watching the wrong things, or watching the right things too late.
Forrester’s intelligent growth model makes a similar argument from the demand side: sustainable growth requires understanding the signals that precede demand, not just the demand itself. That is, in essence, the case for lead indicators applied to market strategy.
Common Mistakes Teams Make When Implementing Lead Indicators
The first and most common mistake is building a dashboard of activity metrics and calling them lead indicators. Emails sent, posts published, ads served: these are inputs, not predictors. They tell you what your team did. They do not tell you whether what your team did will produce the outcome you need.
The second mistake is choosing lead indicators based on what is easy to measure rather than what is actually predictive. This is understandable. Data that is clean, consistent, and already sitting in your CRM is attractive. But convenience is not the same as relevance. If the easiest metric to track has no demonstrable relationship with the outcome you care about, it is not a lead indicator. It is a comfort blanket.
The third mistake is tracking too many. I have reviewed marketing dashboards with forty or fifty metrics presented as equally important. Nobody is making decisions from forty metrics. What actually happens is that teams default to the two or three numbers they understand intuitively, and the rest become wallpaper. Better to have four metrics that everyone understands and acts on than forty that nobody trusts.
The fourth mistake is failing to validate the predictive relationship over time. A lead indicator that was accurate in one market environment may not remain accurate as conditions change. Competitive dynamics shift. Buyer behaviour changes. The relationship between an upstream signal and a downstream outcome can weaken without anyone noticing, particularly if the team is not regularly checking whether the prediction is holding.
Growth-focused teams often fall into these traps when they are scaling fast. The growth hacking examples that get written up tend to focus on the wins and skip over the measurement discipline that made those wins repeatable. The underlying discipline is usually a small set of well-chosen lead indicators that the team watches obsessively.
How to Build a Lead and Lag Indicator Framework for Your Team
Start with the outcome. Pick one lag indicator that matters most to your business right now. Revenue, retention, market share, whatever is the primary commercial objective for the next twelve months. Everything else flows from there.
Work backwards from that outcome. Ask: what has to be true three months before we hit that number? What has to be true six months before? What signals, if they are moving in the right direction today, would give us confidence that we are on track?
For each candidate signal, apply the three tests from earlier: directional relationship, meaningful lag, and actionability. Keep only the ones that pass all three. You will probably end up with three to five strong lead indicators per major outcome. That is the right number.
Assign ownership. Each lead indicator should have a named person responsible for it. Not a team, a person. The moment a metric is owned by a function rather than an individual, accountability diffuses and the metric stops being managed actively.
Set a review cadence. Lead indicators should be reviewed more frequently than lag indicators. If your lag indicator is reviewed monthly, your lead indicators should be reviewed weekly. The whole point is to create earlier decision points. A weekly review rhythm forces the team to engage with the signal while there is still time to act on it.
Finally, document the expected relationship. Write down, in plain language, what you expect to happen to the lag indicator if the lead indicator moves by a given amount over a given period. This is not a precise forecast. It is a hypothesis. The discipline of writing it down forces clarity, and it gives you something to test against as the data comes in.
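One way to apply that discipline is to record each hypothesis as structured data with an explicit tolerance, then check it when the outcome arrives. A minimal sketch follows; the field names, percentages, and tolerance are illustrative choices, not a recommended standard.

```python
from dataclasses import dataclass

@dataclass
class IndicatorHypothesis:
    """A documented expectation linking a lead indicator to a lag outcome."""
    lead_metric: str
    lag_metric: str
    lead_change_pct: float           # the move in the lead indicator
    expected_lag_change_pct: float   # the move we expect in the outcome
    horizon_weeks: int               # how long before the outcome should respond
    tolerance_pct: float = 2.0       # how far off the outcome can be and still "hold"

    def holds(self, actual_lag_change_pct: float) -> bool:
        """Did the outcome move roughly as predicted?"""
        return abs(actual_lag_change_pct - self.expected_lag_change_pct) <= self.tolerance_pct

# Example hypothesis, written in plain language first: "if qualified discovery
# calls rise 10%, we expect revenue won to rise about 5% within eight weeks."
h = IndicatorHypothesis(
    lead_metric="qualified discovery calls",
    lag_metric="revenue won",
    lead_change_pct=10.0,
    expected_lag_change_pct=5.0,
    horizon_weeks=8,
)

print(h.holds(4.2))   # True: outcome landed within tolerance of the prediction
print(h.holds(-1.0))  # False: the relationship is not holding; revisit the indicator
```

The value is not in the code itself but in the forced precision: a hypothesis with a named metric, a number, a horizon, and a tolerance is testable in a way that "calls drive revenue" is not.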
BCG’s analysis of go-to-market pricing strategy is a useful reference point here, not because it is specifically about lead indicators, but because it illustrates how upstream commercial decisions (pricing structure, market segmentation, channel mix) create downstream outcomes that are very hard to reverse once they have materialised. The logic applies directly: the time to influence the outcome is before the lag indicator confirms it.
When I grew an agency from around twenty people to over a hundred, the metrics that mattered most were never the ones on the monthly P&L. They were the ones that told us, six to eight weeks in advance, whether the decisions we were making were working. Pitch win rate by service line. Repeat brief rate from existing clients. Average revenue per head against target. None of those appeared in the board pack. All of them predicted what would appear in the board pack.
The discipline of identifying and tracking those upstream signals is not complicated. But it requires a willingness to resist the pull of the obvious metrics, the ones that are easy to report and easy to understand, in favour of the ones that actually tell you something useful while you can still do something about it.
If this kind of measurement thinking is part of a broader GTM review, the full go-to-market and growth strategy hub covers the commercial architecture that makes individual metrics meaningful, from market entry decisions through to how you structure teams around growth objectives.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
