Engagements vs Impressions: Which Metric Moves Revenue
Engagements and impressions measure different things, and confusing the two is one of the most common ways marketing teams end up optimising for activity rather than outcomes. Impressions count how many times your content was served. Engagements count how many times someone chose to interact with it. Neither number tells you whether you sold anything, but one is considerably more useful as a signal of attention than the other.
The practical question is not which metric is better in the abstract. It is which one you should be reporting on, and why, given what you are actually trying to achieve.
Key Takeaways
- Impressions measure reach; engagements measure response. Both have a role, but they answer different strategic questions.
- High impression volume with low engagement is often a sign of audience mismatch, not a creative problem.
- Engagement rate matters more than raw engagement counts, especially when comparing campaigns of different scale.
- Neither metric is a proxy for revenue. Treating them as one is where reporting starts to mislead decision-making.
- The most honest use of both metrics is directional, not definitive. They point you toward questions, not answers.
In This Article
- What Impressions Actually Tell You
- What Engagements Actually Tell You
- Engagement Rate: The Number That Does More Work
- Where Both Metrics Fit in a Go-To-Market Strategy
- The Vanity Metric Problem Is a Reporting Problem
- How to Use Both Metrics Without Misleading Yourself
- The Platform Incentive Problem
- Choosing the Right Metric for the Right Question
I have sat in enough marketing reviews to know how this usually plays out. Someone presents a slide showing 4.2 million impressions and calls it a successful campaign. No one asks what those impressions cost, whether the audience was right, or what happened downstream. The number is large, so the room nods. This is how vanity metrics survive in otherwise commercially serious businesses.
What Impressions Actually Tell You
An impression is recorded when your content is delivered to a screen. On most platforms, that means it appeared in a feed or ad placement. It does not mean anyone looked at it, processed it, or remembered it. The impression is a distribution event, not a communication event.
That distinction matters more than most reporting acknowledges. Impressions are useful for understanding reach, frequency, and the scale of your distribution. If you are running a brand awareness campaign and trying to understand how many unique people were exposed to your message, impression data gives you a starting point. It is an input metric, not an outcome metric.
Where impressions become misleading is when they are used as evidence of impact. Serving an ad to someone who scrolled past it in 0.3 seconds is not the same as communicating with them. Platforms count the impression either way. This is not exactly a flaw in the data; it is a flaw in how the data gets interpreted in the boardroom.
There is also the question of what counts as an impression across different platforms. The definitions are not consistent. What Facebook counts, what LinkedIn counts, and what a programmatic DSP counts are all slightly different things. When you are comparing impression performance across channels, you are often comparing apples to something that is only vaguely apple-shaped.
What Engagements Actually Tell You
Engagements are a broader category than most people treat them as. Depending on the platform and the context, engagements can include likes, comments, shares, saves, clicks, video views above a threshold, link taps, and profile visits. They are not a single thing. They are a collection of signals that all indicate some form of chosen interaction.
That word, chosen, is what makes engagements more valuable as a signal than impressions. When someone engages with your content, they made a decision. They did not just scroll past. That decision tells you something about relevance, resonance, and whether your message landed with the right people in the right way.
But engagements are not created equal either. A share is a fundamentally different signal from a like. A comment that says “this is exactly what I needed” is a different signal from a comment that says “stop showing me this.” A click through to your product page is a different signal from a click to expand an image. Aggregating all of these into a single engagement count and reporting it as one number loses most of the useful information.
When I was managing large social campaigns across multiple retail clients, we started breaking engagements into tiers: passive interactions like likes and reactions, active interactions like shares and saves, and intent signals like link clicks and profile visits. The passive numbers were always the biggest. The intent signals were always the smallest. The passive numbers were also almost entirely uncorrelated with sales. The intent signals were not. Reporting one aggregate engagement figure had been obscuring that difference for months.
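The tiering described above amounts to a simple lookup before aggregation. A minimal sketch, with the caveat that the event names and tier assignments here are illustrative rather than any platform's actual taxonomy (where a comment belongs, for instance, is a judgment call):

```python
# Hypothetical engagement tiering: map raw event types to signal tiers
# before reporting, instead of summing everything into one number.
ENGAGEMENT_TIERS = {
    "like": "passive", "reaction": "passive",
    "share": "active", "save": "active",
    "comment": "active",  # assumption: treated as active here
    "link_click": "intent", "profile_visit": "intent",
}

def tier_counts(events):
    """Aggregate a list of engagement event names into per-tier totals."""
    counts = {"passive": 0, "active": 0, "intent": 0}
    for event in events:
        tier = ENGAGEMENT_TIERS.get(event)
        if tier:
            counts[tier] += 1
    return counts

events = ["like", "like", "share", "link_click", "like", "save"]
print(tier_counts(events))  # → {'passive': 3, 'active': 2, 'intent': 1}
```

Three numbers instead of one, but each tier can now be correlated with downstream behaviour separately, which is where the difference between decoration and intent shows up.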
Engagement Rate: The Number That Does More Work
If you are going to use engagement as a meaningful metric, the rate matters more than the count. Engagement rate is typically calculated as total engagements divided by impressions or reach, expressed as a percentage. It normalises for scale, which makes it useful for comparing campaigns, content types, and audiences in a way that raw engagement counts cannot.
A post that gets 500 engagements from 5,000 impressions is performing very differently from a post that gets 500 engagements from 500,000 impressions. The count is identical. The rate tells you the second post is doing something wrong, or reaching the wrong audience, or both.
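The calculation itself is trivial, which is part of why it is worth insisting on. Using the two posts above:

```python
def engagement_rate(engagements, impressions):
    """Engagement rate: total engagements / impressions, as a percentage."""
    if impressions == 0:
        return 0.0
    return 100.0 * engagements / impressions

# Identical counts, very different performance:
print(engagement_rate(500, 5_000))    # → 10.0
print(engagement_rate(500, 500_000))  # → 0.1
```

A 10% rate versus a 0.1% rate is a hundredfold difference that the raw count of 500 hides completely.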
Benchmarks for engagement rate vary significantly by platform, content type, and audience size. Larger accounts tend to have lower engagement rates as a structural feature of how social algorithms work, not as a sign of declining quality. Comparing your engagement rate to a competitor with a very different follower count is not a useful exercise. The more productive comparison is your own performance over time, and across content formats within the same channel.
This is where a lot of social reporting goes wrong. Teams spend time benchmarking against industry averages that are either outdated, poorly defined, or drawn from a sample that has nothing to do with their audience. The most useful benchmark is your own historical data, because at least that controls for your specific audience, platform mix, and content strategy.
Where Both Metrics Fit in a Go-To-Market Strategy
Neither impressions nor engagements exist in isolation from the commercial objective you are trying to achieve. The role they play depends entirely on where you are in the funnel and what you are asking the channel to do.
At the top of the funnel, impressions are a reasonable proxy for reach. If you are launching into a new market or introducing a new product, you need people to have seen you before they can consider you. Impression volume, qualified by audience targeting, tells you whether your distribution is working. This is the part of the funnel where reach matters more than response, because you are building the base of awareness that everything else depends on.
Earlier in my career I was heavily biased toward lower-funnel performance metrics. Conversion rate, cost per acquisition, return on ad spend. Those numbers felt real because they were close to revenue. What I came to understand over time is that much of what lower-funnel performance gets credit for was going to happen anyway. People who are already searching for your product, already comparing options, already close to a decision, those people are not being created by your retargeting ads. They are being captured by them. The demand existed before the ad ran. Growth requires reaching people who are not yet in market, and that is a job that impressions and engagement, used honestly, are actually suited for.
Mid-funnel, engagements become more relevant because you are trying to understand whether your content is building consideration. Are people saving it, sharing it, clicking through? Those signals suggest your message is resonating with an audience that is moving toward a decision. Engagement rate here is a useful indicator of message-market fit.
At the bottom of the funnel, neither impressions nor engagements should be your primary metric. Click-through rate, conversion rate, and revenue attribution are doing more of the work there. If you are reporting impressions and engagements as success metrics on a direct response campaign, something has gone wrong in how the brief was written.
If you want a clearer framework for how these metrics connect to broader commercial strategy, the articles on Go-To-Market and Growth Strategy at The Marketing Juice work through how measurement choices flow from strategic intent, not the other way around.
The Vanity Metric Problem Is a Reporting Problem
Impressions and engagements are not inherently vanity metrics. They become vanity metrics when they are reported without context, without connection to a business objective, and without any honest accounting of what they do and do not tell you.
I have judged at the Effie Awards, which is one of the few places in the industry where effectiveness is taken seriously as a criterion. The work that wins is not the work with the biggest impression numbers. It is the work where the team can demonstrate a clear line between what they did and what changed in the market. That line is rarely drawn through impressions alone. It runs through a combination of reach, resonance, and response, all measured honestly against a commercial objective that was defined before the campaign ran, not after.
The reason vanity metrics persist is not that marketers are naive. It is that reporting cycles create pressure to show something positive, and impression volume is almost always positive. It is very hard to run a campaign and get zero impressions. It is considerably harder to run a campaign and demonstrate that it changed purchasing behaviour. So teams default to the number that is easiest to defend, and over time the easy number becomes the standard.
This is something Vidyard has written about in the context of go-to-market difficulty, noting that the problem is often not the strategy itself but the measurement frameworks teams use to evaluate whether it is working. When the metrics do not connect to outcomes, it becomes very hard to course-correct.
How to Use Both Metrics Without Misleading Yourself
The practical answer is to use impressions and engagements as diagnostic tools rather than success metrics. They are useful for understanding what is happening in the distribution and attention layer of your marketing. They are not useful as evidence that your marketing is working commercially.
A few principles that have served me well across different clients and categories:
First, always report engagement rate alongside raw engagement counts. The count tells you scale. The rate tells you quality. You need both to understand performance.
Second, segment your engagements by type. Passive interactions and active interactions are different signals. Treat them differently in your reporting and your decision-making.
Third, define what a good impression looks like before the campaign runs. Impressions delivered to the wrong audience are not a success, even if the volume is high. Audience quality matters as much as reach volume, and your targeting criteria should be part of how you evaluate impression performance.
Fourth, connect both metrics to something downstream. Impressions should correlate with brand search volume or aided awareness over time. Engagements should correlate with traffic, leads, or pipeline movement. If neither metric connects to anything downstream, that is worth investigating rather than ignoring.
Fifth, be honest about what you cannot measure. Attribution is imperfect. The relationship between social engagement and purchase intent is real but indirect. Pretending you have more precision than you do is worse than acknowledging the uncertainty, because false precision leads to bad decisions. Semrush’s analysis of growth examples consistently shows that the teams who scale effectively are the ones who are clear-eyed about which metrics are leading indicators and which are lagging ones.
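The first two principles, reporting rate alongside count and segmenting by type, can live in the same report row. A sketch under assumed field names (this is an illustrative schema, not a real reporting tool's API):

```python
# Hypothetical report row pairing segmented counts with an engagement rate,
# so scale and quality are always visible together.
from dataclasses import dataclass

@dataclass
class PostReport:
    post_id: str
    impressions: int
    passive: int   # likes, reactions
    active: int    # shares, saves
    intent: int    # link clicks, profile visits

    @property
    def engagements(self):
        return self.passive + self.active + self.intent

    @property
    def engagement_rate(self):
        if self.impressions == 0:
            return 0.0
        return 100.0 * self.engagements / self.impressions

posts = [
    PostReport("a", 5_000, 380, 90, 30),
    PostReport("b", 500_000, 420, 60, 20),
]
for p in posts:
    print(p.post_id, p.engagements, f"{p.engagement_rate:.2f}%")
# → a 500 10.00%
# → b 500 0.10%
```

Both posts report 500 engagements; only the paired rate and the intent column reveal that they are not remotely comparable.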
The Platform Incentive Problem
It is worth being direct about something that does not get said enough in these conversations. Platforms have a financial incentive to make their metrics look as good as possible. Impressions are counted in ways that maximise the number. Engagement definitions are sometimes broadened to include interactions that do not meaningfully indicate interest. Reach figures can include people who were technically served your content in circumstances where they were extremely unlikely to process it.
This does not mean the data is useless. It means you should treat it as a perspective on reality rather than reality itself. When I was running agency P&Ls and responsible for how we reported to clients, one of the disciplines I tried to build into the team was a habit of asking what the platform has an incentive to show us, before deciding how much weight to give any particular metric. That question changes how you read the data.
Platforms that sell advertising want you to believe your advertising is working. That is not cynicism; it is just how the commercial relationship works. Your job is to apply enough critical thinking to distinguish the signal from the noise in what they are showing you. Forrester’s research on go-to-market measurement challenges makes a similar point about the gap between vendor-reported metrics and independently verified outcomes across different sectors.
Choosing the Right Metric for the Right Question
The framing of engagements versus impressions as a competition misses the point. They are not competing metrics. They are metrics that answer different questions. The mistake is not using one over the other. The mistake is using either one to answer a question it was not designed to answer.
Use impressions to answer: How many people in my target audience were exposed to this message? Is my distribution working at the scale I need? Am I reaching new audiences or repeatedly reaching the same ones?
Use engagements to answer: Is this content resonating with the people who see it? Which formats and messages are generating active interest? Is my engagement rate improving or declining over time, and what does that suggest about content quality or audience fit?
Use neither to answer: Is this campaign driving revenue? Is our marketing working commercially? Should we increase or decrease budget?
Those last questions require different data entirely. They require pipeline metrics, conversion data, revenue attribution, and an honest assessment of how your marketing activity connects to commercial outcomes. Impressions and engagements can sit alongside that data as supporting context. They should not be doing the heavy lifting on their own.
The BCG work on go-to-market strategy in financial services illustrates this well: the teams that outperform are consistently the ones who have built measurement frameworks that connect channel activity to commercial outcomes, rather than reporting channel activity as if it were the outcome itself.
There is also a useful parallel in how Hotjar approaches growth loop measurement, treating behavioural signals as directional indicators that inform decisions rather than definitive scores that determine them. The same logic applies here: impressions and engagements are inputs to a judgment, not the judgment itself.
The broader point is that metric selection is a strategic decision, not a reporting one. The metrics you choose to track and present signal what you believe marketing is for. If you consistently lead with impressions, you are implicitly arguing that reach is the primary job of marketing. If you lead with engagements, you are arguing for resonance. Both are partial. The teams that get this right are the ones who have been deliberate about what they are trying to achieve before they decide how to measure it. That is the conversation worth having, and it starts long before the campaign brief is written.
More thinking on how measurement frameworks connect to growth strategy is available in the Go-To-Market and Growth Strategy hub, which covers how commercial objectives should shape the way you build and evaluate marketing programmes.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
