Attention Metrics: Why Viewability Was Never Enough
Attention metrics measure whether an ad was actually seen and processed by a human, not just whether it was technically served. That distinction matters more than most media plans acknowledge. Viewability told us an ad had the opportunity to be seen. Attention tells us whether a human actually engaged with it.
The gap between those two things is where a lot of media budget quietly disappears.
Key Takeaways
- Viewability confirms an ad was rendered on screen. It says nothing about whether anyone looked at it, for how long, or with any cognitive engagement.
- Attention metrics, including active dwell time, eye-tracking proxies, and interaction signals, sit closer to actual human behaviour than any prior media metric.
- Click-through rate remains the most misleading metric in digital media. High CTR on low-attention placements often signals misclicks, not interest.
- Reach and frequency are planning tools, not proof of impact. Frequency without attention is just repetition of waste.
- No single metric captures media effectiveness. Attention is a better leading indicator than viewability, but it still needs to be read alongside brand and sales outcomes.
In This Article
- Why the Media Industry Kept Measuring the Wrong Things
- What Attention Metrics Actually Measure
- How Attention Compares to the Metrics Most Teams Are Using
- Where Attention Metrics Are Genuinely Useful
- The Honest Limitations of Attention Measurement
- How to Use Attention Thinking Without Buying an Attention Platform
- What a Better Media Metric Stack Looks Like
Why the Media Industry Kept Measuring the Wrong Things
I spent several years managing significant programmatic budgets across retail, financial services, and FMCG. One thing that struck me early was how much energy went into optimising metrics that had almost no relationship to business outcomes. We would celebrate a viewability rate of 72% as if it proved the campaign worked. It proved nothing of the sort. It proved that pixels appeared on a screen, in a browser tab that may or may not have been in front of a human being who may or may not have been paying attention.
The industry defaulted to viewability because it was measurable, standardised, and easy to report. That is a pattern I have seen repeat itself across every era of digital marketing. We measure what is easy to measure, then gradually convince ourselves that what we are measuring is what matters.
Viewability became the floor, not the ceiling, of quality. The Media Rating Council definition, that 50% of pixels must be in-view for one second for display and two seconds for video, was always a compromise. It was the minimum threshold the industry could agree on, not a meaningful signal of advertising effectiveness. Yet for years, many media plans were optimised toward it as though hitting that threshold was the goal.
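To make the compromise concrete, the MRC rule described above can be written as a few lines of code. This is a minimal sketch of the threshold logic only; real measurement vendors also handle continuous in-view time, tab focus, and measurability, none of which is captured here.

```python
def is_mrc_viewable(pixels_in_view_pct: float,
                    seconds_in_view: float,
                    is_video: bool = False) -> bool:
    """Return True if an impression clears the MRC viewability floor:
    at least 50% of pixels in-view, for at least one second (display)
    or two seconds (video)."""
    min_seconds = 2.0 if is_video else 1.0
    return pixels_in_view_pct >= 0.5 and seconds_in_view >= min_seconds
```

Seen this way, the problem is obvious: `is_mrc_viewable(0.5, 1.0)` returns True for a display ad that occupied half its pixels for a single second, whether or not a human was looking.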
For a broader look at how analytics frameworks shape the questions we ask, the Marketing Analytics hub covers measurement thinking across channels and tools.
What Attention Metrics Actually Measure
Attention measurement is not one thing. It is a category of signals, each with different methodologies and different levels of reliability. Understanding what sits inside that category matters before you decide how much weight to give it.
Eye-tracking studies use panels of real users to measure where on a screen someone actually looks, for how long, and with what fixation depth. These are expensive and panel-based, which means they are not real-time and not at scale. But they produce genuine behavioural data, not proxies.
Active dwell time measures how long an ad is in-view in an active browser environment, with the user engaged on the page rather than tabbed away. This is a proxy for attention, not a direct measure of it, but it is a meaningfully better proxy than standard viewability.
Interaction signals, including scroll velocity, cursor proximity, and touch events on mobile, are used by some platforms and vendors to infer attention. These are the most indirect signals, and they carry the most noise. A cursor hovering near an ad because the user is reading adjacent content is not the same as a user attending to the ad.
Some vendors are beginning to combine multiple signals into composite attention scores. The quality of those scores depends entirely on the methodology behind them, and that methodology is not always transparent. If a vendor cannot tell you clearly what their attention score is made of, treat it with appropriate scepticism.
How Attention Compares to the Metrics Most Teams Are Using
To understand where attention sits, it helps to map the other metrics most media teams rely on and be honest about what each one is actually telling you.
Impressions count how many times an ad was served. They say nothing about whether it was seen, by whom, or in what context. Impressions are a volume metric. They are useful for understanding scale. They are not useful for understanding impact.
Viewability is an improvement on impressions because it filters out ads that were never rendered in-view. But as noted above, the threshold is low and the methodology does not account for human presence or cognitive engagement. A viewable impression in a cluttered feed environment is a very different thing from a viewable impression on a high-quality editorial page, and most viewability reporting treats them identically.
Click-through rate is the metric I find most consistently misleading. I have seen CTR used as a proxy for engagement, interest, intent, and even creative quality. It is rarely a reliable indicator of any of those things. On display, CTR is typically low because most people do not click display ads, not because the ads are failing. On social, CTR can spike for reasons that have nothing to do with purchase intent. Accidental clicks on mobile inflate CTR without adding any value. Buffer’s breakdown of content marketing metrics makes a similar point about treating engagement metrics as outcomes rather than signals.
Reach and frequency are planning constructs. They tell you how many people were served an ad and how often. They do not tell you whether those people were paying attention, whether the message landed, or whether the frequency was helpful or annoying. Frequency capping exists precisely because there is a point at which repetition stops working. But without attention data, you are capping based on exposure counts, not on whether the message has actually been received.
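The gap between exposure-based and attention-based capping can be sketched in a few lines. The ten-second attention budget below is an illustrative assumption, not an industry standard; the point is the structural difference between counting serves and counting received attention.

```python
def should_serve(exposures: int,
                 cum_attention_s: float,
                 freq_cap: int = 6,
                 attention_budget_s: float = 10.0) -> bool:
    """Stop serving to a user once EITHER the exposure cap or the
    cumulative-attention budget is exhausted. A pure exposure cap
    only checks the first condition."""
    return exposures < freq_cap and cum_attention_s < attention_budget_s
```

Under this logic, a user who has genuinely attended to the message for ten seconds stops being served after three exposures, while a user whose six exposures earned almost no attention has already hit the cap without the message ever landing, which is the waste the paragraph above describes.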
Completion rate for video is a better signal than most display metrics because it requires sustained exposure. But it is still not attention. A video that auto-plays in a muted feed environment and runs to completion because the user scrolled past it slowly is not the same as a video that a user chose to watch with sound on. Platform-reported completion rates often do not distinguish between these scenarios.
When I was at iProspect growing the team from around 20 people to over 100 and managing accounts across multiple verticals, one of the things we worked hard on was helping clients understand what their media metrics were and were not telling them. The instinct from clients was always to want a single number that confirmed the campaign was working. The honest answer was usually that no single number could do that, and that the metrics they were most comfortable with were often the least informative.
Where Attention Metrics Are Genuinely Useful
Attention data is most valuable when you use it to make comparisons rather than to draw absolute conclusions. Knowing that placement A generates 3.2 seconds of average active dwell time versus 0.8 seconds for placement B is actionable. You can shift budget, renegotiate placements, or use that data to inform creative decisions about format and length.
Attention metrics are also useful for evaluating context quality. Premium publishers have long argued that their inventory is worth more than programmatic alternatives. Attention data gives you a way to test that claim empirically rather than taking it on trust. If a premium placement genuinely generates more attention per impression, you can quantify that premium and decide whether it justifies the CPM difference.
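Quantifying that premium is simple arithmetic: divide what a thousand impressions cost by the attention seconds they earn. The CPM figures below are made-up assumptions; the dwell times are the placement A and placement B figures from the comparison above.

```python
def cost_per_1k_attention_seconds(cpm: float, avg_dwell_s: float) -> float:
    """Cost per 1,000 seconds of attention: a CPM buys 1,000
    impressions, each earning avg_dwell_s seconds of attention."""
    attention_seconds_per_1k_impressions = 1000 * avg_dwell_s
    return round(cpm / attention_seconds_per_1k_impressions * 1000, 2)

# Hypothetical CPMs; dwell times from the placement comparison above.
premium = cost_per_1k_attention_seconds(cpm=12.0, avg_dwell_s=3.2)      # 3.75
programmatic = cost_per_1k_attention_seconds(cpm=4.0, avg_dwell_s=0.8)  # 5.00
```

On these assumed numbers, the premium placement costs three times as much per impression but is cheaper per second of attention, which is precisely the kind of claim attention data lets you test rather than take on trust.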
For brand campaigns in particular, where the goal is memory encoding rather than immediate response, attention is a more relevant metric than CTR or conversion rate. You are not trying to get someone to click. You are trying to get the brand into their consideration set. Attention is at least directionally related to that goal in a way that viewability alone is not.
There is also a creative dimension here that often gets overlooked. Attention data can tell you whether your creative is holding interest or losing it. If you know that average dwell time drops sharply after the first two seconds of a video, that is a signal about your creative, not just your placement. That kind of feedback loop has real value for production decisions.
The Honest Limitations of Attention Measurement
Attention metrics are better than viewability. They are not a solved problem.
The methodology varies significantly between vendors. There is no agreed standard for what constitutes an attention metric, in the way there is (however imperfect) for viewability. That means comparing attention scores across different platforms or vendors is often not meaningful. You may be comparing apples to something that is not even fruit.
Attention also does not guarantee memory, and memory does not guarantee purchase. The causal chain from attention to business outcome runs through a lot of other variables: message relevance, creative quality, timing, competitive context, and whether the person who saw the ad was ever in the market for what you are selling. Attention is a necessary condition for advertising to work, but it is not sufficient.
I have judged at the Effie Awards, where effectiveness is the only currency that matters. The campaigns that win are not the ones with the highest viewability or the most attention seconds. They are the ones that moved people and moved numbers. Attention is an input to that process, not a measure of its output.
There is also a cost consideration. Proper attention measurement, particularly eye-tracking panel studies, is not cheap. For smaller budgets, the investment in measurement may not be proportionate to the budget being measured. In those cases, using attention as a lens for media planning decisions, based on published research about formats and environments rather than bespoke measurement, is a more practical approach.
Tools like Google Analytics and the GA4 framework covered by Moz can give you on-site behavioural signals, including time on page and scroll depth, that serve as rough proxies for attention in a content context. They are not the same as media attention measurement, but for owned channels they are a reasonable starting point for understanding engagement quality.
How to Use Attention Thinking Without Buying an Attention Platform
Not every team has the budget or the data infrastructure to run formal attention measurement. That does not mean the thinking is inaccessible.
Format choices are a form of attention optimisation. Larger formats generate more attention than smaller ones. Fewer ads on a page generate more attention per ad. Sound-on video generates more attention than muted autoplay. Native placements in relevant editorial contexts generate more attention than run-of-network display. These are not controversial claims. They are consistent findings across the attention research that has been published over the past decade.
Context quality matters. Placing ads adjacent to content that is relevant to your category increases the probability that the person consuming that content is in a receptive state. This is not a new idea. It is the logic behind contextual targeting, which has had something of a renaissance as cookie-based audience targeting has become more restricted.
Reducing clutter in your own media mix is another lever. If you are running six different ad formats across a campaign, the one that consistently generates the lowest engagement signals is probably generating the least attention. Consolidating budget into fewer, higher-quality placements often produces better outcomes than spreading thin across every available format.
Early in my career, I built a website from scratch because the budget did not exist to hire someone to do it. That experience gave me a useful habit of working with the tools and data available rather than waiting for perfect conditions. Attention thinking is the same. You do not need a bespoke measurement platform to apply the principles. You need to ask better questions of the data you already have, and to be honest about what your existing metrics are and are not telling you.
For a broader view of how to build measurement frameworks that connect media metrics to business outcomes, the Marketing Analytics section of The Marketing Juice covers the underlying thinking across channels and tools. The question of which metrics to trust is one that runs through most of the content there.
What a Better Media Metric Stack Looks Like
The goal is not to replace all your existing metrics with attention metrics. It is to build a stack where each metric is doing a different job and you are clear about which job that is.
Impressions and reach tell you about scale. Use them to understand whether you are reaching enough people at a category level, not to evaluate campaign quality.
Viewability is a quality floor. Use it to filter out genuinely low-quality inventory, but do not optimise toward it as a goal. Hitting 70% viewability on cheap inventory is not automatically better than hitting 55% on premium inventory, because the viewable premium impressions land in contexts where real attention is far more likely.
Attention metrics, where you have access to them, sit above viewability in the quality stack. They tell you something closer to whether the ad had a real chance of working. Use them to compare placements, evaluate formats, and inform creative decisions.
Brand metrics, including awareness, consideration, and preference, tell you whether the campaign moved people. These require survey-based measurement and take longer to show movement, but they are the bridge between media exposure and commercial outcome for brand campaigns.
Sales and revenue outcomes are the end point. Everything else is a leading indicator. The trap is treating leading indicators as if they are outcomes, which is how you end up with campaigns that score well on every media metric and still fail to move the business. Semrush’s breakdown of content marketing metrics touches on this distinction between activity metrics and outcome metrics, which applies equally to paid media.
The Unbounce content metrics guide also makes a useful distinction between metrics that tell you what happened and metrics that tell you why, which is worth keeping in mind when you are trying to diagnose underperformance in any channel.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
