Attention Metrics: What They Tell Media Planners
Attention metrics measure how much cognitive engagement an ad receives, not just whether it appeared on screen. Where viewability tells you an ad had the opportunity to be seen, attention data tells you whether a real person processed it. That distinction matters more than most media plans currently reflect.
The shift from exposure-based to attention-based planning is not a trend. It is a correction. Media planning has been optimising for the wrong signals for years, and the industry is slowly catching up to what experienced practitioners already suspected.
Key Takeaways
- Viewability measures opportunity, not engagement. Attention metrics are a closer proxy for whether an ad actually landed.
- High-viewability placements routinely produce low attention. The two metrics do not correlate as reliably as media plans assume.
- Attention data is most useful as a planning input, not a campaign KPI. It informs channel and format decisions before spend is committed.
- Attention varies by format, context, and creative quality. Generic benchmarks without creative context are of limited use.
- Attention metrics do not replace brand tracking or commercial outcome measurement. They add a layer of diagnostic clarity between exposure and result.
In This Article
- Why Viewability Was Never Enough
- What Attention Metrics Actually Measure
- How Attention Data Changes Channel Allocation Decisions
- The Relationship Between Attention and Creative Quality
- Attention Metrics as a Planning Input, Not a Campaign KPI
- Where Attention Metrics Have Real Operational Value
- The Honest Limitations
- Bringing Attention Data Into Your Planning Process
Why Viewability Was Never Enough
Viewability became the industry’s proxy for effectiveness because it was measurable, reportable, and easy to put in a deck. The MRC standard for display, at least 50% of an ad’s pixels in view for a minimum of one continuous second, set a bar so low it was almost meaningless. You could hit 100% viewability on a placement nobody looked at.
I spent years reviewing media plans where viewability was cited as evidence of quality. When you pushed on what actually happened after the impression, the answer was usually a shrug and a click-through rate that told you almost nothing about whether the message landed. The metric was there to satisfy a reporting requirement, not to answer a commercial question.
The problem is structural. Digital media was sold on measurability, and measurability required metrics. When the available metrics were exposure-based, exposure-based metrics became the standard. The industry built its buying, selling, and reporting infrastructure around them. Changing that infrastructure is slow and commercially inconvenient for everyone involved in selling impressions.
Attention metrics introduce friction into that system, which is partly why adoption has been uneven. They require more sophisticated measurement, they surface uncomfortable truths about placements that look good on paper, and they shift negotiating power slightly toward buyers who understand what they are looking at.
What Attention Metrics Actually Measure
Attention measurement approaches vary, but the core inputs typically include time-in-view, interaction signals, scroll behaviour, eye-tracking data from panel studies, and increasingly, computer vision analysis of whether a user’s gaze was directed at the screen. Some vendors combine these into a composite attention score. Others report them separately.
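For readers who like to see the mechanics, here is a minimal sketch of how a composite attention score might be assembled from inputs like those above. The signal names, normalisation caps, and weights are entirely illustrative assumptions; no vendor’s actual methodology is reproduced here.

```python
# Hypothetical composite attention score: a weighted blend of the kinds
# of signals described above. All caps and weights are illustrative.

def attention_score(time_in_view_s, scroll_velocity, interactions, gaze_share):
    """Combine raw signals into a 0-100 composite score.

    time_in_view_s  : seconds the ad was in view
    scroll_velocity : pixels/second while on screen (lower = slower = better)
    interactions    : count of deliberate interactions (hover, tap, expand)
    gaze_share      : 0-1 share of in-view time with gaze on screen
    """
    # Normalise each input to a 0-1 range with illustrative caps
    dwell = min(time_in_view_s / 10.0, 1.0)            # cap at 10 seconds
    slowness = 1.0 - min(scroll_velocity / 3000.0, 1.0)
    interact = min(interactions / 3.0, 1.0)            # cap at 3 interactions
    # Illustrative weights; real vendors weight (and disclose) differently
    score = (0.40 * dwell
             + 0.20 * slowness
             + 0.15 * interact
             + 0.25 * gaze_share)
    return round(score * 100, 1)

# Two impressions that both count as "viewable" can score very differently
print(attention_score(1.2, 2500, 0, 0.3))   # fast scroll-past
print(attention_score(8.0, 400, 1, 0.8))    # engaged read
```

The point of the sketch is the one made in the next paragraph: once signals are blended like this, the composite number hides which input is doing the work.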
None of these approaches are perfect. Eye-tracking panels are small and may not reflect the full range of viewing contexts. Interaction signals conflate accidental engagement with intentional engagement. Composite scores obscure which inputs are driving the number. These are real limitations, and any media planner treating an attention score as a hard fact rather than a directional indicator is making the same mistake the industry made with viewability.
That said, directional indicators are genuinely useful. When I was managing large-scale media programmes across multiple verticals, the biggest planning errors rarely came from getting the precise number wrong. They came from planning teams using the wrong signal entirely: optimising for cost-per-click on a brand awareness campaign, for example, or treating reach as a success metric when the brief was conversion. Attention metrics, used honestly, point planning teams toward the right questions even when they cannot answer those questions with precision.
The distinction between passive exposure and active processing is the useful insight here. A user who scrolls past a mid-feed ad in 0.3 seconds on a mobile device has technically been exposed. A user who pauses, reads the copy, and moves on has had a materially different experience. Attention metrics try to capture that difference. They do not always succeed, but the attempt is more commercially honest than pretending the distinction does not exist.
For teams working through the broader mechanics of how planning tools and measurement frameworks fit together, the Marketing Operations hub covers the operational infrastructure behind effective marketing programmes.
How Attention Data Changes Channel Allocation Decisions
The most immediate application of attention metrics in media planning is channel and format selection. When you have attention data across placements, you can compare the quality of exposure, not just the cost of it. Cost per thousand impressions (CPM) becomes less useful as a primary buying metric once you know that two placements with identical CPMs can deliver radically different levels of engagement.
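To make that arithmetic concrete, here is a small illustrative calculation of cost per thousand attentive impressions, what some practitioners call an attentive CPM. The rates, prices, and the attention threshold implied by the attentive rate are all hypothetical.

```python
# A minimal sketch of attention-adjusted cost comparison. "Attentive CPM"
# here means cost per 1,000 impressions that cleared some attention
# threshold; figures are invented, not industry benchmarks.

def attentive_cpm(cpm, attentive_rate):
    """cpm: cost per 1,000 impressions; attentive_rate: 0-1 share of
    impressions that cleared the attention threshold."""
    if attentive_rate <= 0:
        return float("inf")  # no attentive impressions: infinitely expensive
    return cpm / attentive_rate

# Two placements with identical CPMs, very different engagement
feed = attentive_cpm(cpm=8.00, attentive_rate=0.12)
editorial = attentive_cpm(cpm=8.00, attentive_rate=0.45)

print(f"Feed: £{feed:.2f} per 1,000 attentive impressions")
print(f"Editorial: £{editorial:.2f} per 1,000 attentive impressions")
```

On these invented numbers, the two placements cost the same per exposure but differ by a factor of nearly four per attentive exposure, which is the comparison the paragraph above is describing.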
Attention data tends to reveal a few consistent patterns. Long-form editorial environments generally produce higher attention than social feed placements. Audio with visual elements tends to outperform display. Placements where the ad is contextually relevant to the surrounding content perform better than run-of-network buys. None of this is surprising to experienced practitioners, but having data behind it changes the conversation with clients and with media owners.
When I was building out the media capability at iProspect, one of the recurring tensions was between what the numbers said and what clients wanted to believe about their media mix. Clients often had legacy commitments to channels that looked efficient on a cost-per-exposure basis but were delivering very little in terms of measurable commercial outcomes. Introducing attention data into those conversations gave us a more credible basis for recommending change than simply arguing from instinct.
The same logic applies to format decisions. Skippable video, for example, produces attention data that is quite different from non-skippable pre-roll. If a significant proportion of users skip at the first opportunity, the attention delivered by that placement is concentrated in the first few seconds. That has direct implications for creative, for message hierarchy, and for whether the format is appropriate for the campaign objective at all.
Effective media planning has always been partly about matching the right environment to the right message. Attention data gives that matching process a more empirical foundation. It does not replace editorial judgement, but it reduces the amount of planning that happens on assumption alone.
The Relationship Between Attention and Creative Quality
One thing attention metrics make visible that viewability never could is the interaction between placement quality and creative quality. A strong creative in a low-attention environment will underperform. A weak creative in a high-attention environment will also underperform. Attention data, when cut by creative variant, can help diagnose which problem you are dealing with.
This matters because media planning and creative development are too often treated as separate workstreams. The media team buys the space. The creative team fills it. Neither team has full visibility into how the other’s decisions affect outcomes. Attention metrics create a shared signal that both teams can use to have a more integrated conversation.
The Effie judging process gave me a useful perspective on this. The entries that consistently performed well in effectiveness categories were almost always ones where media and creative decisions had been made together, with a shared understanding of the environment the audience would encounter the work in. The entries that struggled were often technically competent on both dimensions but disconnected in execution. Attention data is one mechanism for forcing that integration earlier in the planning process.
There is also a creative optimisation application. If attention data shows that a particular ad variant is generating significantly more dwell time than others in the same placement, that is useful information for both the current campaign and future creative development. It is not a substitute for brand tracking or sales data, but it is a faster feedback signal than waiting for quarterly brand health results.
Privacy considerations shape how some of this data can be collected and used. Mailchimp’s privacy guide for marketers covers the compliance landscape that affects data collection across channels, which is relevant context when building measurement frameworks that include behavioural signals.
Attention Metrics as a Planning Input, Not a Campaign KPI
There is a version of attention metric adoption that goes wrong, and it is worth naming it directly. When attention scores become primary campaign KPIs, they create the same problem as any other proxy metric: teams optimise for the number rather than the outcome the number is supposed to represent.
Attention is useful as a planning input. It helps you select better environments, negotiate more intelligently with media owners, and make more informed decisions about format and creative. It is a diagnostic tool. When it becomes the primary success metric, you start making decisions that improve the attention score without necessarily improving the commercial result. That is Goodhart’s Law applied to media planning: when a measure becomes a target, it ceases to be a good measure.
The right framework is to treat attention data as one layer in a measurement stack. Exposure metrics tell you whether the ad had the opportunity to be seen. Attention metrics tell you whether it was likely processed. Brand tracking tells you whether it shifted perception. Sales and commercial data tell you whether it drove behaviour. No single layer tells the full story. Each layer adds diagnostic value when interpreted in context.
I have seen this play out in practice. Campaigns that looked strong on attention metrics but failed to move brand metrics were often running in environments where the audience was engaged but not in the right frame of mind for the category. Context matters in ways that attention scores do not always capture. A high-attention placement in a news environment during a difficult news cycle is not necessarily the right place to run a light-hearted consumer campaign, regardless of what the attention data says about the quality of the placement in isolation.
This connects to a broader point about how measurement frameworks should be structured. Forrester’s work on marketing planning transformation has long argued for moving away from activity metrics toward outcome metrics. Attention sits somewhere between the two, which is why its role in the measurement stack needs to be defined carefully before it is introduced into a planning process.
Where Attention Metrics Have Real Operational Value
Beyond channel selection and creative diagnostics, attention data has practical value in a few specific operational contexts.
The first is media owner negotiations. If you have attention data showing that a publisher’s placements consistently underperform the category benchmark, you have a basis for renegotiating rates or requesting better positions. Most media owners will not volunteer this information. Having independent measurement gives buyers more leverage in conversations that have historically been conducted on the seller’s terms.
The second is budget allocation across a mixed media plan. When you are running activity across multiple channels with different attention profiles, you can use attention data to weight your investment toward environments that are more likely to deliver genuine engagement. This is not a mechanical exercise, and it should not be treated as one, but it gives the allocation decision a more defensible empirical basis than gut feel or historical spend patterns.
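As a sketch of what “not a mechanical exercise” still allows, here is one simple way to weight a fixed budget by an attention-adjusted efficiency index while reserving a floor for every channel so nothing is zeroed out automatically. The channels, indices, CPMs, and floor share are invented for illustration.

```python
# Illustrative only: split a fixed budget across channels in proportion
# to an attention-adjusted efficiency index (attention per unit of cost),
# with a minimum floor share per channel. All figures are hypothetical.

def allocate_budget(total, channels, floor_share=0.05):
    """channels: {name: {"cpm": float, "attention_index": float}}.
    Efficiency = attention_index / cpm; the flexible portion of the
    budget is split proportionally after reserving each channel's floor."""
    floor = total * floor_share
    flexible = total - floor * len(channels)
    eff = {name: c["attention_index"] / c["cpm"] for name, c in channels.items()}
    eff_total = sum(eff.values())
    return {name: round(floor + flexible * e / eff_total, 2)
            for name, e in eff.items()}

plan = allocate_budget(100_000, {
    "social_feed":  {"cpm": 6.0,  "attention_index": 30},
    "editorial":    {"cpm": 12.0, "attention_index": 80},
    "online_video": {"cpm": 15.0, "attention_index": 65},
})
print(plan)
```

The floor parameter is where the judgement lives: legacy commitments, reach requirements, or strategic bets can all justify overriding what the index alone would recommend.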
The third is category benchmarking. Attention norms vary significantly by category, audience, and platform. A financial services brand running in a news environment will see different attention profiles than an FMCG brand running in a social feed. Having category-specific benchmarks rather than generic industry averages makes the data more actionable.
The fourth is frequency management. Attention data can inform decisions about optimal frequency by showing at what point repeated exposure starts producing diminishing returns in terms of engagement. This is a more nuanced input than the blunt frequency caps that most campaigns apply by default.
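One hypothetical way to turn per-exposure attention data into a frequency recommendation is to stop where the marginal attentive seconds of the next exposure fall below a chosen threshold. Every number here, including the threshold, is invented to show the shape of the calculation.

```python
# Hypothetical frequency analysis: average attentive seconds delivered
# by the Nth exposure to the same user. The data and threshold are
# illustrative, not measured values.

attention_by_exposure = [2.4, 2.1, 1.6, 1.0, 0.5, 0.2, 0.1]

def suggested_cap(per_exposure_attention, min_marginal=0.8):
    """Return the highest frequency at which each additional exposure
    still adds at least `min_marginal` attentive seconds."""
    cap = 0
    for gain in per_exposure_attention:
        if gain < min_marginal:
            break
        cap += 1
    return cap

print(suggested_cap(attention_by_exposure))  # 4 exposures clear the 0.8s bar
```

On this invented curve the fifth exposure adds little, which is the kind of diminishing-returns signal a blunt default cap would never surface.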
Teams building out their marketing operations infrastructure, including how measurement frameworks connect to planning and buying processes, will find relevant context across the Marketing Operations section of this site.
The Honest Limitations
Attention metrics are not a solution to the fundamental measurement problem in marketing. They are a more honest proxy than viewability, but they are still a proxy. The gap between attention and commercial outcome is real and not always predictable.
The methodological inconsistency across vendors is also a genuine problem. Different attention measurement providers use different inputs, different weighting approaches, and different scoring methodologies. Comparing attention scores across vendors without understanding the underlying methodology is unreliable. The industry has not yet converged on a standard, which means buyers need to understand what they are buying when they invest in attention measurement.
There is also the question of what attention measurement does not capture. It does not capture emotional response. It does not capture whether the message was understood. It does not capture whether the brand was correctly attributed. A user can pay sustained attention to an ad and still walk away with no useful memory of the brand. Attention is a necessary condition for effectiveness, not a sufficient one.
Privacy regulation adds another layer of complexity. As GDPR and related frameworks continue to shape what behavioural data can be collected and how it can be used, some of the more granular attention measurement approaches face compliance constraints. This is not a reason to avoid attention metrics, but it is a reason to understand the data collection methodology behind any measurement product before embedding it in a planning process.
The broader point is that attention metrics are a genuine improvement over what came before, and they are worth incorporating into media planning, but they should be treated with the same critical scrutiny as any other measurement tool. The industry has a habit of cycling through new metrics with excessive enthusiasm and then abandoning them when they fail to deliver certainty. Attention metrics will not deliver certainty. They will deliver better-informed decisions, which is a more realistic standard.
Privacy considerations also affect how attention and engagement data can be collected across SMS, email, and digital channels. Mailchimp’s SMS privacy policy guidance is a useful reference for teams building compliant measurement frameworks across owned and paid channels.
Bringing Attention Data Into Your Planning Process
The practical question for most planning teams is not whether attention metrics are theoretically valid. It is how to incorporate them into an existing planning process without creating unnecessary complexity or over-investing in measurement infrastructure that does not improve decisions.
A reasonable starting point is to use attention benchmarks at the channel and format selection stage, before spend is committed. Most of the major attention measurement providers publish category-level benchmarks that are sufficient for this purpose without requiring bespoke measurement on every campaign.
For larger campaigns or those with significant brand-building objectives, investing in campaign-level attention measurement makes sense. The data is most useful when it can be cut by creative variant, placement type, and audience segment, which requires some upfront planning about how the measurement will be structured.
The team structure question matters too. Attention data is most useful when it is visible to both media planners and creative teams simultaneously. If it sits only in the media team’s reporting, the creative implications will be missed. Optimizely’s thinking on brand marketing team structure touches on how integrated measurement visibility affects the quality of planning decisions across functions.
Finally, be realistic about what the data will tell you. Attention metrics will help you make better channel and format decisions. They will give you a more credible basis for media owner negotiations. They will surface creative performance signals faster than brand tracking. They will not tell you whether your campaign will work. That still requires a clear brief, a coherent strategy, and creative work that is worth paying attention to in the first place.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
