Attention Metrics for Advertising: What They Measure and How to Start

Attention metrics measure whether an ad was actually seen and processed, not just whether it appeared on screen. Unlike viewability, which only confirms that pixels were present, attention data tells you something closer to what a human brain did with your creative in the moment it was served.

Getting started does not require a complete overhaul of your measurement stack. It requires understanding what attention metrics are, where they fit alongside your existing data, and how to act on them without overcorrecting on the basis of a single new signal.

Key Takeaways

  • Attention metrics sit above viewability and below brand recall in the measurement hierarchy. They fill a specific gap, not the whole gap.
  • Active attention seconds, eyes-on-screen time, and audibility are the three most commonly available attention signals, and they measure different things.
  • Attention data is most useful when layered against outcomes, not used as a standalone quality score.
  • You can start building attention intelligence without buying a dedicated platform, by using publisher-level attention data and creative diagnostics from your existing DSP.
  • High attention does not automatically mean high conversion. The relationship between attention and business outcomes depends heavily on category, creative, and audience context.

Why Viewability Was Never Enough

The advertising industry spent years fighting for viewability standards. The IAB definition (at least 50% of pixels in view for one second for display, two seconds for video) was a genuine improvement over served impressions. But it set a floor so low that most of the industry quietly knew it was not measuring much of anything useful.

I spent a significant chunk of my career running performance campaigns across multiple verticals. The viewability numbers on reports always looked fine. Respectable, even. And yet the relationship between those viewability scores and actual business outcomes was weak at best. You could have a campaign running at 85% viewability and see almost no brand recall lift and no meaningful lift in direct response either. The metric told you the ad existed. It did not tell you whether anyone noticed it.

Attention metrics attempt to answer the question viewability could not: was the ad actually processed? That is a harder question to answer, and the methodology for answering it varies considerably across vendors. But the direction of travel is right.

If you are already thinking carefully about how your analytics infrastructure is built, the broader context around marketing analytics and measurement is worth working through systematically. Attention data is one layer of a larger measurement picture.

What Attention Metrics Actually Measure

There is no single, universally agreed definition of an attention metric. Different vendors measure different things and call them all attention. Before you start using any of this data, you need to understand what is actually being captured.

The most common signals in the market right now fall into a few categories.

Active Attention Seconds

This measures the cumulative time a user is estimated to have been actively engaged with an ad, typically combining eye-tracking data (from panel studies) with proxy signals like scroll speed, cursor movement, tab focus, and interaction events. It is the signal most often used in attention-based buying models. Vendors like Adelaide, Lumen, and Playground xyz each have their own methodologies for producing this number.

Eyes-On-Screen Time

This is derived from eye-tracking panels, typically opt-in research participants whose gaze patterns are used to build predictive models at scale. It is more accurate than proxy-based attention but is always a modelled estimate when applied outside the panel. You are not measuring where individual users look. You are applying a probability model trained on panel data.

Audibility and Audio Attention

For video and audio formats, whether sound was on and at what volume is a separate signal worth tracking. An ad viewed in complete silence is a materially different experience from one heard at full volume. Most DSPs surface this as an audibility rate, but it is rarely used as a primary optimisation signal.

Interaction-Based Proxies

Hover time, scroll depth, time-in-view, and ad interaction rates are all used as proxies for attention by platforms that do not have access to eye-tracking data. These are the most widely available signals and the least reliable. They correlate with attention in some contexts and not at all in others. A long time-in-view on a slow-loading page tells you very little.

Understanding the difference between these signals matters before you act on any of them. Data-driven marketing only works when you know what your data actually represents.

Where Attention Fits in Your Measurement Stack

Attention metrics sit in the middle of the measurement hierarchy. Below them: served impressions and viewability. Above them: brand recall, consideration, and conversion. Attention is a leading indicator, not an outcome. That distinction matters enormously for how you use it.

When I was judging the Effie Awards, one of the things that separated strong entries from weak ones was the quality of the measurement chain. The best campaigns did not just show that awareness went up. They showed a coherent story: the media was delivered in environments where it was likely to be noticed, the creative was designed to perform in those environments, and the business outcome followed. Attention metrics, used well, are one link in that chain.

Used poorly, they become another vanity metric. I have seen attention scores used to justify campaigns that were not working on any commercially meaningful measure. High attention, no sales. The attention number became a defence rather than a diagnostic.

The right way to position attention in your stack is as a quality signal for media and creative, not as a substitute for outcome measurement. It tells you something about the conditions under which your message was delivered. It does not tell you whether the message worked.

How to Get Your First Attention Data Without a New Platform

You do not need to sign a contract with an attention measurement vendor on day one. There are several ways to start building attention intelligence using tools and data sources you likely already have access to.

Use Publisher-Level Attention Data

Several major publishers now provide attention data as part of their standard reporting. Some have integrated third-party attention measurement into their inventory. Ask your publisher partners directly what attention signals they can provide alongside standard delivery metrics. You may find you already have access to time-in-view, audibility rates, or interaction data that you have not been using.

Pull Creative Diagnostics from Your DSP

Most DSPs surface time-in-view and interaction rate at the creative level. These are imperfect proxies, but they are available now and cost nothing extra. Start by pulling these metrics by creative unit and placement type. Look for patterns: which formats hold attention longer, which placements produce higher interaction rates, which creative lengths perform differently across environments.

This is not sophisticated attention measurement. But it is the beginning of a discipline, and it is better than ignoring the signal entirely.
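As a concrete sketch of that first pull, here is how the pattern-finding might look against a hypothetical DSP export. The column names, placements, and figures are illustrative, not from any particular platform; the point is the impression-weighted roll-up by placement type:

```python
import pandas as pd

# Hypothetical creative-level export; real column names vary by DSP.
rows = [
    {"creative": "video_15s_a", "placement_type": "in-stream",
     "impressions": 120_000, "time_in_view_s": 9.2, "interactions": 840},
    {"creative": "video_15s_a", "placement_type": "in-feed",
     "impressions": 95_000, "time_in_view_s": 4.1, "interactions": 310},
    {"creative": "display_300x250", "placement_type": "in-feed",
     "impressions": 200_000, "time_in_view_s": 2.3, "interactions": 190},
]
df = pd.DataFrame(rows)

# Impression-weight time-in-view so large placements are not drowned out
# by small ones when rolling up to placement type.
df["tiv_x_imp"] = df["time_in_view_s"] * df["impressions"]
agg = df.groupby("placement_type")[["tiv_x_imp", "impressions", "interactions"]].sum()
agg["avg_time_in_view_s"] = agg["tiv_x_imp"] / agg["impressions"]
agg["interaction_rate"] = agg["interactions"] / agg["impressions"]

print(agg[["avg_time_in_view_s", "interaction_rate"]])
```

The same roll-up works by creative unit or by format; the useful part is comparing the weighted averages across environments rather than reading any single number in isolation.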

Run a Controlled Creative Test Against an Attention Baseline

If you want to understand how attention-optimised creative performs relative to your standard creative, run a structured A/B test with attention metrics as a primary measurement variable alongside your normal outcome metrics. Keep the media environment constant. Change only the creative. This gives you a clean read on whether attention-optimised creative actually produces better downstream results in your specific context.

I have seen this kind of test run well and run badly. The ones that ran badly usually changed too many variables at once and then attributed the outcome to attention when it could equally have been explained by audience differences or placement changes. Control the test properly or the results are not usable.
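For the clean read described above, the statistics can stay simple. A minimal sketch, assuming the only variable changed between cells is the creative and using invented conversion numbers, is a two-proportion z-test on the downstream outcome:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    two creative cells served into the same media environment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: standard creative (a) vs attention-optimised
# creative (b), same placements, same audience, same flight dates.
z, p = two_proportion_ztest(conv_a=420, n_a=100_000, conv_b=505, n_b=100_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significant difference here still only tells you the creative cells differ on the outcome; pairing it with the attention scores for each cell is what gives you the diagnostic the section describes.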

Request an Attention Audit from a Specialist Vendor

Several attention measurement companies offer audit products that apply their methodology retrospectively to a set of campaigns. This can give you an attention baseline without committing to an ongoing measurement contract. It is a reasonable first step if you want independent data rather than relying solely on publisher or DSP reporting.

How to Use Attention Data Once You Have It

Having attention data and knowing what to do with it are different things. Here is how to make it operationally useful.

Score Your Media Environments

Use attention data to build a quality score for each media environment in your plan. Placements with consistently low attention scores are candidates for reallocation, regardless of their cost efficiency on a CPM basis. A cheap impression that nobody processes is not efficient. It is waste with a low unit cost.

Early in my agency career, we were obsessed with CPM efficiency. We drove CPMs down and wondered why brand metrics were not moving. The problem was not the cost. It was the quality. Cheap inventory tends to be cheap for a reason, and attention data makes that visible in a way that viewability never quite did.
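One way to make "waste with a low unit cost" concrete is to re-price inventory in attention terms rather than impression terms. The sketch below uses hypothetical placements and a cost-per-thousand-attentive-seconds calculation; the names and figures are illustrative:

```python
# Hypothetical placement data: CPM in currency units, plus mean attention
# seconds per impression from whatever attention signal is available.
placements = [
    {"placement": "premium_editorial_video", "cpm": 18.00, "attn_s_per_imp": 3.8},
    {"placement": "news_aggregator_display", "cpm": 2.50, "attn_s_per_imp": 0.4},
    {"placement": "social_in_feed_video", "cpm": 7.00, "attn_s_per_imp": 1.6},
]

# Cost per 1,000 attentive seconds: CPM buys 1,000 impressions, which buy
# (attn_s_per_imp * 1,000) attentive seconds, so the ratio reduces to
# cpm / attn_s_per_imp. A cheap CPM with almost no attention can be the
# most expensive inventory on this view.
for p in placements:
    p["cost_per_1k_attn_s"] = p["cpm"] / p["attn_s_per_imp"]

for p in sorted(placements, key=lambda x: x["cost_per_1k_attn_s"]):
    print(f'{p["placement"]}: {p["cost_per_1k_attn_s"]:.2f} per 1,000 attentive seconds')
```

In this invented example the cheapest CPM line turns out to be the most expensive attention, which is exactly the pattern the raw CPM view hides.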

Feed Attention Scores Back into Creative Development

If you are running multiple creative variants, attention data should inform which executions get scaled. Not as the only signal, but as one input alongside click-through rate, conversion rate, and brand recall where available. Creative that generates high attention but low conversion tells you something different from creative that generates low attention and low conversion. The diagnostic is different and so is the fix.

Build Attention into Your Campaign Briefs

If you are briefing media and creative teams separately, attention is a useful shared language. The media team can brief against attention-weighted environments. The creative team can brief against the attention characteristics of those environments: how long the average attention window is, whether audio is typically on, what the typical scroll speed is. This alignment between media and creative is where attention data creates the most practical value.

Most of the campaign failures I have seen over the years came from exactly this disconnect: creative built for a viewing experience that the media plan never actually delivered. Attention data, shared between both teams, closes that gap.

Do Not Optimise to Attention Alone

This is the most important operational caution. Attention metrics are a quality signal, not an outcome metric. If you start optimising media buying purely to maximise attention scores, you will likely end up in a small set of high-attention environments that may not be where your audience actually is, or where your category has any relevance. Optimise to attention as a quality floor, not as a ceiling.
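The floor-not-ceiling rule can be expressed as a simple planning filter: screen out placements below an attention threshold, then rank the survivors on the outcome metric, not on attention. This is an illustrative sketch with a made-up threshold and placement data:

```python
# Assumed minimum acceptable attention seconds; in practice this threshold
# comes from your own category and format baselines, not a universal norm.
ATTENTION_FLOOR_S = 1.0

placements = [
    {"placement": "a", "attn_s": 3.2, "cpa": 41.0},
    {"placement": "b", "attn_s": 0.6, "cpa": 28.0},  # cheap conversions, but below the floor
    {"placement": "c", "attn_s": 1.4, "cpa": 35.0},
]

# Floor, not ceiling: drop placements below the attention floor, then rank
# the survivors by cost per acquisition rather than by attention score.
eligible = [p for p in placements if p["attn_s"] >= ATTENTION_FLOOR_S]
plan = sorted(eligible, key=lambda p: p["cpa"])
print([p["placement"] for p in plan])  # ["c", "a"]
```

Note that placement "c" outranks the higher-attention "a" once the floor is applied, because past the floor the outcome metric decides, which is the behaviour the rule is meant to produce.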

The parallel with other metrics is instructive. Conversion tracking has been available in paid search for a long time, as this early coverage of Google’s conversion tracking tools shows. And yet over-optimising to last-click conversions created its own set of distortions. Any single metric, including attention, will be gamed or misapplied if it becomes the only thing you optimise to.

The Relationship Between Attention and Business Outcomes

There is a genuine and growing body of evidence that attention correlates with brand recall and, in some categories, with purchase intent. The relationship is not linear and it is not universal. Category matters. Creative quality matters. The role of the ad in the consumer experience matters.

What the evidence does not support is the claim that maximising attention will automatically maximise business outcomes. Attention is a necessary condition for advertising to work. It is not a sufficient one. You still need the right message, in the right context, for the right audience, at the right moment in their decision process.

I have seen this confusion play out in pitch presentations more times than I can count. A vendor shows a correlation between attention scores and sales uplift from a handful of case studies and presents it as proof that their attention product drives sales. The causality is murkier than that. High-quality creative in high-quality environments tends to generate both high attention and better outcomes. But you cannot isolate attention as the causal variable without controlling for creative quality and environment quality separately.

This is not an argument against using attention metrics. It is an argument for using them with appropriate scepticism, which is the right posture for any metric. The broader discipline of content and campaign metrics has the same challenge: correlation between metrics and outcomes is common, causality is harder to establish.

Building Attention Into Your Reporting Without Overcomplicating It

One of the mistakes I see teams make when they adopt a new metric is building it into every report immediately, at every level of granularity, before they understand what it means in their specific context. Attention data added to a report before anyone knows how to interpret it just adds noise.

Start with a single view: attention quality by placement type, updated monthly. Track it alongside your existing outcome metrics for three to six months. Look for patterns. Do higher-attention placements produce better conversion rates in your campaigns? Does creative with higher attention scores produce better recall in your brand tracking? Build the evidence base in your own context before making structural changes to your media plan on the basis of attention data.

When the pattern is clear, then build it into your standard reporting. Add it to your campaign dashboards. Include it in your agency briefs. Make it a standard question in your creative reviews. But earn that integration through evidence, not through enthusiasm for a new metric.

If you are thinking about how attention data fits into a broader analytics and reporting framework, the marketing analytics hub covers the infrastructure questions that sit underneath this kind of measurement work.

Common Mistakes When Starting With Attention Metrics

A few patterns come up repeatedly when teams start using attention data for the first time.

The first is treating attention scores as comparable across vendors. They are not. Different vendors use different methodologies, different panel compositions, and different proxy signals. An attention score from one vendor cannot be directly compared to an attention score from another. If you switch vendors, you lose your historical baseline.

The second is using attention data to justify decisions that have already been made for other reasons. I have seen attention scores used post-hoc to defend a media plan that was built on relationships and rate card negotiations, with the attention data selected to support the conclusion rather than inform it. That is not measurement. That is theatre.

The third is applying attention benchmarks from one category or format to another without adjustment. Attention norms for video in a premium editorial environment are not the same as attention norms for display in a news aggregator. Use category-specific and format-specific benchmarks where they are available.

The fourth is ignoring the creative side of the equation entirely. Attention data is often discussed as a media quality signal, but creative is at least as important in determining whether an ad gets noticed. A strong creative in a low-attention environment will often outperform a weak creative in a high-attention environment. Media and creative need to be evaluated together.

For teams thinking about how all of this connects to email and owned channel measurement, HubSpot’s email marketing reporting guidance and this breakdown of email marketing metrics are worth reviewing alongside your paid media attention work. Attention is not a concept unique to paid advertising; it applies anywhere you are competing for someone’s time.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What are attention metrics in advertising?

Attention metrics measure whether an ad was actually seen and cognitively processed, not just whether it appeared on screen. They go beyond viewability by capturing signals like eyes-on-screen time, active attention seconds, audibility, and interaction-based proxies. Different vendors define and measure attention differently, so it is important to understand the methodology behind any attention data you use.

How is attention different from viewability?

Viewability confirms that an ad’s pixels were present on screen for a minimum threshold of time. It says nothing about whether a human being noticed or processed the ad. Attention metrics attempt to measure actual human engagement with the ad, using eye-tracking data, panel research, and behavioural proxy signals. Viewability is a delivery confirmation. Attention is a quality signal.

Do I need a specialist vendor to start using attention metrics?

No. You can start building attention intelligence using data you already have access to. Most DSPs surface time-in-view and interaction rates at the creative and placement level. Many publishers provide attention data as part of standard reporting. A structured creative test using these proxy signals, combined with your existing outcome metrics, is a reasonable starting point before committing to a dedicated attention measurement platform.

Does high attention mean better advertising results?

Not automatically. Attention is a necessary condition for advertising to work, but it is not sufficient on its own. Creative quality, message relevance, audience context, and where someone is in their decision process all affect whether attention translates into recall, consideration, or purchase. High attention with a weak or irrelevant message will not produce strong business outcomes. Use attention as a quality floor for media, not as a substitute for outcome measurement.

Can attention scores from different vendors be compared?

No. Different vendors use different methodologies, panel compositions, and proxy signals to produce attention scores. A score from one vendor is not directly comparable to a score from another. If you switch vendors, you also lose your historical baseline for benchmarking. Choose a methodology and stick with it long enough to build a meaningful data set before drawing conclusions or making structural changes to your media plan.