Attention Scores in Ad Measurement: What the Numbers Tell You

Outcome-based attention scores are a method of measuring advertising effectiveness by connecting how much attention an ad receives to a downstream business result, whether that is a sale, a sign-up, or a shift in brand preference. Unlike viewability metrics, which only confirm that an ad appeared on screen, attention scores attempt to quantify whether a person actually processed the message and whether that processing influenced their behaviour. The companies building these tools are making a credible argument: that the industry has been measuring the wrong thing for too long.

They are not entirely wrong. But the leap from “this ad was noticed” to “this ad caused a purchase” is longer than most vendors will admit in a sales deck.

Key Takeaways

  • Attention scores measure cognitive engagement with an ad, not just whether pixels appeared on screen. That is a meaningful improvement over viewability, but it is not the same as proving business impact.
  • Outcome-based attention measurement connects attention data to downstream results. The methodology varies significantly between vendors, and those differences matter when you are making budget decisions.
  • The companies leading this space, including Lumen, Adelaide, and TVision, use different proxies for attention. Understanding what each one actually measures is more important than the headline score they produce.
  • Attention scores work best as a directional signal within a broader measurement framework, not as a standalone proof of ROI.
  • The risk is not that attention measurement is wrong. The risk is that marketers treat a better proxy as if it were the outcome itself.

Why Viewability Was Never Enough

When I was running an agency and managing large programmatic budgets, viewability was the metric everyone pointed to when a client asked whether their ads were working. An ad counted as viewable if 50% of its pixels were on screen for at least one second. For video, it was two seconds. That was the industry standard, and it was almost completely useless as a measure of anything that mattered.
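
To make that rule concrete, here is a minimal sketch of the viewability check in Python, using the display and video thresholds described above:

```python
def is_viewable(pixels_in_view_pct: float, seconds_in_view: float,
                is_video: bool = False) -> bool:
    """The viewability rule described above: 50% of pixels on screen
    for at least 1 second (display) or 2 seconds (video)."""
    required_seconds = 2.0 if is_video else 1.0
    return pixels_in_view_pct >= 0.5 and seconds_in_view >= required_seconds

# A display ad with 60% of its pixels on screen for 1.2 seconds counts
# as viewable even if nobody looked at it, which is the whole problem.
print(is_viewable(0.6, 1.2))              # True
print(is_viewable(0.6, 1.2, is_video=True))  # False: video needs 2 seconds
```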

A viewable impression tells you that an ad had the theoretical opportunity to be seen. It says nothing about whether anyone looked at it, processed it, or did anything differently because of it. We were charging clients for opportunity, not outcome. The industry knew this and largely chose not to make it a problem.

Attention measurement emerged from that gap. The argument is straightforward: if we can measure how much cognitive engagement an ad generates, we have a better leading indicator of whether it will drive results. And if we can then correlate that engagement with actual outcomes, we have something genuinely useful.

If you are building out your wider analytics framework, the Marketing Analytics hub on The Marketing Juice covers the broader landscape of measurement tools, attribution models, and how to connect media data to business performance. Attention scores sit within that ecosystem, not above it.

What Outcome-Based Attention Scores Actually Measure

The phrase “outcome-based attention score” sounds precise. In practice, it covers a range of methodologies that measure different things and make different assumptions about the relationship between attention and results.

At the input end, attention measurement typically uses one or more of the following signals: eye-tracking data (either from panels or device cameras), scroll behaviour, cursor movement, time-in-view beyond the viewability threshold, audio engagement, and screen real estate relative to other content. Some vendors use passive device signals. Others recruit panels and use biometric data. A few use machine learning models trained on panel data and applied at scale to predict attention across placements where direct measurement is not possible.
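
To illustrate how a model-based vendor might apply panel training at scale, here is a hypothetical sketch. The signal names and weights are illustrative assumptions of mine, not any vendor's actual model:

```python
# Hypothetical sketch: predicting attentive seconds from placement
# signals, the way a panel-trained model might be applied across
# inventory that cannot be directly measured. Weights are invented.
from dataclasses import dataclass

@dataclass
class PlacementSignals:
    time_in_view_s: float   # seconds in view beyond the viewability threshold
    scroll_velocity: float  # px/s while the ad was in view (slower = more reading)
    screen_share: float     # fraction of the viewport the ad occupies
    audio_on: bool          # audible playback, for video formats

def predicted_attention_seconds(s: PlacementSignals) -> float:
    """Toy linear model standing in for a panel-trained predictor."""
    score = (0.8 * min(s.time_in_view_s, 10.0)
             - 0.002 * s.scroll_velocity
             + 4.0 * s.screen_share
             + (1.5 if s.audio_on else 0.0))
    return max(score, 0.0)

print(predicted_attention_seconds(
    PlacementSignals(time_in_view_s=3.2, scroll_velocity=120,
                     screen_share=0.25, audio_on=False)))  # ~3.3
```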

At the output end, the “outcome” in outcome-based attention scoring is usually one of three things: brand recall (measured via survey), purchase intent (also survey-based), or actual conversion data matched through clean rooms or identity graphs. The quality of the outcome measurement varies enormously. A brand recall survey run 24 hours after exposure is a very different thing from a matched sales dataset.

The connection between those two ends, between the attention signal and the outcome, is where the methodological differences between vendors become commercially significant. Some companies build predictive models that estimate the probability of a downstream outcome based on attention signals. Others run correlation analyses across campaigns and draw inferences. A few are building more rigorous causal frameworks using holdout testing. Those are not equivalent approaches, and they should not be evaluated as if they are.
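
As a point of reference, here is the core arithmetic of a holdout test, the most rigorous of those three approaches. The numbers are made up; the assumption that matters is a genuinely randomised holdout:

```python
# Minimal holdout lift calculation: compare outcomes for an exposed
# group against a randomly withheld control, rather than correlating
# attention with sales after the fact.
def holdout_lift(exposed_conv: int, exposed_n: int,
                 holdout_conv: int, holdout_n: int) -> float:
    """Relative lift in conversion rate, exposed vs holdout."""
    exposed_rate = exposed_conv / exposed_n
    holdout_rate = holdout_conv / holdout_n
    return (exposed_rate - holdout_rate) / holdout_rate

# 2.4% vs 2.0% conversion -> 20% lift attributable to exposure,
# assuming the holdout was genuinely randomised.
print(f"{holdout_lift(240, 10_000, 200, 10_000):.0%}")  # 20%
```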

The Companies Building This Space

A handful of companies have built credible positions in outcome-based attention measurement. They approach the problem differently, and understanding those differences is more useful than treating them as interchangeable.

Adelaide produces an Attention Unit (AU) score that aggregates signals across placements, including environment quality, ad format, position on page, and historical performance data. Their model is trained on eye-tracking panels and validated against brand and sales outcomes. The score is designed to be a pre-campaign planning tool as much as a post-campaign measurement one. You can use it to buy attention-weighted inventory before a campaign runs, which is a genuinely different use case from most measurement products.

Lumen Research is one of the longest-established players in this space, built on a large panel of eye-tracking participants. They measure active attention in seconds, distinguishing between whether an eye landed on an ad and how long it remained there. Their data is used extensively in the UK market and has been integrated into planning tools by several major media agencies. The limitation is panel size relative to the scale of programmatic buying, which means their scores are modelled across inventory rather than directly measured everywhere.

TVision focuses on television and connected TV, using opt-in camera technology to measure whether viewers are actually watching during ad breaks. This is a meaningful distinction from traditional TV measurement, which infers viewership from panel data and cannot distinguish between a viewer sitting in front of the screen and one who has left the room. Their outcome data connects attention during TV ad exposure to brand lift and, in some cases, to purchase behaviour through third-party data partnerships.

Amplified Intelligence, an Australian company with growing international presence, uses active attention measurement via mobile device cameras and has published peer-reviewed research connecting their attention metrics to memory encoding and brand outcomes. Their work on the distinction between active attention (eyes on screen, processing the message) and passive attention (eyes on screen but not processing) is one of the more rigorous contributions to this field.

DoubleVerify and IAS (Integral Ad Science) have both moved into attention measurement from their origins in brand safety and viewability verification. Their attention products are built on the scale of their existing measurement infrastructure, which covers a significant proportion of global digital advertising. The risk with both is that their attention scores lean heavily on proxy signals rather than direct eye-tracking, which means they are measuring conditions likely to produce attention rather than attention itself.

The Proxy Problem and Why It Matters

I judged the Effie Awards for a period, which meant reviewing a lot of case studies that claimed to prove marketing effectiveness. One thing you notice quickly is how often a good proxy gets promoted to the status of the thing itself. Share of voice becomes a proxy for brand health. Brand health becomes a proxy for commercial performance. Each step in that chain introduces error, and by the time you are three proxies deep, the connection to actual business outcome is tenuous at best.

Attention scores carry the same risk. Time-in-view is a proxy for attention. Attention is a proxy for memory encoding. Memory encoding is a proxy for purchase intent. Purchase intent is a proxy for purchase. Each of those relationships is real but probabilistic, and the probability degrades at each step.
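
To put rough numbers on that degradation: under the simplifying assumption that the chain behaves like a sequence of standardised variables whose correlations multiply, even strong individual links collapse end to end. The correlations below are illustrative, not measured:

```python
# Illustrative arithmetic only: if each link in the proxy chain were a
# simple Markov chain of standardised variables, correlations would
# multiply, and strong individual links still decay quickly end to end.
links = {
    "time-in-view -> attention":  0.7,
    "attention -> memory":        0.6,
    "memory -> purchase intent":  0.6,
    "intent -> purchase":         0.5,
}

end_to_end = 1.0
for link, r in links.items():
    end_to_end *= r
    print(f"{link}: r = {r}")

print(f"implied end-to-end correlation: {end_to_end:.2f}")  # ~0.13
```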

This does not make attention scores useless. It makes them a better proxy than viewability, which is worth something. But it does mean that a high attention score on a campaign does not guarantee commercial results, and a low attention score does not always explain poor sales performance. There are too many other variables in the chain.

The vendors who are honest about this are the ones worth working with. The ones who present their attention score as a direct measure of ROI are selling a story that the methodology does not support.

How to Use Attention Data Without Overstating It

When I was growing the agency and we started taking performance measurement seriously, the discipline that made the biggest difference was not finding better tools. It was being clearer about what each tool was actually measuring and what decisions it was qualified to inform. Attention data fits that framework.

There are specific decisions where attention scores add genuine value. Creative evaluation is one of them. If you are running multiple creative executions and one consistently generates higher attention scores across placements and audiences, that is useful directional information. It does not prove that the higher-attention creative will drive more sales, but it gives you a defensible basis for prioritising it.
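
A minimal sketch of that comparison, with invented data, to show the shape of the decision:

```python
# Directional creative comparison: mean attentive seconds per creative
# across placements. This informs prioritisation; it does not prove a
# sales effect. Observations are illustrative.
from statistics import mean

scores = {
    "creative_A": [1.9, 2.3, 2.1, 2.6, 2.0],
    "creative_B": [1.2, 1.5, 1.1, 1.7, 1.4],
}
for name, obs in sorted(scores.items(), key=lambda kv: -mean(kv[1])):
    print(f"{name}: {mean(obs):.2f} attentive s (n={len(obs)})")
```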

Media planning is another. If attention data shows that a particular format or placement consistently underperforms on attention relative to its cost, that is a rational basis for reallocating budget. You are not optimising for attention as an end in itself. You are removing inventory that is unlikely to generate the cognitive engagement needed for any downstream effect.
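
Here is a sketch of that reallocation logic: rank placements by attentive seconds bought per dollar and flag the outliers. Placement names and numbers are invented for illustration:

```python
# Attention-per-cost ranking: flag placements well below the campaign
# average so the budget conversation starts from the data.
placements = [
    {"name": "premium_display_A", "attn_seconds": 1.1, "cpm": 12.00},
    {"name": "midtier_display_B", "attn_seconds": 1.4, "cpm": 4.50},
    {"name": "video_preroll_C",   "attn_seconds": 3.8, "cpm": 18.00},
]

for p in placements:
    # attentive seconds bought per dollar (CPM prices 1,000 impressions)
    p["attn_per_dollar"] = (p["attn_seconds"] * 1000) / p["cpm"]

avg = sum(p["attn_per_dollar"] for p in placements) / len(placements)
for p in sorted(placements, key=lambda p: p["attn_per_dollar"]):
    flag = "  <- review" if p["attn_per_dollar"] < 0.7 * avg else ""
    print(f'{p["name"]}: {p["attn_per_dollar"]:.0f} attentive s per ${flag}')
```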

Where attention scores are less useful is as a primary success metric for a campaign. If your client or your board asks whether the campaign worked, “we achieved above-average attention scores” is not a sufficient answer. It is a useful supporting data point within a broader measurement framework that includes actual business outcomes.

Connecting your media measurement to analytics platforms is part of making this work in practice. Proper UTM tracking and campaign tagging in Google Analytics allows you to link attention-weighted media buys to on-site behaviour, which gets you closer to the outcome side of the equation. And if you are running tests across creative or placements, structured A/B testing in GA4 gives you a framework for connecting attention differences to measurable downstream differences.
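
A sketch of a consistent tagging convention, assuming you encode the placement's attention tier in utm_content. That taxonomy is one possible choice of mine, not a GA4 requirement:

```python
# Consistent UTM tagging so attention-weighted buys can be matched to
# on-site behaviour in the analytics layer.
from urllib.parse import urlencode

def tag_url(base: str, source: str, campaign: str, attention_tier: str) -> str:
    params = {
        "utm_source": source,
        "utm_medium": "display",
        "utm_campaign": campaign,
        "utm_content": f"attn-{attention_tier}",  # high/medium/low placement tier
    }
    return f"{base}?{urlencode(params)}"

print(tag_url("https://example.com/offer", "dsp_partner", "q3_launch", "high"))
```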

The Incentive Problem in Attention Measurement

There is a structural issue in this market that does not get discussed enough. Many of the companies selling attention measurement are also selling media, or selling tools to optimise media buying. That creates an incentive to define attention in a way that makes their inventory or their platform look favourable.

This is not a conspiracy. It is a normal feature of markets where the measurer has a commercial interest in the measurement. It is the same reason you should treat any attribution model built by a platform as one that will tend to attribute value back to that platform. The incentives shape the methodology, even when the people building it are acting in good faith.

The practical implication is that you should prefer attention measurement vendors who are independent of media buying, and you should look for vendors who publish their methodology in enough detail that it can be evaluated externally. Peer-reviewed research is a higher bar than a white paper, and a white paper is a higher bar than a case study on the vendor’s own website.

Understanding how to evaluate the credibility of measurement data is a core analytics skill. The distinction between marketing analytics and web analytics is relevant here: web analytics tells you what happened on your site, marketing analytics tells you whether your marketing caused it. Attention scores sit in the marketing analytics layer, and they need to be evaluated with the same scepticism you would apply to any other marketing measurement claim.

Where Attention Measurement Fits in a Broader Measurement Stack

The most useful way to think about attention scores is as one layer in a measurement stack, not as a replacement for other layers. A complete measurement framework for a media campaign might include: viewability and brand safety verification at the bottom, attention scores in the middle, and brand lift or sales data at the top. Each layer tells you something different, and the combination is more informative than any single metric.

In practice, most organisations do not have all three layers connected. They have viewability data from their DSP, attention scores from a third-party vendor if they are using one, and sales data sitting in a separate system that no one has successfully linked to media exposure. The gap between media measurement and business outcome measurement is where most of the value is being lost, and attention scores do not solve that gap on their own.

What they can do is improve the quality of the media layer. If you are buying programmatic inventory and you have no signal beyond viewability to distinguish between placements, attention data gives you a better basis for selection. That is a meaningful improvement. It is just not the same as measuring whether your advertising is working in a business sense.

For email campaigns, where attention is measured differently but the principle is the same, email marketing reporting frameworks offer a useful parallel: open rates are a proxy for attention, click rates are a proxy for engagement, and conversions are the outcome. The relationship between them is probabilistic, not deterministic. Attention scores in display and video advertising follow the same logic.
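
The same funnel, as arithmetic. The rates below are illustrative, but the shape is typical: each proxy stage filters hard, which is why the proxies cannot stand in for the outcome:

```python
# The proxy funnel in email terms. Rates are illustrative only.
sends = 100_000
open_rate, click_rate, conv_rate = 0.30, 0.08, 0.05  # each of previous stage

opens = sends * open_rate          # attention proxy
clicks = opens * click_rate        # engagement proxy
conversions = clicks * conv_rate   # the actual outcome

print(f"{sends:,} sends -> {opens:,.0f} opens -> "
      f"{clicks:,.0f} clicks -> {conversions:,.0f} conversions")
# 100,000 sends collapse to ~120 conversions: every stage is probabilistic.
```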

Content performance follows similar patterns. Content marketing metrics have long struggled with the same problem: measuring inputs and proxies rather than business outcomes. The attention measurement industry is grappling with a version of the same challenge, and the solutions being developed there are worth watching for what they might teach us about content measurement too.

If you want to go deeper on how measurement frameworks connect across channels and tools, the Marketing Analytics section of The Marketing Juice covers attribution models, GA4 implementation, and how to build a measurement approach that connects media activity to commercial outcomes rather than stopping at engagement proxies.

What Good Looks Like

The best use of attention measurement I have seen is to challenge assumptions about media quality rather than to validate existing buying decisions. When an agency uses attention data to tell a client that a significant portion of their premium display spend is generating below-average attention scores, and then restructures the buy accordingly and tracks the downstream effect, that is measurement being used properly.

The worst use is when attention scores become the new vanity metric: the thing that gets reported in the monthly deck because it looks good and no one can easily challenge it. I have sat in enough client meetings to know that a metric which is hard to understand is also hard to hold anyone accountable for. That is not always an accident.

The question worth asking of any attention measurement vendor is simple: show me a campaign where your attention scores predicted poor outcomes, and show me what happened when the client acted on that data. If they can only show you cases where high attention scores correlated with good results, they are showing you a selected sample. Every measurement methodology looks good when you only present the confirming cases.

GA4 has introduced new ways to think about engagement measurement on the web side. The engagement metrics in GA4 represent a shift away from bounce rate toward a more nuanced picture of whether users are actually engaging with content. That shift in web analytics is philosophically aligned with what attention measurement is trying to do in paid media: replace a blunt binary metric with something that better reflects actual human behaviour.
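
For reference, GA4's documented engaged-session rule reduces to a simple check, which makes the parallel with attention thresholds easy to see:

```python
# GA4's engaged-session definition, as documented: a session counts as
# engaged if it lasts at least 10 seconds, fires at least one key
# (conversion) event, or records two or more page/screen views.
def is_engaged_session(duration_s: float, key_events: int, views: int) -> bool:
    return duration_s >= 10 or key_events >= 1 or views >= 2

print(is_engaged_session(duration_s=4, key_events=0, views=1))  # False: a bounce
print(is_engaged_session(duration_s=4, key_events=0, views=2))  # True
```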

And if you are thinking about how attention data feeds into content decisions, using GA4 data to shape content strategy is a useful adjacent skill. Understanding which content formats and topics generate sustained engagement, rather than just traffic, is the content equivalent of attention measurement in paid media.

The Honest Assessment

Outcome-based attention scores represent a genuine step forward from viewability as a measure of media quality. The companies building this space are solving a real problem, and the methodology, where it is rigorous, produces data that is more useful than what came before.

But the gap between “this ad was noticed” and “this ad drove business results” remains large, and the measurement industry has a long history of selling proxies as if they were outcomes. The attention measurement market is young enough that the incentives have not fully calcified yet. That is worth something. It means there is still space for rigorous, independent methodology to win.

For most marketing teams, the practical question is not whether to use attention measurement. It is how to use it honestly: as a better signal for media quality decisions, as one input into a broader measurement framework, and as a useful challenge to assumptions about which placements and formats are actually working. Used that way, it earns its place in the stack. Used as a substitute for measuring actual business outcomes, it becomes another layer of comfortable noise.

Fix the measurement, and most of the hard decisions in marketing become clearer. That is as true of attention scores as it is of every other metric. The question is always whether you are measuring the thing, or measuring a shadow of the thing, and whether you are honest with yourself about the difference.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is an outcome-based attention score in advertising?
An outcome-based attention score is a metric that connects how much cognitive engagement an ad generates with a downstream business result, such as a purchase, brand recall, or shift in intent. It goes beyond viewability, which only confirms that an ad appeared on screen, by attempting to measure whether the ad was actually processed by the viewer and whether that processing influenced behaviour.
Which companies offer outcome-based attention measurement?
The leading companies in this space include Adelaide, Lumen Research, TVision, Amplified Intelligence, DoubleVerify, and Integral Ad Science. Each uses different methodologies: some rely on eye-tracking panels, others on device signals or machine learning models trained on panel data. The methodology differences are commercially significant and should be evaluated before selecting a vendor.
How is attention measurement different from viewability?
Viewability confirms that a minimum proportion of an ad’s pixels were on screen for a minimum amount of time. It measures opportunity, not engagement. Attention measurement attempts to determine whether a person actually looked at the ad and processed its content. The distinction matters because an ad can be viewable without being noticed, and viewability metrics have historically been used to justify spend that generated no meaningful engagement.
Can attention scores replace traditional ROI measurement?
No. Attention scores are a better proxy for media quality than viewability, but they do not replace measurement of actual business outcomes. The relationship between attention and purchase is probabilistic and involves multiple intermediate steps. Attention scores work best as one layer within a broader measurement framework that includes brand lift data, sales attribution, and business performance metrics.
What should I look for when evaluating an attention measurement vendor?
Look for vendors who publish their methodology transparently, who are independent of media buying (to avoid conflicts of interest), and who can show cases where their attention data predicted poor outcomes, not just cases where high scores correlated with good results. Peer-reviewed research is a higher bar than a vendor white paper. Ask specifically how the connection between attention signal and downstream outcome is established, and whether it uses correlation or a more rigorous causal framework.