Self-Serve TV Advertising: What the Measurement Tells You

Self-serve TV advertising platforms have made it possible for mid-market brands to run connected TV and linear campaigns without a media agency sitting in the middle. The measurement these platforms provide, however, is a different matter. Most of it is directionally useful, structurally limited, and far more optimistic than the underlying reality warrants.

Understanding what these platforms actually measure, where attribution breaks down, and how to build a more honest picture of TV’s contribution is the difference between spending confidently and spending on faith.

Key Takeaways

  • Self-serve TV platforms report on the audiences they reach, not the outcomes they cause. The gap between the two is where most measurement problems live.
  • Last-touch attribution systematically undervalues TV because TV rarely sits at the end of a conversion path. It works further up the funnel, where most attribution models stop looking.
  • Incrementality testing is the most reliable way to isolate TV’s true contribution, but it requires holding back audience segments and accepting short-term efficiency loss to get honest data.
  • Platform-reported metrics like view-through conversions and reach are useful for optimisation but should never be treated as proof of business impact.
  • The brands getting the most from self-serve TV are the ones treating it as a brand channel with performance accountability, not a performance channel with TV inventory.

I spent a significant portion of my agency career arguing about TV attribution with clients who had already made up their minds. The ones who believed TV worked could always find a metric to prove it. The ones who were sceptical could always find a gap in the data to justify cutting it. Neither position was especially rigorous. What most of them lacked was a framework for honest approximation, which is not the same as perfect measurement and is far more valuable than the false precision these platforms tend to sell.

What Self-Serve TV Platforms Actually Measure

Platforms like The Trade Desk, Amazon DSP, Roku OneView, and MNTN have made connected TV (CTV) inventory accessible to brands that previously could not afford the minimum commitments of a traditional TV buy. That accessibility is genuinely valuable. The measurement layer that comes with it is more complicated.

What these platforms measure well: impressions served, completion rates, frequency, reach across devices, and household-level targeting. These are real signals. A campaign that consistently achieves high completion rates at the right frequency, against a well-defined audience, is doing something right at the delivery level.

What they measure badly: causation. The core attribution problem with TV, whether connected or linear, is that exposure happens in a living room and conversion happens later, often on a different device, through a different channel, at a different time. The platform sees the impression. It does not see the customer’s subsequent search, the word-of-mouth recommendation, the in-store visit, or the direct navigation three weeks later. It connects dots that are not always connected, and it does so in a way that flatters its own contribution.

View-through attribution is the most common version of this problem. A user sees a CTV ad, does nothing immediately, then converts through a search click four days later. The TV platform counts that as a view-through conversion. The search platform counts it as a search conversion. The brand’s total attributed conversions are now higher than the actual number of conversions. This is not a bug in any individual platform’s reporting. It is the structural reality of multi-touch environments, and it is worth understanding before you build a business case on platform-reported numbers.
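The double-count described above is easy to make concrete with a few lines of arithmetic. Every number here is invented for illustration; the point is the structure, not the figures:

```python
# Hypothetical week of conversions (all numbers invented for illustration).
actual_conversions = 1_000  # deduplicated orders in the brand's own backend

# Each platform claims any conversion that falls inside its own
# attribution window, so the same order can be claimed more than once.
platform_claims = {
    "ctv_view_through": 280,   # saw a CTV ad within the view-through window
    "paid_search_click": 620,  # clicked a search ad before converting
    "paid_social": 310,        # engaged with a social ad in-window
}

total_attributed = sum(platform_claims.values())
overcount = total_attributed - actual_conversions

print(f"Attributed: {total_attributed}, actual: {actual_conversions}, "
      f"overcount: {overcount} ({overcount / actual_conversions:.0%})")
```

Summing platform-reported conversions across channels will almost always exceed the backend total, which is exactly why platform numbers cannot anchor a business case.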

If you want a grounded view of how attribution theory applies across channels, it is worth reading up on attribution theory in marketing before you commit to any measurement approach for TV.

Why Last-Touch Attribution Fails TV Campaigns

TV is a reach medium. It builds awareness, creates familiarity, and shifts brand preference over time. It is not designed to be the last thing someone sees before they convert. Measuring it with last-touch attribution is a category error, like judging a chef on how quickly they plate the food rather than how it tastes.

I saw this play out repeatedly when I was running agency teams managing large performance budgets. A brand would run a TV campaign, see their branded search volume increase, watch their direct traffic lift, notice their conversion rates on paid social improve. Then they would look at their last-touch attribution report and conclude that TV had contributed almost nothing, because it almost never appeared at the end of the conversion path. The budget would get cut. Branded search would decline. Paid social efficiency would drop. The connection would be missed entirely.

This is not a hypothetical scenario. It is one of the most common attribution errors in performance marketing, and self-serve TV platforms have not solved it. They have, in some cases, made it worse by offering view-through attribution windows that are generous enough to capture conversions that have nothing to do with the TV exposure.

The same structural problem affects other channels that work higher in the funnel. If you have thought carefully about inbound marketing ROI, you will recognise the pattern: channels that create demand are consistently undervalued by attribution models designed to credit demand capture.

Incrementality Testing: The Honest Alternative

Incrementality testing is the closest thing to a reliable answer when it comes to TV attribution. The principle is straightforward: split your audience into an exposed group and a holdout group, run your TV campaign to the exposed group, and measure the difference in outcomes between the two. The lift you observe in the exposed group, above what the holdout group does organically, is your incremental impact.

In practice, this is harder than it sounds. CTV platforms vary significantly in their ability to support clean holdout tests. Some offer geo-based testing, where you run campaigns in some markets and hold back others. Some offer household-level holdouts. Linear TV makes this even more difficult because you cannot precisely control who sees the ad.

Geo-based incrementality is the most accessible approach for most brands using self-serve platforms. You select matched market pairs, run your campaign in one set of markets, hold back the other, and measure the difference in web traffic, branded search volume, or direct conversions. The matching process matters enormously. Markets need to be comparable on baseline conversion rates, seasonality patterns, competitive activity, and demographic profile. Get the matching wrong and your results are meaningless.
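The lift read itself is simple once the hard work of market matching is done. Here is a minimal sketch, assuming you already have matched market pairs and a shared outcome metric such as weekly branded search sessions; the market names and figures are invented:

```python
# Outcome metric during the flight (e.g. weekly branded search sessions).
test_markets = {"leeds": 4_200, "bristol": 3_900}        # TV running
holdout_markets = {"sheffield": 3_600, "cardiff": 3_450}  # no TV

# Baselines from the matched pre-period, used to normalise each market
# against its own history rather than comparing raw volumes.
baselines = {"leeds": 3_800, "bristol": 3_700,
             "sheffield": 3_650, "cardiff": 3_500}

def indexed_change(markets):
    """Average change versus each market's own pre-period baseline."""
    changes = [markets[m] / baselines[m] - 1 for m in markets]
    return sum(changes) / len(changes)

# Incremental lift: what the exposed markets did, above what the
# holdout markets did organically over the same period.
lift = indexed_change(test_markets) - indexed_change(holdout_markets)
print(f"Incremental lift estimate: {lift:.1%}")
```

Normalising each market against its own baseline is what makes mismatched market sizes tolerable; what it cannot fix is markets that differ on seasonality or competitive pressure, which is why the matching step dominates the quality of the result.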

The other honest approach is media mix modelling (MMM). This uses historical spend and outcome data across all channels to estimate the contribution of each. It is not perfect, it requires significant data volume to be reliable, and it tells you what happened in the past rather than predicting the future with precision. But it does not double-count conversions, it accounts for diminishing returns, and it gives you a view of TV’s contribution that is not filtered through a platform with a commercial interest in making its own numbers look good.
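To make the MMM idea concrete, here is a deliberately simplified sketch: an ordinary least squares fit of weekly revenue against per-channel spend, on synthetic data generated for the example. A real model needs adstock (carryover) terms, saturation curves, and far more history than this:

```python
import numpy as np

# Synthetic data for illustration only: 52 weeks of spend per channel
# and weekly revenue, generated with known "true" contributions so the
# regression has something to recover.
rng = np.random.default_rng(0)
weeks = 52
tv = rng.uniform(10, 50, weeks)       # weekly TV spend
search = rng.uniform(5, 30, weeks)    # weekly search spend
social = rng.uniform(5, 25, weeks)    # weekly social spend
revenue = 200 + 2.0 * tv + 3.5 * search + 1.2 * social \
    + rng.normal(0, 5, weeks)         # baseline + channel effects + noise

# Fit: revenue ~ intercept + tv + search + social
X = np.column_stack([np.ones(weeks), tv, search, social])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
intercept, tv_coef, search_coef, social_coef = coefs
print(f"Estimated revenue per unit of TV spend: {tv_coef:.2f}")
```

The attraction of this family of models is visible even in the toy version: each conversion is explained once, across all channels at the same time, rather than claimed separately by each platform.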

The same logic applies across channels that resist simple measurement. Measuring affiliate marketing incrementality involves similar holdout methodology, and the lessons transfer directly to TV measurement.

Platform Metrics Worth Tracking and Ones Worth Ignoring

Not all platform metrics are equally useless. Some are genuinely informative for optimisation, even if they cannot prove business impact on their own.

Worth tracking: completion rate, frequency distribution, reach against your defined audience, and cost per completed view. These tell you whether your creative is holding attention, whether you are reaching the right people at the right frequency, and whether your inventory mix is efficient. A completion rate of 85% on a 30-second spot is a meaningful signal. A completion rate of 40% suggests poor creative, poor targeting, or both.
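If your platform exports impression-level logs with a hashed household identifier (a capability that varies by platform), reach and the frequency distribution are straightforward to compute yourself rather than taking the dashboard's word for it. A toy sketch with an invented log:

```python
from collections import Counter

# Illustrative impression log: one entry per impression, keyed by a
# hashed household ID (values invented).
impressions = ["hh1", "hh2", "hh1", "hh3", "hh2", "hh1", "hh4", "hh2"]

freq = Counter(impressions)              # impressions per household
reach = len(freq)                        # unique households reached
avg_frequency = len(impressions) / reach

# Frequency distribution: how many households saw the ad N times.
# This is where over-frequency problems hide behind a healthy average.
distribution = Counter(freq.values())

print(f"Reach: {reach}, average frequency: {avg_frequency:.1f}")
print(f"Distribution: {dict(sorted(distribution.items()))}")
```

The distribution matters more than the average: a mean frequency of 2.0 can mask a campaign where half the households saw the ad once and the other half saw it far too often.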

Worth treating with caution: view-through conversions, attributed revenue, and return on ad spend as reported by the platform. These numbers are not fabricated, but they are calculated in ways that systematically overstate TV’s contribution. Use them for internal optimisation, not for business cases.

Worth building yourself: branded search volume trends, direct traffic trends, and baseline conversion rate changes during and after campaign flights. These are proxy metrics, not proof, but they are independent of the platform’s reporting and therefore more credible. If your branded search volume lifts 20% in markets where you are running TV and stays flat in markets where you are not, that is a meaningful signal even without a clean holdout test.
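A pre-flight versus on-flight comparison of this kind is trivial to compute once you export the weekly figures. The numbers below are invented, and the result is a proxy signal, not causal proof:

```python
# Weekly organic branded-search sessions (invented figures), exported
# from your own analytics rather than the TV platform's reporting.
pre_flight = [3_100, 3_050, 3_200, 3_150]     # four weeks before TV
during_flight = [3_600, 3_750, 3_700, 3_800]  # four weeks on air

baseline = sum(pre_flight) / len(pre_flight)
flight_avg = sum(during_flight) / len(during_flight)
lift = flight_avg / baseline - 1

print(f"Branded search lift during flight: {lift:.1%}")
```

On its own this conflates TV with anything else that changed in those weeks; paired with flat numbers in markets where you are not running TV, it becomes a much harder signal to argue away.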

Forrester has written thoughtfully about the questions marketers should be asking to improve marketing measurement, and the core argument, that measurement frameworks need to be designed around business decisions rather than channel metrics, applies directly to how you approach TV reporting.

The Creative Variable That Most Measurement Frameworks Ignore

One thing I noticed consistently when judging at the Effie Awards was how rarely measurement frameworks accounted for creative quality as a variable. Two campaigns running on identical CTV inventory, with identical targeting and identical spend, can produce wildly different results if one creative is significantly stronger than the other. Most platform reporting treats creative as a constant and optimises around delivery variables. This is backwards.

Creative testing on CTV is more accessible than it used to be. Self-serve platforms allow you to run multiple creative variants against the same audience and measure completion rates, frequency tolerance, and downstream behaviour by variant. This is not the same as measuring business impact, but it is a meaningful input. A creative that holds attention at high frequency without generating fatigue is doing something that a weaker creative cannot do, regardless of how efficiently it is served.

If you want to understand how to structure creative testing rigorously, Semrush’s guide to A/B testing in GA4 covers the experimental framework principles that apply across channels, not just web testing.

The broader point is that measurement frameworks for TV need to account for creative as a variable, not treat it as a given. A weak creative running at perfect frequency is not a measurement problem. It is a creative problem that no attribution model will surface.

How GA4 Fits Into a TV Measurement Stack

GA4 is not a TV measurement tool. It was not designed to be, and trying to force it into that role creates more confusion than clarity. But it does play a useful supporting role in a broader TV measurement framework.

The most practical use of GA4 in a TV measurement context is as a source of truth for downstream behaviour. When you are running geo-based incrementality tests, GA4 gives you market-level traffic data, conversion data, and session quality data that is independent of the TV platform’s reporting. When you are looking at branded search lift, GA4 shows you the organic search sessions that follow a TV flight. When you want to understand whether TV is changing the behaviour of users who subsequently visit your site, GA4’s user experience data gives you a starting point.

The limitations are real. GA4 cannot tell you that a user saw your TV ad. It cannot connect a household-level TV impression to an individual session. It cannot attribute a conversion to a TV exposure without a view-through attribution window, which introduces the double-counting problem described earlier. Understanding what data GA4 goals cannot track is important context before you build TV reporting around it.

What GA4 can do is give you a clean, platform-independent view of what happened on your site before, during, and after a TV campaign. That is genuinely valuable, as long as you treat it as one input rather than a complete measurement solution.

Moz has a useful overview of what to know about GA4 for those who are still getting to grips with how the platform works and where its reporting has changed from Universal Analytics.

Building a Measurement Framework That Does Not Lie to You

Early in my career, I had a manager who used to say that the most dangerous number in marketing is the one that confirms what you already believed. TV measurement is particularly vulnerable to this. If you want TV to work, the platform will give you numbers that suggest it is working. If you want to cut TV, the attribution gaps will give you cover to do so. Neither position is honest measurement.

A measurement framework for self-serve TV that is worth trusting has a few characteristics. It separates delivery metrics from outcome metrics and does not conflate the two. It uses at least one source of evidence that is independent of the platform’s own reporting. It accounts for the time lag between TV exposure and conversion, which can be weeks rather than days. It tests incrementality at least annually to calibrate the platform’s view-through attribution against observed reality. And it is honest about what it cannot measure, rather than filling gaps with optimistic assumptions.

This is not a counsel of perfection. I have never worked with a brand that had perfect TV measurement, and I have managed campaigns across dozens of categories. What separates the brands that use TV well from the ones that waste money on it is not measurement sophistication. It is measurement honesty. They know what they know, they know what they do not know, and they make decisions accordingly.

The same discipline applies to newer channels where measurement is still developing. Measuring the effectiveness of AI avatars in marketing presents similar challenges: platform-reported metrics that are useful for optimisation but insufficient as proof of business impact, and a need for independent validation before building a business case.

Forrester’s perspective on aligning sales and marketing measurement is a useful reminder that TV measurement does not exist in isolation. The outcomes TV is supposed to drive, brand awareness, purchase intent, long-term revenue, need to connect to the metrics that the broader business cares about, not just the metrics the platform reports.

There is also a discipline question around what you measure across channels generally. Measuring generative engine optimisation campaigns faces a version of the same problem: new channels, limited standardised measurement, and a temptation to rely on proxy metrics that are easier to collect than they are meaningful to interpret. The solution in both cases is the same: define what business outcome you are trying to move, measure that outcome independently of the channel’s own reporting, and treat platform metrics as inputs to optimisation rather than proof of impact.

If you are building out a broader analytics capability alongside your TV measurement work, the Marketing Analytics hub covers the frameworks, tools, and thinking that sit underneath channel-specific measurement, including how to connect channel data to business outcomes in a way that holds up to scrutiny.

Mailchimp’s overview of marketing metrics is a useful reference for understanding how to categorise metrics by function, a discipline that matters as much for TV as it does for email or any other channel.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What metrics should I use to measure a self-serve TV advertising campaign?
Separate delivery metrics from outcome metrics. Delivery metrics, including completion rate, reach, and frequency distribution, tell you whether your campaign is being served effectively. Outcome metrics, including branded search lift, direct traffic changes, and conversion rate shifts in exposed versus unexposed markets, tell you whether it is driving business results. Platform-reported view-through conversions are useful for optimisation but should not be used as standalone proof of impact.
Why does last-touch attribution undervalue TV advertising?
TV operates at the awareness and consideration stages of the purchase journey. Customers who see a TV ad typically convert later, through search, direct navigation, or social channels. Last-touch attribution credits the final touchpoint before conversion and misses TV’s contribution entirely. This leads brands to systematically underestimate TV’s value and overinvest in channels that capture demand rather than create it.
How does incrementality testing work for connected TV campaigns?
Incrementality testing for CTV typically uses geo-based holdouts. You select matched market pairs with similar baseline conversion rates and demographic profiles, run your campaign in one set of markets, and hold back the other. The difference in outcomes between exposed and unexposed markets, measured through GA4, branded search data, or direct sales, gives you an estimate of TV’s incremental contribution. Some platforms also support household-level holdouts, which produce cleaner results when available.
Can GA4 be used to measure the impact of TV advertising?
GA4 cannot directly attribute conversions to TV exposure, but it plays a useful supporting role. It provides platform-independent data on branded search sessions, direct traffic, and conversion rates that you can compare across markets or time periods aligned with your TV flights. This makes it a valuable tool for geo-based incrementality testing and for tracking proxy signals of TV impact, as long as you treat it as one input rather than a complete measurement solution.
What is the difference between view-through attribution and incrementality in TV measurement?
View-through attribution credits a TV impression for a conversion that happens within a defined window after exposure, regardless of whether the TV ad caused the conversion. Incrementality testing measures whether conversions in an exposed group exceed conversions in a comparable unexposed group, isolating the causal effect of the campaign. View-through attribution is useful for optimisation within a platform. Incrementality testing is the more reliable method for understanding whether TV is actually driving additional business outcomes.