Cross-Platform Media Measurement: Agencies That Get It Right
The best agencies for cross-platform media measurement are those that treat measurement as a strategic discipline, not a reporting function. They combine rigorous methodology with commercial honesty, and they resist the temptation to present correlation as proof of causation. That combination is rarer than it should be.
What separates a genuinely capable measurement partner from one that produces impressive-looking dashboards is whether they can tell you what they do not know, not just what they can attribute. That intellectual honesty is the first thing I look for, and it is usually the first thing missing.
Key Takeaways
- Cross-platform measurement is a methodological problem before it is a technology problem. The agency that sells you software first is usually solving the wrong problem.
- No single attribution model tells the complete truth. The strongest agencies use a combination of MMM, incrementality testing, and platform data, and they are explicit about the limitations of each.
- Correlation between media spend and revenue is not proof that media caused the revenue. The best agencies know the difference and structure their measurement to test it.
- Walled gardens, cookie deprecation, and signal loss have made cross-platform measurement harder. Agencies that pretend otherwise are selling you false precision.
- The right measurement partner will challenge your existing attribution assumptions, not just plug into them and produce a cleaner version of what you already believe.
In This Article
- Why Cross-Platform Measurement Is Still Broken
- What a Capable Measurement Agency Actually Does
- Agencies With Genuine Measurement Capability
- Analytic Partners
- Nielsen
- Ekimetrics
- Gain Theory (WPP)
- Ipsos MMA
- What to Look for When Evaluating an Agency
- The Baseline Problem Nobody Talks About
- How Measurement Connects to Conversion Performance
- The Honest Limitations of Every Measurement Approach
- What to Expect from a Measurement Engagement
Why Cross-Platform Measurement Is Still Broken
I spent several years judging major marketing effectiveness awards, including the Effies. What struck me most was not the quality of the work; it was the quality of the evidence. A significant number of entries presented correlation as causation with complete confidence. Sales went up. Media spend went up. Therefore: media caused the sales. Judges who were not paying close attention would nod along. Some did not spot it at all.
That same problem is endemic in cross-platform measurement. Brands run campaigns across paid search, paid social, connected TV, display, and out-of-home. Revenue moves. Someone in the room points at the attribution dashboard and says the campaign worked. But attribution dashboards do not measure causation. They measure credit allocation, and the rules governing that allocation are almost always set by the people with the most to gain from looking good.
The walled gardens have made this worse. Meta reports Meta’s contribution. Google reports Google’s contribution. Neither has any incentive to tell you that the other platform did the heavy lifting, or that your organic search was doing most of the work before either of them got involved. Cross-platform measurement exists precisely to cut through this, but only if the methodology is honest and the agency running it is not also your media buyer with a vested interest in the outcome.
If you are working through the broader question of how measurement fits into your conversion strategy, the conversion optimization hub covers the full picture, from testing frameworks to analytics to copy performance.
What a Capable Measurement Agency Actually Does
Before listing agencies, it is worth being clear about what good looks like. Cross-platform media measurement is not a single tool or technique. It is a combination of methods, each with different strengths and different blind spots, used together to build a more honest picture of what is driving business outcomes.
Media mix modelling, or MMM, uses statistical regression to estimate the contribution of different media channels to a business outcome, typically revenue or sales volume. It works at aggregate level, does not require individual user tracking, and is therefore not affected by cookie deprecation or walled garden restrictions. Its weakness is that it is backward-looking, requires substantial historical data to be reliable, and can struggle to detect the contribution of newer or smaller channels.
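To make the regression at the heart of MMM concrete, here is a deliberately minimal sketch. Everything in it is invented for illustration: the channel names, spend figures, and coefficients are hypothetical, and a production MMM would also model saturation curves, seasonality, pricing, and other baseline drivers.

```python
import numpy as np

def adstock(spend, decay):
    # geometric carryover: each week's effective exposure includes a
    # decayed fraction of earlier weeks' spend
    out, carried = [], 0.0
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return np.array(out)

# hypothetical weekly data, invented purely to show the mechanics
rng = np.random.default_rng(0)
weeks = 52
tv = rng.uniform(0, 100, weeks)
search = rng.uniform(0, 50, weeks)
revenue = 200.0 + 3.0 * adstock(tv, decay=0.5) + 5.0 * search

# design matrix: baseline (intercept), adstocked TV, search
X = np.column_stack([np.ones(weeks), adstock(tv, decay=0.5), search])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
# on this noise-free toy data the fit recovers [200, 3, 5]; real data
# adds noise and collinearity, which is why MMM needs substantial history
```

The last comment is the point: the method only separates channel contributions reliably when there is enough variation in the historical data, which is exactly the weakness described above.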
Incrementality testing, usually through geo-based holdout experiments or matched market tests, is the closest thing measurement has to a controlled experiment. You run media in some markets and not others, and you measure the difference. Done well, this gets you closer to causation than any attribution model. Done badly, it produces results that are just as misleading, because the market pairs were not properly matched or the holdout period was too short.
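The arithmetic behind a matched-market test is simple; the hard part is the market matching. As a sketch, the figures below are invented: the control markets grew 5% over the test period, so only growth beyond that trend in the test markets is credited to the media.

```python
def incremental_lift(test_pre, test_post, ctrl_pre, ctrl_post):
    # difference-in-differences: the control markets estimate the
    # counterfactual, i.e. what the test markets would have done anyway
    expected = test_pre * (ctrl_post / ctrl_pre)  # apply the control trend
    return test_post - expected

# hypothetical matched-market sales (units); all numbers are illustrative
lift = incremental_lift(test_pre=1000, test_post=1250,
                        ctrl_pre=980, ctrl_post=1029)
print(round(lift, 2))  # 200.0 incremental units
```

If the pairs are badly matched, the control trend is the wrong counterfactual and the lift estimate inherits that error, which is the failure mode described above.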
Multi-touch attribution, or MTA, attempts to assign credit to individual touchpoints in a customer journey. It is the most granular of the three methods, and the most dependent on clean, connected data. With signal loss accelerating across browsers and devices, MTA is becoming less reliable as a standalone method, though it still has value when combined with the others.
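It is worth seeing how arbitrary the credit rules can be. The sketch below implements one common heuristic, position-based (U-shaped) attribution; the 40/20/40 split and the channel names are illustrative conventions, not measured contributions, which is precisely why MTA outputs should not be read as causal.

```python
def position_based_credit(touchpoints, first=0.4, last=0.4):
    # U-shaped rule: heavy credit to the first and last touch,
    # with the remainder split evenly across the middle touches
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle = (1.0 - first - last) / (n - 2)
    credit = {}
    for i, tp in enumerate(touchpoints):
        share = first if i == 0 else last if i == n - 1 else middle
        credit[tp] = credit.get(tp, 0.0) + share
    return credit

# hypothetical journey; channel names are illustrative
path = ["paid_search", "paid_social", "display", "email"]
print(position_based_credit(path))
```

Change the `first` and `last` weights and every channel's reported contribution changes, with no change in what actually happened. That is credit allocation, not measurement.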
The agencies worth working with use all three, triangulate between them, and are explicit about where each method is strong and where it is not. If an agency is selling you a single-platform attribution solution as a complete answer to cross-platform measurement, that is a red flag, not a feature.
Agencies With Genuine Measurement Capability
What follows is not a ranked list and it is not a comprehensive directory. It is a considered assessment of the types of agencies and specific players that have demonstrated serious, methodologically honest cross-platform measurement capability. The right choice depends on your scale, your category, and how much of your measurement you want to own internally versus outsource.
Analytic Partners
Analytic Partners is probably the most credible pure-play measurement consultancy operating at scale. Their work is grounded in MMM, and they have been doing it long enough to have built genuine expertise in cross-channel modelling across categories. What distinguishes them is their willingness to tell clients that a channel is not performing, even when that channel is paying someone else’s bills. That is not a universal trait in this industry.
They also publish research on measurement methodology that holds up to scrutiny, which is more than can be said for most vendor-produced content. If your primary need is understanding the true contribution of your media mix at a strategic level, they are a serious option.
Nielsen
Nielsen has been the measurement default for decades, and for good reason. Their cross-media measurement infrastructure is more developed than almost anyone else’s, particularly for brands running campaigns across broadcast, streaming, and digital simultaneously. The challenge with Nielsen is that their solutions are enterprise-grade in both capability and cost, and the outputs can take too long to generate to support in-flight optimisation.
Their Marketing Mix Modeling offering has been rebuilt substantially in recent years, and the integration with their audience measurement data gives them a genuine advantage for brands where television, whether linear or connected, is a meaningful part of the media plan. For mid-market brands running primarily digital, the cost-benefit calculation may not stack up.
Ekimetrics
Ekimetrics is a data science consultancy with a strong measurement practice that has grown significantly in the UK and European markets. Their MMM work is technically rigorous and they have invested in making the outputs more accessible to marketing teams who are not data scientists. The interface between technical rigour and practical usability is where a lot of measurement agencies fall down, and Ekimetrics handles it better than most.
They are also genuinely independent, which matters. When I was running agency operations, I saw repeatedly how measurement recommendations from agencies that also bought media were subtly shaped by the revenue implications of those recommendations. Ekimetrics does not have that conflict.
Gain Theory (WPP)
Gain Theory sits within WPP and brings the scale and data access that comes with that, alongside a measurement philosophy that is more intellectually honest than most agency-owned practices. Their focus on predictive modelling as well as retrospective measurement is useful for brands that want to use measurement to inform future planning rather than just explain the past.
The caveat with any agency-owned measurement practice is the potential conflict of interest when measurement outputs could influence media budget decisions that affect the parent group’s revenue. Gain Theory has a reasonable reputation for managing that tension, but it is worth being explicit about it in any commercial arrangement.
Ipsos MMA
Ipsos MMA, formerly Marketing Management Analytics before the Ipsos acquisition, has a long track record in marketing mix modelling, particularly for FMCG and retail categories. Their strength is in categories where the relationship between media spend and sales volume is well-established and where they have deep historical benchmarks. For brands in those categories, that context is genuinely valuable, not just methodologically but commercially.
What to Look for When Evaluating an Agency
The agency selection process for measurement is where most brands go wrong. They evaluate on outputs, on dashboards and visualisations and case studies, when they should be evaluating on methodology and intellectual honesty.
Ask them how they handle incrementality. If they cannot explain their approach to incrementality testing clearly, that tells you something. Ask them what happens when their model shows that a channel you are heavily invested in is not performing. Watch how they respond. Ask them how they account for signal loss in their MTA methodology and whether they are adjusting for the degradation of third-party cookie data. If they say it is not a problem, they are either not paying attention or they are telling you what you want to hear.
I have sat in enough vendor presentations to know that the most impressive-looking solutions are not always the most honest ones. A few years ago I was shown a platform that claimed to have solved cross-platform attribution through a proprietary identity graph. The demo was genuinely impressive. When I pushed on the methodology, it turned out the identity matching was based on probabilistic modelling with a confidence interval so wide it rendered the channel-level outputs essentially meaningless. The dashboard looked authoritative. The underlying data did not justify it.
The question of how measurement connects to your conversion performance is one worth examining carefully. Issues like CRO keyword cannibalization can distort your measurement signals in ways that are easy to miss if you are only looking at platform-level data, and the same applies to keyword cannibalisation more broadly when you are trying to understand organic versus paid contribution across markets.
The Baseline Problem Nobody Talks About
There is a version of cross-platform measurement success that is not really success at all. I have seen it presented at industry events and in agency pitches more times than I can count. A brand consolidates its measurement, gets cleaner data, and the numbers improve. The agency takes credit for a step-change in performance. But what actually happened is that the previous measurement was so broken that almost anything would have looked like an improvement by comparison.
This is the baseline problem. When I hear about 90% improvements in cost per acquisition following a measurement overhaul, my first question is always about the baseline. Was the previous measurement methodology genuinely measuring the right thing? Were the previous attribution models so over-crediting certain channels that the apparent CPA was artificially low? Were they comparing against a period with different market conditions?
I am reminded of a conversation I had with a major technology vendor who was presenting an AI-driven personalisation solution. The headline numbers were striking. CPA down by a substantial margin. Conversions up significantly. My response was simple: you took creative that was genuinely poor and replaced it with creative that was marginally less poor. The improvement was real, but it was not evidence of the technology’s power. It was evidence of how bad the starting point was. The same logic applies to measurement. Better measurement often reveals that previous performance was not as strong as the old numbers suggested. That is valuable information, but it is not a performance improvement.
Understanding where measurement connects to your broader conversion work matters here. If you are working with a conversion optimization consulting partner, make sure their measurement assumptions and your media measurement methodology are aligned. Disconnected measurement frameworks produce disconnected conclusions.
How Measurement Connects to Conversion Performance
Cross-platform media measurement does not exist in isolation from conversion performance. The two are connected in ways that are easy to overlook when measurement and CRO are handled by different teams or different agencies.
Consider cart abandonment. If your media measurement shows strong performance from a particular channel but your conversion data shows high abandonment rates from traffic originating there, the channel may be attracting the wrong audience, or your landing experience may not be matching the promise made in the ad. Neither of these is a media measurement problem in isolation, but both affect the conclusions you draw from media measurement data. Work on dynamic discount strategies for cart recovery will change your conversion rate, which will in turn change what your media measurement model tells you about channel efficiency.
Similarly, copy optimization on landing pages affects conversion rates in ways that can be misattributed to media performance if the measurement is not granular enough. You improve the copy, conversion rate goes up, and the media channel gets the credit in your attribution model. That is not a reason to stop improving copy. It is a reason to make sure your measurement methodology can distinguish between media contribution and on-site performance improvement.
If you are running experiments to understand these dynamics across different markets, the question of methodology matters as much as the tools. A/B testing frameworks for localization are particularly relevant here, because market-level differences in consumer behaviour can make cross-platform measurement conclusions from one geography misleading when applied to another.
The conversion funnel is not a single measurement problem. It is a series of connected measurement problems, and the agencies that understand this are the ones worth working with.
The Honest Limitations of Every Measurement Approach
No measurement methodology is complete. MMM requires historical data volume that newer brands or new channel entrants may not have. Incrementality testing requires the ability to hold out markets or audiences, which is not always commercially or operationally feasible. MTA is increasingly compromised by signal loss. Platform-reported data is structurally biased toward over-claiming credit.
The honest answer is that cross-platform measurement gives you a better approximation of truth than any single method or any single platform’s reporting. It does not give you certainty. The agencies that tell you otherwise are selling you something. The agencies that are explicit about uncertainty while still helping you make better decisions from imperfect data are the ones doing serious work.
There is a useful parallel in how common CRO misconceptions persist in the industry. The same pattern applies to measurement: confident-sounding claims that do not hold up under scrutiny, because the people making them are either not rigorous enough or are commercially incentivised to project certainty. Honest approximation, clearly communicated, is more useful than false precision.
Ecommerce measurement is a particularly complex version of this problem. The intersection of ecommerce and conversion rate optimization involves multiple touchpoints, multiple devices, and multiple sessions before a purchase, which is exactly the environment where single-platform attribution breaks down most severely.
For brands running split tests to validate measurement assumptions, landing page split testing methodology matters as much as the media measurement layer. If your test design is flawed, your measurement conclusions will be too, regardless of how sophisticated your cross-platform model is.
Measurement and conversion optimization are two sides of the same commercial problem. The conversion optimization resources on this site cover the full range of tools and approaches that sit alongside media measurement, from testing methodology to analytics to on-site performance.
What to Expect from a Measurement Engagement
A credible measurement agency will start with a data audit before they propose a solution. They will want to understand what data you have, how clean it is, how it is connected across platforms, and where the gaps are. If an agency skips this step and goes straight to proposing their platform or methodology, that is a sign they are selling a solution rather than solving your problem.
The onboarding period for MMM is typically several weeks to a few months, depending on the complexity of your media mix and the quality of your historical data. Incrementality tests need to run for long enough to be statistically meaningful, which usually means weeks rather than days. Anyone promising fast answers to cross-platform measurement questions is either working with an unusually clean data environment or compressing the timeline in ways that will compromise the outputs.
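To make "long enough" concrete, here is the standard two-proportion sample-size approximation applied to a hypothetical conversion test. The baseline rate, lift target, and traffic figures are all invented for illustration; a geo incrementality test would work from market-level variance instead, but the underlying trade-off between effect size and runtime is the same.

```python
import math

def required_n_per_group(base_rate, relative_lift, z_alpha=1.96, z_power=0.84):
    # standard two-proportion sample-size approximation
    # (z values correspond to two-sided alpha = 0.05, power = 0.80)
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# hypothetical: 3% baseline conversion, aiming to detect a 10% relative lift
n = required_n_per_group(base_rate=0.03, relative_lift=0.10)
weeks = math.ceil(n / 5000)  # assuming 5,000 visitors per group per week
print(n, weeks)  # roughly 53,000 per group, about 11 weeks at this traffic
```

Halve the detectable lift and the required sample roughly quadruples, which is why anyone promising statistically meaningful answers in days is compressing something.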
The output of a measurement engagement should be actionable recommendations, not just a model. The best agencies translate measurement findings into budget allocation decisions, channel mix changes, and creative investment priorities. The model is the means, not the end. I have seen too many measurement projects produce technically impressive outputs that sat in a deck and changed nothing about how the business allocated its media investment. That is a failure, even if the methodology was sound.
Page performance is one area where measurement findings frequently point to action. If your cross-platform model shows that certain channels are driving traffic that converts poorly, the question is whether that is a targeting problem or an on-site experience problem. Understanding how page speed affects conversion rates is relevant here, because slow load times can make media performance look worse than it is by introducing friction that has nothing to do with the quality of the media or the audience.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
