Ad Transparency: What Advertisers Are Still Getting Wrong

Ad transparency means advertisers can see where their money goes, how their ads are served, and whether reported performance reflects what actually happened. Most brands think they have this. Most do not.

The gap between what ad platforms report and what is commercially real has been a quiet problem in the industry for years. It is not fraud in the dramatic sense. It is something more mundane and more damaging: a set of structural incentives that make opacity the default, and a culture in which clients have been conditioned to accept polished dashboards as a substitute for genuine accountability.

If you are managing significant ad spend, or advising someone who is, this is worth reading carefully.

Key Takeaways

  • Ad platforms are structurally incentivised to report performance in the most favourable light possible. That is not a conspiracy; it is the business model.
  • Last-click and platform-native attribution models routinely overstate the contribution of lower-funnel channels, often crediting conversions that would have happened regardless.
  • Walled gardens limit independent verification by design. Accepting their measurement as gospel is a commercial risk, not just a technical one.
  • Transparency is not a vendor feature to be toggled on. It requires contractual clarity, independent measurement, and the organisational will to act on what you find.
  • Most advertisers are not being deceived by bad actors. They are being misled by systems they have not interrogated hard enough.

Why Ad Transparency Is a Commercial Problem, Not Just a Technical One

When I was running a performance agency and managing hundreds of millions in ad spend across multiple markets, the transparency conversation almost always started in the wrong place. Clients would ask about brand safety tools, ad verification vendors, viewability scores. All legitimate concerns. But the more material question, the one that actually affected commercial outcomes, was simpler: are we being shown an accurate picture of what this spend is doing?

The answer, more often than not, was no. Not because anyone was lying outright. But because the measurement frameworks being used were built by the platforms being measured, optimised for the metrics those platforms could most easily claim credit for, and presented in interfaces designed to inspire confidence rather than critical thinking.

This is the core transparency problem. It is not primarily about fraud detection or brand safety, though both matter. It is about whether the performance numbers on your screen bear a reliable relationship to the commercial outcomes in your business. And for most advertisers spending meaningfully in paid channels, the honest answer is: only partially.

If you are thinking about how this fits into a broader go-to-market approach, the Go-To-Market and Growth Strategy hub covers the wider commercial context in which these decisions sit. Measurement integrity is not a media-buying technicality. It shapes how growth budgets get allocated, and how confidently leadership can back marketing investment.

What Walled Gardens Actually Mean for Your Measurement

The term “walled garden” gets used a lot, but its practical implications for advertisers are often underexplored. When you run campaigns through Google, Meta, Amazon, or TikTok, you are operating inside ecosystems that control the data, the attribution logic, and the reporting interface. You see what they choose to show you, measured in the way they choose to measure it.

That is not inherently sinister. These platforms have genuine scale and genuine data. But it creates a structural conflict of interest that advertisers should not ignore. The platform that sells you the inventory is also the platform telling you how well that inventory performed. Independent verification is limited by design, because sharing the underlying data would reduce competitive advantage.

The result is that platform-reported ROAS numbers are almost always higher than what econometric modelling or incrementality testing reveals. Not by a rounding error. Often by a material margin. I have seen cases where a channel that looked like a strong performer in platform reporting turned out, under proper incrementality testing, to be contributing almost nothing that would not have happened anyway through organic or direct traffic.

This is not a new observation. The industry has known about it for years. But the pace at which brands have actually changed their measurement practices has been slow, partly because the alternative requires more work, and partly because the people responsible for the channel often have a career interest in the numbers staying high.

The Attribution Problem Is Older Than You Think

Earlier in my career, I was firmly in the lower-funnel camp. Performance metrics were clean, trackable, and easy to defend in a boardroom. I believed we were measuring what mattered. It took a few years of looking at the same clients’ sales data alongside their media data before I started questioning whether we were measuring what mattered, or just measuring what was measurable.

The shift came when I started thinking about the clothes shop analogy. Someone who walks into a shop and tries on a jacket is far more likely to buy it than someone browsing the window. But if you only measure the till transaction, you credit the cashier, not the display that stopped them in the first place. Last-click attribution does exactly this. It credits the final touchpoint, typically a branded search or a retargeting ad, and ignores everything that created the intent.

The practical consequence is that lower-funnel channels look disproportionately efficient, and upper-funnel investment looks hard to justify. Over time, budgets migrate toward the measurable and away from the valuable. Brands end up harvesting demand they never built, wondering why growth has stalled despite strong ROAS numbers.

This is one of the more expensive misunderstandings in modern marketing. The Forrester intelligent growth model identified years ago that sustainable commercial growth requires investment across the full customer experience, not just at the point of capture. The measurement frameworks most advertisers still use do not reflect this.
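The mechanics are easy to demonstrate. Here is a minimal sketch, with purely illustrative customer journeys, comparing last-click credit against a simple linear (even-weight) multi-touch model. The channel names and paths are invented for the example; real attribution systems are far more elaborate, but the distortion works the same way:

```python
from collections import Counter

def last_click(paths):
    """Credit each conversion entirely to the final touchpoint."""
    credit = Counter()
    for path in paths:
        credit[path[-1]] += 1.0
    return credit

def linear(paths):
    """Spread each conversion's credit evenly across all touchpoints."""
    credit = Counter()
    for path in paths:
        for channel in path:
            credit[channel] += 1.0 / len(path)
    return credit

# Illustrative journeys: upper-funnel video/display creates the intent,
# branded search or retargeting closes it at the till.
paths = [
    ["video", "display", "branded_search"],
    ["display", "retargeting"],
    ["video", "branded_search"],
]

print(last_click(paths))  # all credit lands on the closing touchpoints
print(linear(paths))      # upper-funnel channels get visible credit
```

Under last-click, video and display receive zero credit despite appearing in every journey; under the linear model they earn as much as the channels that closed. Neither model is causal truth, but the comparison shows how the choice of model, not the media itself, decides which channels look efficient.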

What Genuine Ad Transparency Actually Requires

Transparency in advertising is not a feature you switch on. It is a set of practices, contracts, and habits that have to be built deliberately. Here is what it actually looks like when done properly.

Contractual clarity on media buying

If you are working with an agency, the contract should specify whether media is being bought on a principal or agent basis. Principal-based buying means the agency buys inventory at wholesale and resells it to you at a marked-up rate. You may not know the margin. Agent-based buying means the agency acts on your behalf and the economics are disclosed. Both models exist. Neither is automatically wrong. But you should know which one you are in.

The ANA in the US has published extensively on this, following investigations into non-transparent practices across the agency industry. The findings were not comfortable reading for anyone on either side of the client-agency relationship. The short version: many advertisers were not getting the transparency they assumed they had, and some were not asking for it clearly enough.

Independent measurement alongside platform reporting

Platform dashboards should be treated as one input, not the definitive answer. Running incrementality tests, media mix modelling, or even simple holdout experiments alongside platform reporting gives you a second perspective. The gap between the two is informative. If your platform ROAS is consistently 4x and your econometric model suggests 1.8x, that gap is telling you something important about where credit is being misassigned.

This does not require a large analytics team. It requires the discipline to ask the question and the willingness to act on the answer, even if the answer is inconvenient.
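To make the holdout idea concrete, here is a minimal sketch of how an incrementality-adjusted ROAS might be computed from a simple audience or geo holdout. The function name and all numbers are illustrative, not drawn from any real campaign; a production test would also need significance checks and careful group matching:

```python
def incremental_roas(test_conversions, test_users,
                     holdout_conversions, holdout_users,
                     avg_order_value, spend):
    """Estimate incrementality-adjusted ROAS from a simple holdout test.

    The holdout group saw no ads, so its conversion rate is the baseline
    of what would have happened anyway. Only the lift above that baseline
    is credited to the spend.
    """
    test_rate = test_conversions / test_users
    baseline_rate = holdout_conversions / holdout_users
    lift_rate = max(test_rate - baseline_rate, 0.0)
    incremental_conversions = lift_rate * test_users
    incremental_revenue = incremental_conversions * avg_order_value
    return incremental_revenue / spend

# Illustrative: the platform claims all 500 conversions (ROAS 4x),
# but the holdout shows most would have converted anyway.
platform_roas = 500 * 80 / 10_000  # 4.0
adjusted = incremental_roas(
    test_conversions=500, test_users=100_000,
    holdout_conversions=275, holdout_users=100_000,
    avg_order_value=80, spend=10_000,
)
print(round(platform_roas, 1), round(adjusted, 1))  # 4.0 1.8
```

The arithmetic is deliberately simple: the point is that the same campaign can honestly report 4x in the platform dashboard and 1.8x under a causal reading, because the two numbers answer different questions.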

Supply chain visibility in programmatic

If you are running programmatic display or video, you should know where your ads are appearing. Not at the category level, but specifically. Ads.txt and sellers.json were introduced to make this more tractable, but adoption and enforcement remain inconsistent. Low CPMs in programmatic are often low for a reason: the inventory is low quality, the audience is not what it appears to be, or the placement is one nobody would choose if they saw it.

I have sat in reviews where a client was running display at what looked like excellent efficiency, until we pulled the placement report and found a significant portion of impressions were appearing on made-for-advertising sites with no genuine audience. The CPM was low. The value was lower.
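Checking a publisher's ads.txt file is not complicated. The sketch below parses the record format defined in the IAB Tech Lab spec (ad system domain, seller account ID, DIRECT or RESELLER relationship) and checks whether a given seller is authorized. The sample file content and function names are invented for illustration; in practice you would fetch `https://<publisher-domain>/ads.txt` and run this across your placement report:

```python
def parse_ads_txt(text):
    """Parse ads.txt data records into (ad_system, seller_id, relationship).

    Per the IAB ads.txt spec, each data line is comma-separated:
    <ad system domain>, <seller account ID>, <DIRECT|RESELLER>[, <cert ID>]
    Comment lines start with '#'; variable lines (e.g. contact=) use '='.
    """
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if not line or "=" in line:           # skip blanks and variables
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records

def is_authorized(records, ad_system, seller_id, direct_only=False):
    """Check whether a seller account is authorized for this publisher."""
    for system, sid, rel in records:
        if system == ad_system.lower() and sid == seller_id:
            return rel == "DIRECT" if direct_only else True
    return False

sample = """
# ads.txt for an illustrative publisher
google.com, pub-1234567890, DIRECT, f08c47fec0942fa0
exampleexchange.com, 98765, RESELLER
contact=adops@examplepublisher.example
"""
records = parse_ads_txt(sample)
print(is_authorized(records, "google.com", "pub-1234567890", direct_only=True))  # True
print(is_authorized(records, "exampleexchange.com", "98765", direct_only=True))  # False
```

An impression bought from a seller that does not appear in the publisher's file at all is the red flag worth escalating: either the inventory is unauthorized or the publisher's file is stale, and both are worth knowing.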

Honest conversation about viewability and attention

The industry standard for a “viewable” display impression is that at least 50% of the ad is on screen for at least one second. One second. For video, it is two seconds for at least 50% of the player. These thresholds were set as a floor, not a standard of quality. An ad that meets the MRC viewability definition has not necessarily been seen in any meaningful sense.
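The thresholds just described fit in a few lines of code, which is itself a useful illustration of how low the bar is. A minimal sketch (the function name and inputs are illustrative; real measurement vendors work from event-level pixel and timing data):

```python
def is_mrc_viewable(pct_in_view, continuous_seconds, is_video=False):
    """Apply the MRC viewability floor described above.

    Display: at least 50% of pixels in view for at least 1 continuous second.
    Video:   at least 50% of the player in view for at least 2 continuous seconds.
    Meeting this floor says nothing about whether anyone noticed the ad.
    """
    required_seconds = 2.0 if is_video else 1.0
    return pct_in_view >= 0.5 and continuous_seconds >= required_seconds

print(is_mrc_viewable(0.6, 1.2))                 # True: counts as viewable
print(is_mrc_viewable(0.6, 1.2, is_video=True))  # False: video needs 2 seconds
```

A half-visible banner glimpsed for just over a second clears the bar. That is the standard against which most display "quality" is still reported.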

The attention measurement space is developing, with vendors now offering metrics that go beyond viewability toward actual engagement signals. It is early, and the methodologies vary, but the direction is right. Viewability as the primary quality metric for display is a low bar that the industry has been comfortable with for too long.

Where Brands Go Wrong When Demanding Transparency

There is a version of the transparency conversation that goes badly wrong, and I have seen it happen more than once. A brand decides it wants full transparency, brings in an audit, finds some uncomfortable numbers, and then uses those numbers to cut agency fees or renegotiate contracts without addressing the underlying measurement problems. The result is a cheaper arrangement that is equally opaque.

Transparency is not a cost-cutting exercise. It is a precondition for making better decisions. If the goal is to find ammunition for a procurement conversation, you will probably find it, but you will not end up with better marketing. You will end up with a more defensively structured agency relationship and the same measurement blind spots.

The brands that benefit most from transparency initiatives are the ones that go in with genuine curiosity. They want to know what is actually working. They are prepared to find that some things they believed in are not performing as well as reported. And they are willing to reallocate based on what they find, even when that means reducing spend in channels that looked good on paper.

That takes organisational courage. In large businesses especially, ad spend decisions are often politically loaded. A channel that has been running for three years with strong platform metrics has advocates internally. Questioning it feels like questioning those people. But the alternative is continuing to allocate budget based on numbers that do not reflect commercial reality.

The Platform Side of the Equation

It is worth being fair to the platforms here. Some of the transparency improvements over the last few years have been genuine. Google’s move toward more open auction mechanics, Meta’s investment in measurement tools like the Conversions API, and the broader industry push toward privacy-preserving measurement are real developments. They are also, in part, responses to regulatory pressure and advertiser demands, which is how markets are supposed to work.

But structural incentives do not change quickly. A platform whose revenue depends on advertisers believing their spend is working has a fundamental interest in reporting that confirms that belief. The tools they offer for measurement, however sophisticated, are built within that constraint. That does not mean the tools are useless. It means they should be used with appropriate scepticism and supplemented with independent verification.

The growth in retail media networks is worth watching in this context. As more ad spend moves into closed ecosystems with proprietary measurement, the transparency challenge intensifies. The BCG commercial transformation framework has long argued that sustainable growth requires rigorous go-to-market discipline, and that discipline has to include honest measurement. Retail media’s rapid growth is creating new versions of the same old opacity problem.

What Good Looks Like in Practice

When I was building out the performance practice at iProspect, one of the things we worked hard on was being the agency that told clients uncomfortable truths rather than the one that protected the numbers. That is easier said than done when the numbers are tied to your retainer. But it built better long-term relationships and, more importantly, it led to better outcomes because decisions were being made on more accurate information.

Good transparency practice looks like this in concrete terms:

  • You know exactly what you are paying for media and what margin your agency is taking, in whatever model you have agreed.
  • You have at least one measurement methodology that is independent of the platforms you are buying through.
  • You run regular incrementality tests on your largest spend channels, even when the results are inconvenient.
  • Your placement reports are reviewed, not just filed. Someone looks at where the ads actually appeared.
  • Your reporting cadence includes a conversation about what the numbers might be missing, not just what they show.

None of this is technically complex. All of it requires discipline and the willingness to ask questions that do not always have comfortable answers.

The Vidyard revenue pipeline research points to a broader issue that connects here: GTM teams consistently underestimate how much pipeline value is being missed because measurement frameworks are not capturing the full picture of buyer engagement. The same logic applies to paid media. What you cannot see, you cannot optimise.

The Honest Approximation Principle

One thing I push back on when this topic comes up is the idea that perfect measurement is the goal. It is not achievable, and chasing it leads to paralysis. What is achievable is honest approximation: a measurement framework that is directionally accurate, internally consistent, and transparent about its limitations.

The worst outcome is not imperfect measurement. It is false precision. A dashboard that shows ROAS to two decimal places, click-through rates by hour of day, and attribution breakdowns by channel, all of it reported with a confidence that the underlying methodology does not support. That kind of reporting creates the illusion of control while obscuring the actual commercial picture.

When I judged the Effie Awards, one of the things that distinguished genuinely effective campaigns from the ones that just looked good in a case study was the quality of the measurement thinking. The best entries were honest about what they could and could not prove. They triangulated across multiple data sources. They acknowledged the limitations of their methodology. That intellectual honesty was itself a signal of commercial rigour.

The brands that are winning on measurement are not the ones with the most sophisticated attribution models. They are the ones with the clearest thinking about what their numbers actually mean, and the discipline to make decisions based on honest approximation rather than flattering precision.

Tools that support growth hacking and rapid experimentation, like those covered in the Semrush growth hacking toolkit overview, are only as useful as the measurement framework sitting underneath them. Running experiments without reliable measurement is just spending money faster.

Transparency as a Competitive Advantage

There is a version of this argument that frames transparency purely as risk management: know where your money goes so you do not get taken advantage of. That framing is too defensive. Transparency, done properly, is a source of competitive advantage.

When you have a more accurate picture of what is driving commercial outcomes than your competitors do, you allocate better. You invest in channels and approaches that are genuinely working rather than ones that look good in platform reporting. You catch underperformance earlier. You build a more honest relationship between marketing spend and business results, which makes it easier to justify investment and harder for budget to be cut when conditions tighten.

The BCG long-tail pricing and go-to-market analysis makes a related point about commercial discipline: the businesses that understand their economics most clearly are the ones best positioned to make confident investment decisions. The same principle applies to media. Clarity about what is working is not just a hygiene matter. It is a strategic asset.

Brands that invest in measurement integrity tend to allocate more confidently toward brand-building, because they have the tools to demonstrate its contribution over time rather than relying on the faith-based argument that it matters. That shift, from defensive spend to confident investment, is one of the more meaningful outcomes of getting transparency right.

If you are working through how measurement integrity connects to broader commercial planning, the Go-To-Market and Growth Strategy hub covers the strategic frameworks that sit around these decisions, from budget allocation to channel mix to how growth targets should actually be set.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What does ad transparency mean for advertisers?
Ad transparency means having clear, verifiable visibility into where your ad spend goes, how your ads are served, and whether reported performance metrics reflect real commercial outcomes. It covers media buying economics, placement quality, attribution methodology, and the independence of measurement. Most advertisers have partial transparency at best, because platform-reported data is not independently verified and agency contracts are not always explicit about how media is bought and marked up.
Why do platform-reported ROAS numbers often differ from econometric models?
Platforms measure performance using attribution models that assign credit to the touchpoints they can track, typically within their own ecosystem. This leads to over-crediting, particularly for lower-funnel activity like retargeting and branded search, where the conversion would often have happened regardless of the ad. Econometric modelling and incrementality testing measure the causal contribution of spend, which is a stricter and more commercially relevant standard. The gap between the two is usually material and worth understanding before making budget decisions.
What is the difference between principal and agent media buying?
In agent-based buying, an agency acts on behalf of the advertiser, purchasing media at the market rate and disclosing all costs. In principal-based buying, the agency purchases inventory at wholesale and resells it to the advertiser at a higher rate, keeping the margin. The principal model is not inherently wrong, but advertisers should know which model they are operating under. Many do not, because contracts are not always explicit, and agencies do not always volunteer the information unprompted.
How can advertisers verify where their programmatic ads are actually appearing?
Advertisers should request regular placement reports from their agency or DSP and review them, not just file them. Ads.txt and sellers.json provide some supply chain visibility, but enforcement is inconsistent. Third-party ad verification vendors can provide independent placement auditing. The practical step is to make placement transparency a contractual requirement and to treat suspiciously low CPMs as a signal worth investigating rather than a buying efficiency.
Is perfect measurement in advertising achievable?
No, and pursuing it as a goal tends to lead to either paralysis or false precision. The realistic target is honest approximation: a measurement framework that is directionally accurate, internally consistent, and transparent about what it cannot capture. The risk is not imperfect measurement but measurement that looks precise while obscuring the real commercial picture. Triangulating across multiple methodologies, including platform data, econometric modelling, and qualitative insight, produces better decisions than relying on any single source with high confidence.
