Self-Serving Attribution Is Costing You Better Decisions
Self-serving attribution happens when marketing channels report their own contribution to a conversion, without any independent mechanism to verify or challenge that claim. Every channel takes credit. The numbers add up to well over 100%. And the business makes investment decisions based on data that was never designed to be honest.
It is one of the most commercially damaging problems in modern marketing, and it is almost entirely invisible to the people it harms most.
Key Takeaways
- Every major ad platform has a structural incentive to overclaim credit for conversions. That incentive is baked into their default attribution settings.
- When channels report their own results, total attributed conversions routinely exceed actual conversions. The math doesn’t lie, but the dashboards do.
- Last-click attribution doesn’t solve the problem. It just shifts the overclaiming from one channel to another.
- The antidote is independent measurement: incrementality testing, media mix modelling, and controlled holdout experiments that no single platform controls.
- Most marketing teams are optimising toward metrics that flatter the status quo rather than metrics that reveal what is actually driving growth.
In This Article
- Why Attribution Became Self-Serving in the First Place
- What Does Over-Attribution Actually Look Like?
- The Last-Click Problem Is Not the Real Problem
- How Incrementality Testing Changes the Frame
- Media Mix Modelling as a Cross-Channel Check
- The Organisational Conditions That Keep Self-Serving Attribution in Place
- What Honest Attribution Practice Actually Looks Like
- The Commercial Cost of Getting This Wrong
Why Attribution Became Self-Serving in the First Place
When I was running agency teams and managing significant ad budgets across multiple channels, one of the first things I noticed was how differently each platform told the story of its own performance. Google said it drove the sale. Facebook said it drove the sale. The email platform said it drove the sale. Sometimes the affiliate network said it too. The client had made one sale. Four channels claimed it.
This is not a technical glitch. It is the natural consequence of letting channels grade their own homework. Every platform is commercially motivated to show the best possible return on ad spend. Their default attribution windows, their default lookback periods, their default conversion counting methods, all of them are configured to maximise the credit that channel appears to deserve. That is not a conspiracy. It is just incentive design working exactly as you would expect.
The problem compounds when you layer multiple platforms together. A customer sees a display ad on Monday. They click a paid search ad on Thursday. They open a retargeting email on Friday and convert. Depending on which platform you ask, each one of those touchpoints may claim full credit for that conversion. The display platform uses a 30-day view-through window. The search platform uses last-click. The email platform counts any conversion within 5 days of an open. Nobody is lying, exactly. But nobody is telling the truth either.
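To make that concrete, here is a minimal sketch in Python of how three overlapping window rules can each claim the same sale. The dates and window rules mirror the journey above and are illustrative only, not any platform's actual logic:

```python
# One customer journey, three platforms, three credit claims.
# Dates and window rules are illustrative, not real platform behaviour.
from datetime import date

journey = [
    ("display_view", date(2024, 3, 4)),   # Monday
    ("search_click", date(2024, 3, 7)),   # Thursday
    ("email_open",   date(2024, 3, 8)),   # Friday
]
conversion_date = date(2024, 3, 8)

claims = []
# Display: 30-day view-through window.
if (conversion_date - journey[0][1]).days <= 30:
    claims.append("display")
# Search: last click before conversion (the only click here).
claims.append("search")
# Email: any conversion within 5 days of an open.
if (conversion_date - journey[2][1]).days <= 5:
    claims.append("email")

print(claims)  # ['display', 'search', 'email'] -> one sale, three "conversions"
```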
Attribution modelling became a growth discipline partly as a response to this problem. You can read more about how attribution fits into broader go-to-market and growth strategy thinking across the full hub.
What Does Over-Attribution Actually Look Like?
Over-attribution is easiest to spot when you pull total attributed conversions from each channel and add them up. If the sum is materially higher than your actual conversion count, you have a problem. In most multi-channel businesses, this gap is not small. It is routine to see attributed conversions running at two or three times actual conversions when you aggregate across platforms using their default settings.
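The check itself is trivial: sum the platform-reported figures and divide by your own conversion count. A minimal sketch, with invented channel names and numbers:

```python
# Over-attribution check. Channel names and figures are invented
# for illustration; substitute your own platform exports and CRM data.
attributed = {
    "paid_search": 1_450,
    "paid_social": 1_200,
    "display_retargeting": 980,
    "email": 640,
}
actual_conversions = 1_800  # from your own order or CRM records

total_attributed = sum(attributed.values())
ratio = total_attributed / actual_conversions

print(f"Total attributed: {total_attributed}")
print(f"Actual conversions: {actual_conversions}")
print(f"Over-attribution ratio: {ratio:.1f}x")
# A ratio well above 1.0x means channels are collectively claiming
# more conversions than the business actually made.
```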
The practical consequence is that you are almost certainly overpaying for channels that are claiming credit for demand they did not generate. Retargeting is the most common culprit. A customer has already decided to buy. They visit your site. A retargeting ad finds them. They convert. The retargeting platform claims credit for a conversion that was going to happen anyway. You pay for it. You scale it. You report back to the board that retargeting is delivering exceptional ROAS. None of that is true in any meaningful commercial sense.
Brand search has the same dynamic. Someone sees your TV ad, remembers your brand, searches for it by name, and clicks your paid brand search ad. The search platform reports a conversion. You pay a click cost for a customer who was already coming to you. Over time, if you scale brand search spend on the back of that attributed performance, you are spending money to capture demand you already created through other means.
I spent time judging the Effie Awards and what struck me most was not the campaigns that won. It was the measurement frameworks behind the ones that did not. The teams who understood the difference between attributed performance and incremental performance were operating at a different level entirely. Most were not.
The Last-Click Problem Is Not the Real Problem
The industry spent years debating last-click attribution as if it were the root cause of measurement failure. Last-click is a blunt instrument, no question. It ignores everything that happened before the final touchpoint and over-rewards channels that sit at the bottom of the funnel. But replacing last-click with multi-touch attribution does not solve self-serving attribution. It just makes the overclaiming more sophisticated.
Data-driven attribution models, the kind Google Analytics and Google Ads now default to, are trained on conversion path data from within the platform’s own ecosystem. They redistribute credit across touchpoints, but only across touchpoints the platform can see. They cannot account for offline interactions, for word-of-mouth, for the TV campaign running in the background, for the sales conversation that closed the deal. The model looks rigorous. The output is still self-serving.
This is worth sitting with for a moment. The shift from last-click to algorithmic attribution felt like progress. For many teams, it made the problem worse, because it made the overclaiming harder to challenge. You could argue with last-click. The logic was simple enough that a sceptical CFO could poke holes in it. Data-driven attribution comes wrapped in machine learning. Questioning it feels like questioning the algorithm.
Crazy Egg has published material on growth hacking frameworks that treat attribution as a core input to scaling decisions. The point stands regardless of which attribution model you use: if the input data is self-reported by the channel being measured, the output is compromised.
How Incrementality Testing Changes the Frame
Incrementality testing is the closest thing marketing has to a controlled experiment. You take a group of users who would have been exposed to your advertising and split them. One group sees the ads. The other group, the holdout, does not. You measure the difference in conversion rate between the two groups. That difference is the incremental lift your advertising actually drove. Everything else is noise.
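The arithmetic behind the test is simple. Here is a minimal sketch with invented numbers, assuming the split was properly randomised and the sample was sized for statistical significance:

```python
# Incrementality calculation. All numbers are invented for illustration.
exposed_users = 100_000
exposed_conversions = 2_400

holdout_users = 100_000
holdout_conversions = 2_100

exposed_rate = exposed_conversions / exposed_users   # 2.4%
holdout_rate = holdout_conversions / holdout_users   # 2.1%

incremental_rate = exposed_rate - holdout_rate       # 0.3 points of lift
incremental_conversions = incremental_rate * exposed_users

# The ads "drove" 2,400 conversions on paper, but only ~300 of them
# would not have happened anyway.
incrementality = incremental_conversions / exposed_conversions
print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Incrementality: {incrementality:.0%}")
```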
The results, when teams run these tests honestly, are often uncomfortable. Retargeting campaigns that appeared to deliver 8x ROAS frequently show incrementality of 10 to 30 percent of that figure. Brand search campaigns that looked efficient often show close to zero incremental lift, because the customers would have converted regardless. The attributed numbers looked good. The incremental numbers told a different story.
I have been in rooms where incrementality test results were presented to senior stakeholders and the immediate reaction was to question the methodology rather than accept the finding. That is a very human response. Nobody wants to hear that a channel they have been scaling for two years is capturing demand rather than creating it. But that discomfort is exactly the point. Good measurement is supposed to be uncomfortable sometimes. If your measurement always confirms what you already believed, it is probably not measuring very much.
Semrush has documented growth hacking examples where teams scaled channels aggressively on the basis of attributed performance, only to find that overall revenue did not move proportionally. Incrementality testing is often what eventually reveals why.
Media Mix Modelling as a Cross-Channel Check
Media mix modelling (MMM) takes a different approach. Rather than tracking individual user journeys, it looks at aggregate spend and aggregate outcomes over time and uses statistical regression to estimate the contribution of each channel to overall business results. Because it operates at the macro level and uses your actual revenue or sales data as the dependent variable, it is not subject to the same platform-level overclaiming that corrupts channel-level attribution.
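For illustration only, here is a deliberately simplified sketch of the core regression idea. Real MMM adds adstock (carryover) and saturation transforms, seasonality, and a baseline term, and needs far more history than the invented eight weeks used here:

```python
# Simplified MMM sketch: regress weekly revenue on weekly spend per channel.
# All data is invented purely to show the mechanics.
import numpy as np

# 8 weeks of spend (columns: paid_search, paid_social, display) and revenue.
spend = np.array([
    [10_000, 8_000, 5_000],
    [12_000, 7_500, 5_000],
    [ 9_000, 9_000, 6_000],
    [11_000, 8_500, 4_500],
    [13_000, 7_000, 5_500],
    [10_500, 9_500, 6_000],
    [12_500, 8_000, 5_000],
    [11_500, 8_500, 5_500],
])
revenue = np.array([62_000, 66_000, 60_000, 64_000,
                    68_000, 63_000, 67_000, 65_000])

# Intercept column captures baseline (non-media) revenue.
X = np.column_stack([np.ones(len(revenue)), spend])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)

per_channel = coefs[1:]
contributions = spend.mean(axis=0) * per_channel
for name, c in zip(["paid_search", "paid_social", "display"], contributions):
    print(f"{name}: ~{c / revenue.mean():.0%} of average weekly revenue")
```

Note that because the dependent variable is your actual revenue, no channel gets to grade its own homework.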
MMM has limitations. It requires significant historical data to produce reliable outputs. It does not capture short-term fluctuations well. And it cannot tell you what is happening at the individual customer level. But as a cross-channel sanity check, it is one of the most honest tools available. When your MMM output says paid social is contributing 8 percent of revenue and your Facebook Ads Manager says it is contributing 40 percent, you have a conversation worth having.
The organisations that use MMM well tend to treat it as a strategic input rather than an operational one. They run it quarterly or annually, use it to set budget allocation at the channel mix level, and then use platform-level data for tactical optimisation within those channels. That combination, strategic allocation from independent modelling and tactical optimisation from platform data, is stronger than relying on either alone.
BCG’s work on go-to-market strategy consistently emphasises the importance of independent measurement frameworks in high-stakes investment decisions. The same principle applies to media investment, even if the stakes feel smaller on a day-to-day basis.
The Organisational Conditions That Keep Self-Serving Attribution in Place
Self-serving attribution persists not just because of platform incentives, but because of organisational incentives. Channel managers are measured on channel performance. If the Google Ads manager is accountable for Google Ads ROAS, they have no structural reason to question whether Google Ads is claiming credit it does not deserve. If the Facebook team is measured on Facebook’s reported conversions, they will optimise for Facebook’s reported conversions. The measurement system shapes the behaviour.
When I grew an agency from around 20 people to over 100, one of the harder structural problems was exactly this. Teams organised by channel naturally optimise for channel metrics. You end up with internal competition between disciplines, each one defending its attributed numbers, rather than a shared focus on what is actually driving business outcomes for the client. The solution was not to eliminate channel expertise. It was to create a layer of measurement accountability that sat above the channels and was not owned by any of them.
That is harder than it sounds. It requires someone with enough authority to challenge channel leads when their numbers look too good. It requires a measurement framework that everyone agrees on before the campaigns run, not after the results come in. And it requires a culture where bringing bad news about a channel’s real performance is treated as valuable rather than threatening.
BCG’s research on scaling agile organisations points to cross-functional accountability as a prerequisite for effective decision-making at scale. The measurement problem in marketing is partly a structural problem, and structural problems require structural solutions.
If you are thinking about how attribution fits into a broader commercial measurement framework, the go-to-market and growth strategy hub covers the wider context around how measurement, channel strategy, and business outcomes connect.
What Honest Attribution Practice Actually Looks Like
Honest attribution does not mean perfect attribution. Perfect attribution is not achievable in a world where customers interact with brands across dozens of touchpoints, many of them unmeasurable. What honest attribution means is building a measurement framework that is designed to approximate truth rather than confirm existing beliefs.
In practice, that means a few things. First, never use a single attribution model as your sole source of truth. Triangulate. Compare your platform-reported numbers against your actual revenue. Run incrementality tests on your highest-spend channels at least once a year. If you have the data volume, run MMM as a strategic overlay. Use each tool for what it is good at and be explicit about where each one falls short.
Second, set your measurement rules before the campaign runs, not after. If you decide post-campaign that you will use a 7-day click window instead of a 30-day window because the 30-day numbers look worse, you are not doing measurement. You are doing post-rationalisation. The window, the conversion event, the holdout methodology, all of it should be agreed in advance.
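One lightweight way to enforce this is to write the rules down as a pre-registered spec before launch and treat it as immutable once the campaign is live. A sketch, with illustrative field names rather than any particular tool's schema:

```python
# Pre-registered measurement spec, agreed and locked before launch.
# Field names are illustrative, not from any particular platform.
measurement_plan = {
    "conversion_event": "completed_purchase",
    "click_window_days": 7,
    "view_window_days": 1,
    "holdout_share": 0.10,        # 10% of the audience sees no ads
    "primary_metric": "incremental_conversions",
    "locked_on": "2024-03-01",    # agreed before the campaign runs
}
```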
Third, be honest with your stakeholders about what you do not know. One of the habits I developed over years of client-facing work was being explicit about measurement confidence levels. “We know this number is directionally right but probably overstated by 20 to 40 percent because of cross-channel overlap” is a more useful thing to say than presenting a clean ROAS number that everyone treats as fact. Stakeholders who understand the limitations of the data make better decisions than stakeholders who believe the data is more reliable than it is.
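Translating that habit into numbers is straightforward. A minimal sketch of reporting a range instead of a point estimate, using the 20 to 40 percent overstatement from the example above:

```python
# Report a confidence band rather than a point ROAS, assuming
# cross-channel overlap overstates the attributed figure by 20-40%.
attributed_roas = 4.0  # invented figure for illustration
low = attributed_roas * (1 - 0.40)
high = attributed_roas * (1 - 0.20)
print(f"Likely true ROAS: {low:.1f}x to {high:.1f}x "
      f"(attributed: {attributed_roas:.1f}x)")
```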
Hotjar’s work on growth loops highlights how feedback quality determines the quality of the decisions that follow. Attribution data is a feedback mechanism. If the feedback is systematically biased, the decisions built on it will be too.
The Commercial Cost of Getting This Wrong
The cost of self-serving attribution is not abstract. It shows up in budget allocation decisions that favour channels which are good at claiming credit over channels that are good at driving growth. It shows up in campaigns scaled beyond their true incremental value, burning budget on conversions that would have happened anyway. And it shows up in the gradual erosion of marketing’s credibility with the CFO, who eventually notices that marketing spend keeps going up while business results stay flat.
I have seen this play out in client businesses where paid retargeting had been scaled to represent 30 percent or more of total digital spend on the basis of attributed ROAS numbers that looked exceptional. When incrementality testing was introduced, the actual incremental contribution was a fraction of what the platform reported. Cutting retargeting spend back to a more appropriate level freed up significant budget for upper-funnel activity that was actually building demand. Revenue grew. The attributed ROAS from the remaining retargeting spend looked worse. That is the paradox: doing the right thing for the business can make your channel metrics look worse in the short term.
That is why the measurement conversation is, in the end, a commercial conversation, not a technical one. The question is not which attribution model is theoretically most accurate. The question is whether your measurement framework is helping you make better investment decisions or worse ones. Self-serving attribution, left unchallenged, reliably produces worse ones.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
