Marketing Attribution Models: Where the Numbers Lie
Marketing attribution models are broken in ways that most teams refuse to acknowledge. They assign credit to channels, campaigns, and touchpoints based on rules that were designed for a simpler, more trackable world, and the result is a distorted picture of what is actually driving revenue. In 2025, with third-party cookies blocked by default in Safari and Firefox and increasingly unreliable everywhere else, consent requirements tightening, and customer journeys spanning more touchpoints than ever, the gap between what attribution models report and what is actually happening has never been wider.
Key Takeaways
- Most attribution models assign credit based on convenience, not causality. What a model can measure is not the same as what drove a conversion.
- Last-click and first-click models are still in widespread use despite being structurally incapable of reflecting how modern buyers actually behave.
- Data-driven attribution sounds more rigorous than it is. It still operates within the boundaries of what your tracking can see, and those boundaries are shrinking.
- Cross-device journeys, dark social, and consent-related data gaps mean a significant portion of the customer journey is invisible to most attribution systems.
- The goal should not be a perfect attribution model. It should be a measurement framework honest enough to make better decisions with incomplete information.
In This Article
- Why Attribution Has Always Been a Compromise
- The Last-Click Problem Has Not Gone Away
- Data-Driven Attribution Is Not as Rigorous as It Sounds
- The Data Environment Has Degraded Faster Than the Models Have Adapted
- Dark Social and the Channels Attribution Cannot See
- Multi-Touch Attribution Solves One Problem and Creates Another
- How View-Through Attribution Distorts Channel Performance
- The Organisational Problem Nobody Talks About
- What Good Measurement Actually Looks Like in 2025
- The Practical Steps Worth Taking Now
Why Attribution Has Always Been a Compromise
When I was running an agency and managing substantial media budgets across multiple clients, attribution was the question that came up in almost every quarterly review. Clients wanted to know which channels were working. Fair enough. The problem was that the answer we gave them was always shaped by what we could measure, not by what was actually true.
That is not a criticism of the people involved. It is a structural problem with attribution itself. Every model makes assumptions. Last-click assumes the final touchpoint did all the work. First-click assumes the first interaction was the one that mattered. Linear models assume every touchpoint contributed equally. Time-decay models assume recency equals importance. None of these assumptions are grounded in how real buyers actually behave. They are simplifications designed to make the data manageable.
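To make those assumptions concrete, here is a minimal sketch of how the four rule-based models would split credit for the same hypothetical journey. The channel names and the halving weight in the time-decay model are illustrative choices, not anything a specific platform prescribes:

```python
def allocate_credit(journey, model):
    """Return {channel: credit share} for one ordered list of touchpoints."""
    n = len(journey)
    credit = {channel: 0.0 for channel in journey}
    if model == "last_click":
        credit[journey[-1]] += 1.0          # final touch gets everything
    elif model == "first_click":
        credit[journey[0]] += 1.0           # first touch gets everything
    elif model == "linear":
        for channel in journey:
            credit[channel] += 1.0 / n      # every touch weighted equally
    elif model == "time_decay":
        # weight grows with recency; a simple halving per step back in time
        weights = [0.5 ** (n - 1 - i) for i in range(n)]
        total = sum(weights)
        for channel, weight in zip(journey, weights):
            credit[channel] += weight / total
    return credit

journey = ["display", "organic_search", "email", "paid_search"]
for model in ("last_click", "first_click", "linear", "time_decay"):
    print(model, allocate_credit(journey, model))
```

Run it and you get four different answers from the same journey, which is the point: the model, not the buyer, decides who gets the credit.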
The honest version of attribution has always been: we are giving credit to what we can see, weighted by a set of rules we chose, and calling it insight. That was true ten years ago. It is even more true now, because the data environment has deteriorated significantly while the models have not fundamentally changed.
If you want broader context on how measurement frameworks are evolving, the Marketing Analytics hub at The Marketing Juice covers attribution, GA4, and the practical tools that sit behind modern performance measurement.
The Last-Click Problem Has Not Gone Away
Last-click attribution should have been retired years ago. It gives 100% of the conversion credit to the final touchpoint before a purchase, which means paid search and direct traffic consistently look like heroes while everything that built awareness, consideration, and intent gets nothing.
I have sat in enough post-campaign reviews to know what happens when last-click is the default. The team running brand campaigns cannot prove their value. The team running upper-funnel display gets their budget cut. Paid search gets more money because the numbers look good, and six months later the brand is wondering why their cost per acquisition is climbing even though they kept increasing search spend. What they are actually seeing is the compounding effect of starving the channels that were feeding search intent in the first place.
Google officially deprecated last-click as the default in GA4 in favour of data-driven attribution. But last-click is still available, still used by many teams, and still the implicit logic behind how many platforms report their own performance. Meta reports on conversions attributed to Meta. Google reports on conversions attributed to Google. Both are right, by their own logic, and both are overclaiming.
Data-Driven Attribution Is Not as Rigorous as It Sounds
Data-driven attribution (DDA) has become the default in GA4, and the pitch is compelling: instead of applying arbitrary rules, the model uses machine learning to assign credit based on which touchpoints actually influenced conversions. It sounds like a significant improvement. In some ways it is. In other ways, the limitations are just better hidden.
DDA still only works with data it can see. If a customer discovered your brand through a podcast, saw a Reddit post, had a conversation with a colleague, and then clicked a retargeting ad before converting, the DDA model sees the retargeting ad click and whatever else was tracked in your analytics setup. The podcast, the Reddit thread, and the word-of-mouth recommendation are invisible. The model assigns credit to what it has access to, and presents that as a data-driven conclusion.
Forrester has made this point clearly: black-box attribution models obscure their assumptions in ways that make them harder to interrogate, not easier. When a model tells you that email contributed 23.4% of conversion credit, the precision of that number can create false confidence. You are not seeing reality more clearly. You are seeing a modelled approximation presented with the aesthetics of certainty.
DDA also requires volume to function properly. Google recommends at least 400 conversions within a 30-day lookback window for the model to be statistically meaningful. Many businesses do not hit that threshold, which means they are running a machine learning model on insufficient data and trusting the output anyway.
The Data Environment Has Degraded Faster Than the Models Have Adapted
Attribution models were designed for a world where you could track users reliably across sessions and devices. That world is largely gone. The combination of third-party cookie restrictions, iOS privacy changes, GDPR and CCPA consent requirements, and the rise of ad blockers means that a meaningful portion of user behaviour is simply not being captured.
The scale of this varies by audience. If your customers skew younger, use Safari, and are privacy-conscious, your data gaps are larger than if your audience is using Chrome on desktop and accepting cookies without reading the banner. But across most B2C and B2B audiences, observable data is a fraction of actual behaviour.
Cross-device journeys compound this further. A customer might research a product on their phone, compare options on a tablet, and convert on a desktop. Without a logged-in identity graph, these look like three separate users in your analytics. The attribution model sees a direct conversion from a desktop session and assigns credit accordingly. The mobile research session that started the whole process gets nothing.
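A toy illustration of that fragmentation, using made-up session records:

```python
# Hypothetical sessions: one person researching on three devices, no shared login.
sessions = [
    {"device_id": "phone-a1",   "channel": "organic_search", "converted": False},
    {"device_id": "tablet-b2",  "channel": "email",          "converted": False},
    {"device_id": "desktop-c3", "channel": "direct",         "converted": True},
]

# Device-scoped analytics counts three distinct "users"...
print(len({s["device_id"] for s in sessions}))               # -> 3

# ...and the conversion credits "direct", because the desktop session has
# no visible link to the research sessions on the phone and tablet.
print([s["channel"] for s in sessions if s["converted"]])    # -> ['direct']
```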
Forrester’s framework for improving marketing measurement points to this fragmentation as one of the core challenges: the question is not just which model to use, but whether the underlying data is reliable enough to model at all.
Dark Social and the Channels Attribution Cannot See
Dark social is the traffic that arrives with no referral data, typically because it came from a private channel. Someone shares a link in a WhatsApp group, a Slack workspace, or a private Discord server. The recipient clicks it. Your analytics records it as direct traffic. Your attribution model has no idea where it came from.
This is not a niche problem. For brands with any kind of community, word-of-mouth, or content marketing presence, dark social can represent a substantial share of engaged traffic. The people arriving via dark social are often high-intent, because someone they trust recommended the content or the product. But because they show up as direct traffic, they get attributed to whatever channel they converted through, or they inflate the direct traffic numbers in ways that make direct look more powerful than it is.
I have seen this play out in client accounts repeatedly. Direct traffic spikes that the team cannot explain, which get written off as branded search or returning visitors, are often dark social in disguise. The attribution model has no mechanism to surface this. It just absorbs the conversion and assigns credit somewhere plausible.
This is one reason why keeping your analytics framework simple and honest matters more than adding layers of complexity. More sophisticated models do not fix data gaps. They just model around them more confidently.
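If you want a rough sense of how much of your direct traffic might actually be dark social, one crude triage heuristic is to look at landing pages: almost nobody types a deep content URL into the address bar. A minimal sketch, assuming a hypothetical export of direct-traffic landing URLs:

```python
from urllib.parse import urlparse

# Hypothetical landing pages for sessions your analytics labelled "direct".
direct_landings = [
    "https://example.com/",
    "https://example.com/blog/attribution-models-deep-dive",
    "https://example.com/pricing",
    "https://example.com/blog/ga4-custom-events-guide",
]

def looks_like_dark_social(url, depth_threshold=2):
    """Crude heuristic: deep content URLs are rarely typed in directly."""
    path = urlparse(url).path.strip("/")
    depth = len(path.split("/")) if path else 0
    return depth >= depth_threshold

flagged = [u for u in direct_landings if looks_like_dark_social(u)]
print(f"{len(flagged)}/{len(direct_landings)} direct sessions look like dark social")
```

It is a blunt instrument, but it gives you a lower bound to take into conversations about where engaged traffic is really coming from.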
Multi-Touch Attribution Solves One Problem and Creates Another
Multi-touch attribution (MTA) was supposed to fix the shortcomings of single-touch models by distributing credit across the entire customer journey. And it does address some of the obvious problems with first-click and last-click. But it introduces its own distortions.
The first issue is the observable journey problem, which I mentioned earlier. MTA models can only distribute credit across touchpoints they can see. If half the journey is invisible due to privacy changes, cross-device gaps, or dark social, the model is distributing credit across an incomplete picture and presenting it as a complete one.
The second issue is correlation versus causation. MTA models observe which touchpoints appeared in journeys that ended in conversion. They do not tell you whether those touchpoints caused the conversion. A customer who was already going to buy might have clicked a retargeting ad on the way to the checkout page. The MTA model records that as a contributing touchpoint. The retargeting ad gets credit. But removing that ad might not have changed the outcome at all.
This is the distinction between attribution and incrementality, and it is a meaningful one. Attribution tells you what was present in the journey. Incrementality tells you what actually moved the needle. They are not the same thing, and confusing them leads to budget decisions that optimise for correlation rather than causation.
How View-Through Attribution Distorts Channel Performance
View-through attribution deserves its own section because it is one of the most aggressively overclaimed metrics in digital advertising. It gives conversion credit to an ad that was served to a user who later converted, even if they never clicked it. The logic is that the impression influenced the decision.
Sometimes that is true. Brand awareness campaigns do influence behaviour without generating clicks. The problem is that view-through windows are often set to 30 days or longer, which means an ad served to someone who was already in the market and converted through a completely different channel still gets partial credit. Platforms that rely heavily on view-through attribution, particularly display and video networks, have a commercial incentive to set long windows and claim credit generously.
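The window length is doing most of the work in those numbers. A toy comparison with hypothetical timestamps shows how the same impressions claim three times as many conversions under a 30-day window as under a 1-day one:

```python
from datetime import datetime, timedelta

# Hypothetical (impression_time, conversion_time) pairs for converted users
# who were served a display ad but never clicked it.
pairs = [
    (datetime(2025, 3, 1), datetime(2025, 3, 1, 18)),   # converted the same day
    (datetime(2025, 3, 1), datetime(2025, 3, 9)),       # converted 8 days later
    (datetime(2025, 3, 1), datetime(2025, 3, 28)),      # converted 27 days later
]

def view_through_claims(pairs, window_days):
    """Count conversions the channel claims under a given lookback window."""
    window = timedelta(days=window_days)
    return sum(1 for seen, bought in pairs if bought - seen <= window)

print(view_through_claims(pairs, 1))    # -> 1 conversion claimed
print(view_through_claims(pairs, 30))   # -> 3 conversions claimed
```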
When I was managing large display budgets, view-through attribution made the numbers look excellent. The CPAs were low, the ROAS was strong, and the channel looked indispensable. When we ran holdout tests to measure actual incrementality, the picture was quite different. A portion of those conversions would have happened anyway. The display spend was capturing credit for demand it had not created.
This is not an argument against display advertising. It is an argument for being clear-eyed about what view-through attribution actually measures, and not using it as the primary basis for channel investment decisions.
The Organisational Problem Nobody Talks About
Attribution models do not exist in a vacuum. They exist inside organisations where channel teams have budget targets, performance reviews, and a strong incentive to look good. This creates a structural bias that is rarely acknowledged openly.
When paid search teams control their own attribution settings, they tend to use models that favour paid search. When social teams report on social performance, they use platform-native attribution that credits social generously. When these numbers feed into a centralised dashboard, you end up with a picture where every channel looks like it is performing well, the total attributed revenue exceeds actual revenue by a wide margin, and nobody is quite sure what is actually working.
I have been in those conversations. The paid search team has their numbers. The social team has their numbers. The email team has their numbers. Add them up and you are attributing three times your actual revenue. Everyone is right by their own model, and the marketing director has to make budget decisions from a set of reports that are structurally incompatible with each other.
The question of whether dashboards are generating insight or just generating activity is one that marketing leaders need to ask more honestly. A dashboard that aggregates incompatible attribution models is not a measurement system. It is a political document.
What Good Measurement Actually Looks Like in 2025
The answer is not to find a better attribution model. There is no attribution model that solves the problems described above, because most of those problems are data problems, not modelling problems. Better models applied to incomplete data still produce incomplete answers.
What good measurement looks like in 2025 is a combination of approaches used together, with an honest acknowledgement of what each one can and cannot tell you.
Attribution models are useful for understanding the observable touchpoint mix and identifying obvious inefficiencies. They should not be used as the sole basis for major budget decisions. Media mix modelling (MMM) provides a top-down view of channel contribution using statistical regression across historical data, and it does not depend on individual user tracking, which makes it more resilient to the current data environment. Incrementality testing, where you deliberately withhold a channel or campaign from a control group and measure the difference in outcomes, is the closest thing to causal measurement available to most teams.
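To make the MMM idea concrete, here is a heavily simplified sketch: an ordinary least squares fit of weekly revenue against weekly channel spend, on made-up numbers. Real MMM adds adstock, saturation curves, seasonality, and far more historical data, but the regression core looks like this:

```python
import numpy as np

# Hypothetical weekly spend (columns: search, social, display) and revenue.
spend = np.array([
    [10.0, 5.0, 8.0],
    [12.0, 6.0, 7.0],
    [ 9.0, 8.0, 6.0],
    [11.0, 7.0, 9.0],
    [13.0, 5.0, 8.0],
    [10.0, 9.0, 7.0],
])
revenue = np.array([120.0, 135.0, 118.0, 130.0, 138.0, 128.0])

# Fit revenue ~ baseline + coefficients * spend via least squares.
# No user-level tracking involved: only aggregate spend and outcomes.
X = np.column_stack([np.ones(len(spend)), spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
baseline, contributions = coef[0], coef[1:]
for name, c in zip(["search", "social", "display"], contributions):
    print(f"{name}: ~{c:.2f} revenue per unit of spend")
```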
None of these approaches is perfect. MMM requires significant historical data and is slow to respond to changes. Incrementality testing is resource-intensive and difficult to run at scale. Attribution models are fast and accessible but structurally limited. Used together, with appropriate scepticism applied to each, they provide a more honest approximation of reality than any single model can.
Forrester’s guidance on the right questions to ask about marketing measurement reinforces this: the goal is not measurement perfection. It is measurement that is honest enough to make better decisions.
If you are working through how to structure your measurement approach in GA4 specifically, the Moz guide to GA4 custom event tracking is a useful technical reference for getting the data layer right before you worry about attribution models.
There is more on building measurement frameworks that hold up under scrutiny across the Marketing Analytics section of The Marketing Juice, including pieces on GA4 setup, dashboard design, and the practical limits of performance data.
The Practical Steps Worth Taking Now
If you are managing marketing measurement for a business and want to make it more honest without rebuilding everything from scratch, there are a few practical steps that make a meaningful difference.
First, audit your attribution settings across every platform and make sure you understand what each one is measuring and how. The settings that platforms default to are not neutral. They are designed to make the platform look good. Know what you have agreed to before you trust the numbers.
Second, look at your direct traffic volume and ask whether it is plausible. Unusually high direct traffic is often a sign of dark social, broken UTM parameters, or HTTPS to HTTP referral stripping. Understanding what is hiding in direct traffic gives you a more accurate picture of where engaged visitors are actually coming from. The Unbounce breakdown of content marketing metrics includes useful context on how traffic source data can mislead if you take it at face value.
Third, establish a consistent attribution model for internal reporting and stick to it. The model you choose matters less than the consistency. Switching attribution models mid-year makes trend analysis meaningless and creates the appearance of performance changes that are actually just modelling artefacts.
Fourth, run at least one incrementality test per quarter on a significant channel. It does not need to be sophisticated. A simple holdout test, where a percentage of your audience is excluded from a campaign and you measure whether their conversion rate differs from the exposed group, will tell you more about actual channel contribution than any attribution model can.
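A minimal version of that analysis, with hypothetical counts and a two-proportion z-test as one reasonable check on whether the difference is distinguishable from noise:

```python
from math import sqrt

# Hypothetical holdout test: 10% of the audience excluded from the campaign.
exposed_users, exposed_conversions = 90_000, 2_070   # 2.30% conversion rate
holdout_users, holdout_conversions = 10_000, 205     # 2.05% conversion rate

p_exp = exposed_conversions / exposed_users
p_hold = holdout_conversions / holdout_users
lift = (p_exp - p_hold) / p_hold
print(f"Incremental lift: {lift:.1%}")   # conversions the channel actually added

# Two-proportion z-test: is the lift distinguishable from noise?
p_pool = (exposed_conversions + holdout_conversions) / (exposed_users + holdout_users)
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_users + 1 / holdout_users))
z = (p_exp - p_hold) / se
print(f"z = {z:.2f}  (|z| > 1.96 roughly corresponds to significance at the 5% level)")
```

Note that with these made-up numbers the lift does not clear the significance bar, which is exactly the kind of honest answer attribution models never give you.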
Fifth, be honest with stakeholders about what the numbers mean. Attribution data is a perspective on reality, not reality itself. If you present it as certainty, you will eventually make budget decisions that the data cannot actually support, and the business will pay for it.
The Mailchimp overview of what a marketing dashboard should actually contain is a reasonable starting point for thinking about how to present measurement data in a way that is useful rather than just impressive-looking.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
