Multichannel Attribution: Stop Optimising for the Model Instead of the Customer
Multichannel attribution is the process of assigning credit for a conversion across the multiple touchpoints a customer interacted with before buying. Done well, it tells you which channels are genuinely contributing to revenue. Done badly, which is most of the time, it tells you a story that flatters your last click and quietly buries everything that did the actual work.
The problem is not that attribution is hard. The problem is that most teams treat their chosen model as ground truth rather than as one perspective on a complicated experience. That distinction matters enormously when you are making budget decisions worth hundreds of thousands of pounds.
Key Takeaways
- No attribution model is objectively correct. Each one encodes assumptions about customer behaviour that may or may not match your actual buyers.
- Last-click attribution systematically undervalues awareness and consideration channels, which means brands that rely on it tend to defund the channels that create demand.
- Data-driven attribution is more sophisticated but requires significant conversion volume to produce reliable outputs, and it still cannot account for offline influence.
- The goal is not a perfect model. It is an honest approximation that informs better budget decisions over time.
- Triangulating across multiple measurement approaches, including media mix modelling, incrementality testing, and platform data, produces more defensible conclusions than any single model alone.
In This Article
- Why Attribution Gets Treated as a Solved Problem When It Isn’t
- What the Main Attribution Models Actually Assume
- The Channel Bias Problem Nobody Talks About Enough
- What Incrementality Testing Adds to the Picture
- Media Mix Modelling: When You Need to See the Whole Picture
- GA4 and the Attribution Settings Most Teams Ignore
- Building an Attribution Framework That Actually Informs Decisions
- The Honest Conversation Attribution Makes Possible
Why Attribution Gets Treated as a Solved Problem When It Isn’t
Early in my career, when paid search was still relatively new and the measurement infrastructure was genuinely simple, it felt like we had cracked something. You ran a campaign, you tracked the click, you counted the conversion. The feedback loop was tight and the numbers were clean. I remember launching a paid search campaign for a music festival, watching six figures of revenue land within roughly 24 hours, and thinking: this is the most measurable thing in marketing.
It was. For that specific moment in time, with that specific customer behaviour. But the industry took that clarity and generalised it into something it was never designed to be: a universal framework for understanding how marketing works across all channels, all customer types, and all purchase cycles.
The result is that last-click attribution became the default, not because it was the most accurate, but because it was the easiest to implement and the easiest to defend in a meeting. If someone asked why you spent money on a channel, you could point to a conversion and say: that click drove that sale. Clean. Simple. Wrong.
If you want a broader grounding in how analytics tools should and should not be used to make marketing decisions, the Marketing Analytics hub on The Marketing Juice covers the full landscape, from GA4 fundamentals to measurement strategy.
What the Main Attribution Models Actually Assume
Every attribution model is a set of assumptions dressed up as a measurement system. Understanding what those assumptions are is the first step to using them intelligently.
Last-click attribution assumes the final touchpoint before conversion deserves all the credit. It is intuitive and easy to implement. It also systematically rewards channels that are good at closing, like branded paid search and direct, while ignoring everything that brought the customer into the funnel in the first place. If you run a brand awareness campaign on YouTube that drives 40,000 people to search for your brand, and then a branded search ad converts them, last-click gives 100% of the credit to the search ad. The YouTube campaign looks like it did nothing.
First-click attribution has the opposite problem. It rewards the channel that introduced the customer and ignores everything that nudged them toward a decision. Useful if you are specifically trying to understand what drives new customer acquisition, but misleading as a general budget allocation tool.
Linear attribution splits credit equally across all touchpoints. It sounds fair. In practice, it treats a display impression someone barely noticed as equivalent to a product page visit that lasted eight minutes. Equal credit is not the same as accurate credit.
Time-decay attribution gives more weight to touchpoints closer to the conversion. This is more defensible for short sales cycles but can undervalue early-stage channels in longer consideration journeys, exactly the kind of experience that characterises B2B buying or high-value consumer purchases.
Position-based attribution (sometimes called the U-shaped model) gives 40% credit to the first touchpoint, 40% to the last, and distributes the remaining 20% across the middle. This is a reasonable compromise if you believe both acquisition and conversion are important, but the 40/20/40 split is arbitrary. There is no empirical basis for those specific numbers.
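To see how mechanical these rules really are, here is a minimal sketch of the rule-based models above applied to a single invented touchpoint path. The path, the seven-day half-life, and the 40/20/40 split are illustrative assumptions, not recommendations.

```python
# Minimal rule-based attribution sketch. Path and parameters are
# invented for illustration.

def last_click(path):
    return {path[-1]: 1.0}

def first_click(path):
    return {path[0]: 1.0}

def linear(path):
    # equal credit to every touchpoint
    share = 1.0 / len(path)
    credit = {}
    for ch in path:
        credit[ch] = credit.get(ch, 0.0) + share
    return credit

def time_decay(path, half_life=7.0):
    # treat position distance from conversion as "days out": the final
    # touchpoint gets weight 1.0 and weight halves every `half_life` steps
    days_out = range(len(path) - 1, -1, -1)
    weights = [0.5 ** (d / half_life) for d in days_out]
    total = sum(weights)
    credit = {}
    for ch, w in zip(path, weights):
        credit[ch] = credit.get(ch, 0.0) + w / total
    return credit

def position_based(path, first=0.4, last=0.4):
    # 40% to first touch, 40% to last, remainder spread across the middle
    if len(path) == 1:
        return {path[0]: 1.0}
    middle_share = 1.0 - first - last
    credit = {path[0]: first}
    credit[path[-1]] = credit.get(path[-1], 0.0) + last
    middle = path[1:-1]
    if middle:
        for ch in middle:
            credit[ch] = credit.get(ch, 0.0) + middle_share / len(middle)
    else:  # two touchpoints: split the middle share between the ends
        credit[path[0]] += middle_share / 2
        credit[path[-1]] += middle_share / 2
    return credit

path = ["youtube", "organic_search", "email", "branded_search"]
for model in (last_click, first_click, linear, time_decay, position_based):
    print(f"{model.__name__:15s} {model(path)}")
```

Run against the same four-touch path, the models hand out materially different credit. The data has not changed; only the assumptions have.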
Data-driven attribution uses machine learning to assign credit based on the actual conversion paths in your data. Google’s version, which is now the default in GA4, compares converting paths against non-converting paths to estimate each touchpoint’s marginal contribution. It is more sophisticated than rule-based models, but it requires substantial conversion volume to produce reliable results, and it is still constrained by what Google can observe within its own ecosystem. As Moz has noted in its analysis of GA4, the outputs should be treated as directional rather than definitive.
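Google does not publish the internals of its model, but the underlying idea of comparing converting against non-converting paths can be made concrete with a crude sketch. To be clear, this is not GA4’s algorithm: it is a toy removal-style comparison over invented paths, shown only to illustrate the logic.

```python
# Toy illustration of the idea behind data-driven attribution: compare
# how often paths containing a channel convert versus paths that do not.
# This is NOT Google's algorithm, and the paths are invented.
paths = [
    (["youtube", "branded_search"], True),
    (["youtube", "email", "branded_search"], True),
    (["display", "branded_search"], False),
    (["branded_search"], True),
    (["display"], False),
    (["youtube", "display"], False),
]

def conversion_rate(subset):
    if not subset:
        return 0.0
    return sum(converted for _, converted in subset) / len(subset)

channels = {ch for path, _ in paths for ch in path}
for ch in sorted(channels):
    with_ch = [(p, c) for p, c in paths if ch in p]
    without_ch = [(p, c) for p, c in paths if ch not in p]
    diff = conversion_rate(with_ch) - conversion_rate(without_ch)
    print(f"{ch:15s} with: {conversion_rate(with_ch):.2f}  "
          f"without: {conversion_rate(without_ch):.2f}  diff: {diff:+.2f}")
```

Even this toy shows why conversion volume matters: with six paths, email looks heroic on a sample of one.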
The Channel Bias Problem Nobody Talks About Enough
When I was running agencies and reviewing client attribution reports, one pattern repeated itself across almost every account: the channels that were easiest to measure looked the most valuable, and the channels that were hardest to measure looked like they were underperforming.
This is not a coincidence. It is a structural bias built into how most attribution systems work. Paid search, particularly branded search, sits at the bottom of the funnel where intent is already high and conversion rates are strong. It captures demand that was often created elsewhere. But because it generates a clean, trackable click, it gets the credit.
Display advertising, social video, and organic content tend to operate higher in the funnel. They influence consideration and build brand familiarity. But they generate fewer direct clicks to conversion, so they look weak in last-click models. Teams defund them. Branded search volume drops. They restore the budget. Volume recovers, but the last-click report credits the recovery to branded search, so they conclude the brand channels were never needed and the cycle begins again. This cycle is expensive and it plays out more often than most CMOs would like to admit.
The HubSpot distinction between marketing analytics and web analytics is relevant here. Web analytics tells you what happened on your site. Marketing analytics is supposed to tell you why. Attribution models that only track on-site behaviour are answering the wrong question.
What Incrementality Testing Adds to the Picture
The most rigorous way to understand whether a channel is genuinely driving revenue is incrementality testing: running a controlled experiment where one group of users sees your advertising and a matched control group does not, then measuring the difference in conversion rates between the two groups.
This is harder to run than pulling a report from GA4, but it answers a fundamentally different and more important question. Attribution models tell you which channels were present in converting journeys. Incrementality testing tells you which channels actually caused conversions that would not have happened otherwise.
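If you want to sanity-check a test yourself, the core arithmetic is straightforward. Here is a minimal sketch, with invented group sizes and conversion counts, that computes the relative lift and a two-sided significance test on the difference in conversion rates.

```python
# Minimal incrementality read-out. Group sizes and conversion counts
# are invented; plug in your own test results.
from statistics import NormalDist

def incrementality(test_n, test_conv, ctrl_n, ctrl_conv):
    cr_test = test_conv / test_n
    cr_ctrl = ctrl_conv / ctrl_n
    lift = (cr_test - cr_ctrl) / cr_ctrl          # relative incremental lift
    # two-proportion z-test: is the difference likely to be real?
    pooled = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = (pooled * (1 - pooled) * (1 / test_n + 1 / ctrl_n)) ** 0.5
    z = (cr_test - cr_ctrl) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return lift, p_value

lift, p = incrementality(test_n=50_000, test_conv=1_150,
                         ctrl_n=50_000, ctrl_conv=1_000)
print(f"incremental lift: {lift:.1%}, p-value: {p:.3f}")
```

The lift tells you whether the effect is big enough to matter; the p-value tells you whether it is likely to be real. You need both before reallocating budget.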
I have seen accounts where incrementality tests revealed that a significant portion of conversions attributed to a retargeting campaign would have happened anyway. The users already had high purchase intent. The retargeting ads were following people who had already decided to buy. The attribution model called it a success. The incrementality test called it waste.
Not every team has the budget or the traffic volume to run clean incrementality tests across every channel. But running them periodically on your highest-spend channels, particularly retargeting and branded search, is worth the effort. The results are often uncomfortable, which is exactly why they are valuable.
Media Mix Modelling: When You Need to See the Whole Picture
Media mix modelling (MMM) takes a different approach entirely. Rather than tracking individual user journeys, it uses statistical regression to model the relationship between marketing spend across channels and aggregate business outcomes like revenue or sales volume. It can incorporate offline channels, seasonality, pricing changes, and competitive activity in ways that user-level attribution cannot.
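To make that concrete, here is a deliberately simplified sketch: weekly revenue regressed against channel spend and a seasonality flag, with a crude geometric adstock transform standing in for carryover effects. Every number is synthetic, and real MMM work involves far more careful treatment of saturation, lags, and external factors.

```python
# Toy media mix regression on synthetic data. All figures are invented
# so the example runs end to end.
import numpy as np

rng = np.random.default_rng(42)
weeks = 104
tv = rng.uniform(0, 100, weeks)      # weekly TV spend, in thousands
search = rng.uniform(0, 50, weeks)   # weekly search spend, in thousands
december = np.zeros(weeks)
december[48:52] = 1.0                # crude seasonality flag, year one
december[100:104] = 1.0              # and year two

def adstock(spend, carryover=0.5):
    # geometric carryover: this week's effective spend includes a
    # decayed share of last week's effective spend
    out = np.zeros_like(spend)
    for t in range(len(spend)):
        out[t] = spend[t] + (carryover * out[t - 1] if t > 0 else 0.0)
    return out

# synthetic "true" revenue generated from known coefficients plus noise
revenue = (200 + 1.8 * adstock(tv) + 3.0 * search
           + 150 * december + rng.normal(0, 25, weeks))

# ordinary least squares: revenue against intercept, adstocked TV,
# search, and the seasonality dummy
X = np.column_stack([np.ones(weeks), adstock(tv), search, december])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(dict(zip(["base", "tv", "search", "december"], coef.round(2))))
```

The recovered coefficients land close to the values used to generate the data, which is both the promise and the trap of MMM: with clean inputs it works, with confounded inputs it confidently fits noise.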
MMM fell out of fashion during the rise of digital attribution because it felt slow and expensive compared to the real-time dashboards that digital platforms offered. It is coming back, partly because of the collapse of third-party cookies and the growing unreliability of user-level tracking, and partly because the industry has started to remember that most marketing does not happen in a browser.
The limitation of MMM is that it operates at an aggregate level. It tells you that TV drove X% of revenue in Q3. It cannot tell you which specific customers were influenced by which specific TV spots. For strategic budget allocation across channels, it is often more reliable than digital attribution. For campaign-level optimisation, you still need more granular data.
The practical answer for most teams is to use both. MMM for strategic channel allocation decisions, digital attribution for tactical campaign management, and incrementality testing to periodically validate both. None of these tools is complete on its own. Used together, they produce something closer to an honest picture.
GA4 and the Attribution Settings Most Teams Ignore
GA4 defaults to data-driven attribution for conversion reporting, but it also lets you switch between data-driven and last-click models in the Attribution settings and compare them in the Model Comparison report. Most teams set up GA4, accept the defaults, and never look at this again.
That is a missed opportunity. Comparing last-click against data-driven attribution across your key conversion events will often reveal significant differences in how credit is distributed. Channels that look weak under last-click frequently look more valuable under data-driven, and vice versa. Those differences are worth understanding before you make budget decisions.
There are also important limitations in how GA4 handles attribution that are worth being aware of. The default lookback window is 30 days for acquisition conversion events and 90 days for all other conversion events. If your purchase cycle is longer than that, you are missing touchpoints. GA4 also cannot track across devices unless users are logged in, and it cannot see anything that happens outside Google’s ecosystem. As the Moz overview of GA4’s key characteristics makes clear, understanding what the tool does not measure is as important as understanding what it does.
For teams running significant paid social spend, the discrepancy between GA4 attribution and platform-reported conversions is a persistent headache. Meta’s attribution window defaults are different from Google’s. Both platforms have an incentive to claim credit for conversions. Neither is giving you an unbiased view. Cross-referencing platform data with GA4 and, where possible, with your CRM or order management system, gives you a more grounded picture than trusting any single source.
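A basic reconciliation exercise makes the problem visible. The numbers below are invented, but the pattern, with platform-claimed conversions summing to more than your actual orders, is one you will recognise if you have ever run the comparison.

```python
# Hypothetical reconciliation: compare each source's reported conversions
# against the CRM (or order system) as the grounding baseline.
# All figures are invented for the example.
reported = {
    "crm": 1_000,        # baseline: actual orders
    "ga4": 870,          # typically undercounts (consent, blockers, devices)
    "meta": 1_450,       # platforms claim generously within their windows
    "google_ads": 1_200,
}

baseline = reported["crm"]
for source, conversions in reported.items():
    print(f"{source:12s} {conversions:5d}  "
          f"({conversions / baseline:.0%} of CRM orders)")
```

When the platform columns add up to well over the CRM baseline, you are looking at overlapping credit claims, not extra conversions.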
Building an Attribution Framework That Actually Informs Decisions
When I was growing an agency from around 20 people to over 100, one of the things I learned about measurement is that the question matters more than the tool. Teams that got into trouble were the ones that pulled whatever report was easiest to generate and then made decisions based on it. Teams that got it right started with the business question and then worked backwards to find the data that could answer it.
For attribution, the relevant business questions are usually: which channels are genuinely driving new customers, which channels are supporting conversion for customers who would have bought anyway, and where should we put incremental budget to grow revenue rather than just measure it differently.
A practical framework for most teams looks something like this:
- Use data-driven attribution in GA4 as your primary digital measurement tool, but treat it as directional rather than definitive.
- Supplement it with platform-level data, applying healthy scepticism to any platform’s self-reported numbers.
- Run incrementality tests on your highest-spend channels at least once a year.
- If your business has significant offline spend or a complex channel mix, invest in even a simplified version of media mix modelling to validate your digital attribution findings.
Document the assumptions your model makes and make sure the people using the data understand them. An attribution model is only dangerous when it is treated as objective truth. When it is understood as a structured approximation, it becomes a useful tool for making better decisions over time.
Understanding attribution properly also means understanding how it connects to the broader analytics stack. If you are building or reviewing your measurement infrastructure, the articles across the Marketing Analytics and GA4 hub cover everything from GA4 setup to reporting frameworks that hold up under scrutiny.
The Honest Conversation Attribution Makes Possible
One of the more useful things a rigorous attribution conversation does is force honesty about what different channels are actually for. Not every channel should be measured against last-click conversion rates. A YouTube campaign designed to build brand familiarity should be measured against brand search uplift, aided awareness, and long-term revenue contribution, not against direct conversions in a 30-day window.
When I was judging the Effie Awards, the entries that stood out were the ones that had thought carefully about what success looked like for each element of their campaign and had built measurement frameworks that matched. The entries that struggled were the ones that had applied a single conversion metric to every channel and then been surprised when brand-building activity looked weak against a direct response benchmark.
Attribution is not just a technical problem. It is a strategic one. How you measure channels shapes how you value them, and how you value them shapes how you invest in them. Teams that measure everything against last-click conversion tend to over-invest in the bottom of the funnel and starve the top. Over time, that produces a business that is very good at converting existing demand and very poor at creating new demand. The numbers look fine until the pipeline dries up.
The goal is not to find a model that makes your current spend look justified. It is to find a measurement approach honest enough to tell you when you are wrong. That is a harder standard to meet, and it requires more intellectual honesty from everyone involved, but it is the only version of attribution that is actually worth doing.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
