Marketing Attribution: Stop Optimising for the Last Click
Marketing attribution is the process of assigning credit to the channels, campaigns, and touchpoints that contributed to a conversion. In practice, it is one of the most contested, imperfectly solved, and commercially important problems in marketing. No model gets it completely right. The question is which imperfection you can live with, and which decisions you are willing to make on the back of it.
Most marketers are optimising for the wrong thing. They are chasing attribution accuracy when they should be chasing attribution usefulness. Those are not the same goal, and confusing them is expensive.
Key Takeaways
- No attribution model is accurate. Every model is a simplification. The goal is a useful simplification, not a perfect one.
- Last-click attribution systematically undervalues awareness and mid-funnel channels, which distorts budget allocation over time.
- Data-driven attribution sounds more objective than it is. It still depends on the data you feed it, and GA4’s version has real blind spots.
- The channels that look worst in attribution reports are often the ones doing the most work. Paid social and display are routinely undercredited.
- Attribution should inform budget decisions, not make them. Human judgement, business context, and incrementality thinking still matter.
In This Article
- Why Attribution Is a Political Problem as Much as a Technical One
- What the Common Attribution Models Actually Assume
- The Last-Click Problem Is Worse Than Most People Realise
- Data-Driven Attribution: More Objective, or Just More Opaque?
- Incrementality: The Closest Thing to a Real Answer
- Media Mix Modelling: Useful Context, Not a Silver Bullet
- How to Choose an Attribution Model That Fits Your Business
- The Channels That Attribution Gets Wrong Most Often
- What Good Attribution Practice Actually Looks Like
Why Attribution Is a Political Problem as Much as a Technical One
I spent years managing large media budgets across multiple channels simultaneously, and I can tell you that attribution conversations are rarely just about data. They are about whose channel gets the credit, whose team looks good in the board report, and whose budget survives the next planning cycle. The technical question of which model to use is almost always secondary to the organisational question of who benefits from the answer.
When I was running an agency and managing performance marketing across a range of clients, we would regularly see paid search teams citing last-click data to justify their budgets, while the social and display teams would argue, correctly, that they were doing work that never showed up in the conversion report. Both sides were telling the truth. The model was just picking a winner arbitrarily.
This is worth naming clearly: attribution models do not reveal the truth. They produce a version of the truth that is shaped by the model’s assumptions. Last-click assumes the final touchpoint deserves all the credit. First-click assumes the first one does. Linear assumes every touchpoint contributed equally. None of these assumptions are empirically defensible. They are conventions. Useful ones, sometimes, but conventions nonetheless.
If you want a broader grounding in how analytics thinking should shape marketing decisions, the Marketing Analytics hub covers the full landscape, from measurement frameworks to GA4 implementation.
What the Common Attribution Models Actually Assume
Before choosing a model, you need to understand what each one is actually saying about customer behaviour. Most marketers skip this step and just use whatever their platform defaults to, which is usually last-click.
Last-click attribution gives 100% of the credit to the final touchpoint before conversion. It is simple, easy to implement, and deeply misleading for any brand running multi-channel campaigns. It systematically rewards the channels that appear at the bottom of the funnel, typically branded paid search, and punishes everything that built awareness or consideration earlier in the journey. If you have been running last-click for years, your budget allocation is probably skewed in ways you have not noticed.
First-click attribution is the mirror image. It credits the channel that first introduced the customer to the brand. This is useful if you are specifically trying to understand which channels drive new customer acquisition, but it ignores everything that happened between discovery and purchase.
Linear attribution spreads credit equally across all touchpoints. It is more democratic than last-click, but it treats a display impression and a product page visit as equally valuable, which is rarely true.
Time-decay attribution gives more credit to touchpoints closer to the conversion. This makes intuitive sense for short sales cycles, but it penalises awareness channels by design, which can lead to the same underfunding problem as last-click over time.
Position-based attribution, sometimes called U-shaped, splits the majority of credit between the first and last touchpoints and distributes the remainder across the middle. It is a reasonable compromise for brands that care about both acquisition and conversion, but it is still a convention, not a measurement.
Data-driven attribution uses machine learning to distribute credit based on observed conversion patterns. GA4 now defaults to this model, and it sounds more rigorous than the rule-based alternatives. It is, in some ways. But it is only as good as the data it is trained on, and it has a minimum data threshold that makes it unreliable for lower-traffic accounts. It also cannot account for channels that are not tracked in GA4, which is a significant limitation for anyone running offline, TV, or out-of-home activity. Moz has a useful breakdown of how GA4 handles conversion tracking and where the data gaps tend to appear.
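To make these assumptions concrete, here is a minimal sketch of how the rule-based models distribute credit across a single conversion path. It is illustrative rather than a production implementation: the channel names, the seven-day half-life, the one-touch-per-day timing, and the 40/40/20 position-based split are assumptions chosen for the example.

```python
# A minimal sketch of rule-based attribution. A conversion path is an
# ordered list of channel touchpoints; each model returns a dict mapping
# channel -> share of credit, summing to 1.0.

def last_click(path):
    return {path[-1]: 1.0}

def first_click(path):
    return {path[0]: 1.0}

def linear(path):
    credit = {}
    for ch in path:
        credit[ch] = credit.get(ch, 0.0) + 1.0 / len(path)
    return credit

def time_decay(path, half_life_days=7):
    # Assume one touchpoint per day, the last on the conversion day, and
    # halve a touchpoint's weight for every half_life_days before that.
    days_out = range(len(path) - 1, -1, -1)
    weights = [2 ** (-d / half_life_days) for d in days_out]
    total = sum(weights)
    credit = {}
    for ch, w in zip(path, weights):
        credit[ch] = credit.get(ch, 0.0) + w / total
    return credit

def position_based(path, first=0.4, last=0.4):
    # U-shaped: most credit to the first and last touches, rest in between.
    if len(path) == 1:
        return {path[0]: 1.0}
    middle, remainder = path[1:-1], 1.0 - first - last
    if not middle:  # two touches: split the middle share between the ends
        first, last = first + remainder / 2, last + remainder / 2
    credit = {path[0]: first}
    for ch in middle:
        credit[ch] = credit.get(ch, 0.0) + remainder / len(middle)
    credit[path[-1]] = credit.get(path[-1], 0.0) + last
    return credit

path = ["display", "paid_social", "email", "branded_search"]
for model in (last_click, first_click, linear, time_decay, position_based):
    print(f"{model.__name__:15s} {model(path)}")
```

Run that on a typical path from your own data and you will see the point immediately: the same journey produces five different credit splits, and none of them measured anything. They applied a rule.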
The Last-Click Problem Is Worse Than Most People Realise
I want to spend more time on last-click because it is still the dominant model in practice, even among sophisticated marketing teams. The damage it does to budget allocation is cumulative and slow-moving, which makes it hard to see until something breaks.
Here is what typically happens. A brand runs paid search, paid social, display, and email. In the last-click report, paid search looks exceptional. It has high conversion rates, low cost per acquisition, and strong return on ad spend. Paid social looks mediocre. Display looks terrible. Email looks decent. So over the next few planning cycles, the brand shifts budget toward paid search and away from social and display.
For a while, nothing changes. Paid search continues to perform well. Then, gradually, the volume starts to drop. Branded search terms get fewer impressions. New customer acquisition slows. The brand starts to feel like it is fishing in a smaller and smaller pond. What happened? The channels that were filling the top of the funnel, and feeding branded search with intent, were defunded. Paid search was not creating demand. It was capturing demand that other channels had already created. When those channels were cut, the demand dried up.
I have seen this play out more than once. Not as a theoretical risk, but as a real commercial problem that took months to diagnose because the attribution data was pointing in the wrong direction the entire time. Unbounce has a clear-eyed piece on making analytics useful rather than just accurate that touches on exactly this kind of structural problem.
Data-Driven Attribution: More Objective, or Just More Opaque?
The shift to data-driven attribution in GA4 has been broadly welcomed, and I understand why. It feels more scientific than assigning credit by a rule someone invented. But I have some reservations worth sharing.
First, data-driven attribution requires volume to work well. If you are running a B2B SaaS business with 50 conversions a month, the model does not have enough data to produce reliable outputs. GA4 has minimum thresholds for data-driven attribution, and if you fall below them, the model either reverts to last-click or produces outputs that look precise but are not.
Second, data-driven attribution can only credit what it can see. If a customer saw a TV ad, read a review on a third-party site, and then converted via a Google search, the model sees only the search. The TV and the review are invisible. The model will credit search with the conversion, not because search deserves full credit, but because it is the only touchpoint the model has access to. This is not a flaw in the model’s logic. It is a flaw in the data it is working with. For teams considering exporting GA4 data to get more control over this, Moz’s Whiteboard Friday on GA4 and BigQuery is worth watching.
Third, the opacity of machine learning models makes them hard to interrogate. With a rule-based model, you can explain exactly why a channel received the credit it did. With data-driven attribution, you are trusting an algorithm you cannot fully inspect. That is not necessarily a reason to avoid it, but it is a reason to treat its outputs with the same critical eye you would apply to anything else.
Incrementality: The Closest Thing to a Real Answer
If attribution models are all approximations, what gets you closer to the truth? Incrementality testing is the most rigorous answer available to most marketing teams, even if it is impractical to run continuously.
An incrementality test asks a different question from attribution modelling. Attribution asks: which channel gets credit for this conversion? Incrementality asks: would this conversion have happened without this channel? The second question is more useful, because it gets at causation rather than correlation.
The simplest form of incrementality testing is a geo holdout test. You run your campaign in some markets and not others, then compare conversion rates between the two groups. If the markets where you ran the campaign converted at a meaningfully higher rate, the campaign was incremental. If conversion rates were similar in both groups, the campaign was largely capturing demand that would have existed anyway.
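If you want to see what the readout involves, here is a minimal sketch of the arithmetic, using a standard two-proportion z-test. All of the figures are invented for illustration, and a real test would also need careful geo matching and an upfront power calculation.

```python
# A minimal sketch of a geo holdout readout: compare conversion rates in
# test markets (campaign on) against control markets (campaign off).
from math import sqrt

def incrementality_readout(test_conv, test_visitors, ctrl_conv, ctrl_visitors):
    p_test = test_conv / test_visitors
    p_ctrl = ctrl_conv / ctrl_visitors
    lift = (p_test - p_ctrl) / p_ctrl  # relative lift over the holdout
    # Two-proportion z-test against a pooled "no difference" null.
    p_pool = (test_conv + ctrl_conv) / (test_visitors + ctrl_visitors)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_visitors + 1 / ctrl_visitors))
    return p_test, p_ctrl, lift, (p_test - p_ctrl) / se

p_test, p_ctrl, lift, z = incrementality_readout(
    test_conv=1_250, test_visitors=50_000,  # campaign-on geos (invented)
    ctrl_conv=1_000, ctrl_visitors=50_000,  # holdout geos (invented)
)
print(f"test {p_test:.2%} vs control {p_ctrl:.2%}, lift {lift:+.1%}, z = {z:.2f}")
# A |z| above roughly 1.96 means the gap is unlikely to be noise at ~95%.
```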
I applied a version of this thinking early in my career, before it had a formal name. When I was at lastminute.com, we launched a paid search campaign for a music festival and saw six figures of revenue within roughly a day. The numbers were striking, but the honest question was how much of that would have come through organic search or direct traffic anyway. We did not have the tools to answer that rigorously at the time. The instinct to ask it, though, is exactly what incrementality testing formalises.
Incrementality testing has limitations. It requires sufficient scale to produce statistically meaningful results. It is time-consuming to set up properly. And it gives you a snapshot answer rather than a continuous one. But it is the most honest measurement available, and running even occasional tests will calibrate your attribution model outputs in ways that improve your budget decisions.
Media Mix Modelling: Useful Context, Not a Silver Bullet
Media mix modelling, or MMM, takes a different approach to attribution entirely. Rather than tracking individual user journeys, it uses statistical regression to model the relationship between marketing spend and business outcomes at an aggregate level. It can incorporate channels that are invisible to digital attribution, including TV, radio, print, and out-of-home, and it can account for external factors like seasonality, pricing changes, and competitor activity.
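To give a sense of the mechanics, here is a minimal sketch of that regression idea on invented weekly data: sales modelled as a function of channel spend plus a seasonality dummy, fitted with ordinary least squares. Real MMMs layer on adstock (carryover) and saturation transforms, which this deliberately skips.

```python
# A minimal sketch of the regression behind media mix modelling.
# All data is synthetic; a real MMM would add adstock and saturation.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104                                        # two years of weekly data
tv = rng.uniform(0, 100, weeks)                    # weekly TV spend (£k)
search = rng.uniform(0, 50, weeks)                 # weekly search spend (£k)
peak = (np.arange(weeks) % 52 > 44).astype(float)  # holiday-season dummy
sales = 200 + 1.8 * tv + 3.5 * search + 60 * peak + rng.normal(0, 25, weeks)

X = np.column_stack([np.ones(weeks), tv, search, peak])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
for name, c in zip(["baseline", "tv", "search", "seasonal_peak"], coef):
    print(f"{name:14s} {c:8.2f}")
# The tv and search coefficients estimate incremental sales per £k of
# spend, holding the other inputs fixed: the aggregate view that
# click-based attribution cannot give you.
```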
For large brands with significant offline spend, MMM is genuinely valuable. It provides a view of marketing effectiveness that no digital attribution tool can replicate. Forrester’s perspective on what marketing reporting should and should not try to do is a useful frame for understanding where MMM fits versus where simpler approaches are sufficient.
The limitations are significant, though. MMM requires a long historical data series to work well, typically two or more years of weekly data. It is expensive to build and maintain properly. And it operates at a level of aggregation that makes it difficult to use for tactical campaign decisions. It is a strategic planning tool, not an optimisation tool.
Most businesses do not need MMM. Most businesses need a cleaner version of what they already have: a sensible attribution model, some incrementality testing, and the discipline not to treat either as gospel.
How to Choose an Attribution Model That Fits Your Business
There is no universal right answer here, but there are some principles that hold across most situations.
Match the model to your sales cycle. If you sell something with a short, simple purchase journey, a last-click or time-decay model may be adequate. If you sell something with a long, complex journey involving multiple touchpoints over weeks or months, you need a model that accounts for the full path. Last-click will systematically mislead you.
Be honest about your data quality. Attribution models are only as good as the tracking behind them. If your GA4 implementation has gaps, if you are missing UTM parameters on campaigns, or if you have cross-device tracking blind spots, your attribution data is already compromised before you choose a model. Semrush has a solid overview of the metrics that matter across the funnel, which is a useful reference for thinking about what you actually need to track before worrying about attribution models.
Use multiple models in parallel. Rather than committing to one model and treating it as definitive, run two or three simultaneously and look for where they agree and disagree. The disagreements are where the interesting questions live. If last-click and data-driven attribution give you very different credit distributions for paid social, that is worth investigating rather than resolving by picking one.
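Here is a small sketch of what that comparison can look like, aggregating last-click and linear credit over a handful of invented paths and flagging the channels where the two models diverge most sharply.

```python
# A small sketch of comparing two models' channel-level credit shares.
# The paths are invented; in practice you would pull these from your
# analytics exports.
from collections import Counter

paths = [
    ["paid_social", "email", "branded_search"],
    ["display", "paid_social", "branded_search"],
    ["email", "branded_search"],
    ["paid_social", "branded_search"],
]

last_click, linear = Counter(), Counter()
for path in paths:
    last_click[path[-1]] += 1.0
    for ch in path:
        linear[ch] += 1.0 / len(path)

total = float(len(paths))
for ch in sorted(set(last_click) | set(linear)):
    lc, ln = last_click[ch] / total, linear[ch] / total
    flag = "  <- investigate" if abs(lc - ln) > 0.15 else ""
    print(f"{ch:15s} last-click {lc:6.1%}  linear {ln:6.1%}{flag}")
```

Even on four invented paths, branded search takes 100% of last-click credit while paid social takes none, and the linear model tells a very different story. That gap is the question worth asking, not the thing to average away.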
Do not let your attribution model make your budget decisions. Attribution should inform budget decisions, not automate them. A model that shows paid search driving 60% of conversions does not mean paid search should receive 60% of budget. It means paid search is visible in the data at the moment of conversion. What happens before that moment, and whether it would happen without other channels, is a separate question that the attribution model cannot answer on its own.
Revisit your model periodically. Channel mix changes. Customer behaviour changes. A model that was appropriate two years ago may be producing distorted outputs today. I have seen brands run on the same attribution setup for three or four years without questioning whether it still reflected how their customers actually bought. It usually did not.
The Channels That Attribution Gets Wrong Most Often
Some channels are structurally disadvantaged by most attribution models, and it is worth knowing which ones before you make budget decisions based on attribution data alone.
Paid social is the most commonly undercredited channel in digital attribution. People see ads on social platforms while scrolling, not while actively searching for something to buy. They may click later, or they may search directly, or they may come back through a different channel entirely. The connection between the social ad and the eventual conversion is real but rarely captured in click-based attribution. Crazyegg’s breakdown of engagement metrics across digital channels touches on this cross-channel complexity.
Display advertising suffers from the same problem, compounded by the fact that most display interactions are impressions rather than clicks. View-through attribution attempts to address this, but it is methodologically contested and easy to manipulate. The honest answer is that display is very hard to attribute accurately, which does not mean it is not working.
Content marketing and SEO are almost always undervalued in attribution models. A piece of content that ranks well and introduces thousands of people to your brand over months will rarely get credit in a last-click report, because the people it introduced will eventually convert through a different channel. The content’s contribution is real. It is just invisible to the model.
Branded paid search is almost always overcredited. It captures demand that other channels created. Turning it off entirely is rarely the right answer, but treating its strong attribution numbers as evidence that it is your most effective channel is a mistake I have seen made repeatedly.
What Good Attribution Practice Actually Looks Like
After twenty years of working across agencies, clients, and platforms, my view on attribution is fairly settled. Good attribution practice is not about finding the perfect model. It is about building a measurement system that gives you enough signal to make better decisions than you would make without it, while being honest about what it cannot tell you.
That means using a model that fits your business and your sales cycle. It means maintaining clean tracking so your data is worth attributing in the first place. It means running incrementality tests periodically to calibrate your model outputs against something closer to causal reality. And it means treating attribution data as one input into budget decisions, not the only one.
Forrester has written well about what to do once you have a dashboard, and the same logic applies to attribution: having the data is not the hard part. Knowing what to do with it, and what it cannot tell you, is where the real work is.
The Mailchimp resource on building a marketing dashboard is also worth reviewing if you are thinking about how attribution data should sit alongside other performance metrics in a reporting structure.
The marketers who get this right are not the ones with the most sophisticated attribution stack. They are the ones who understand what their model can and cannot see, and who build that uncertainty into how they interpret the data. That is a harder skill than implementing a tool. But it is the one that actually matters.
If you want to go deeper on the analytical thinking behind decisions like these, the Marketing Analytics hub covers everything from measurement frameworks to the practical realities of GA4 implementation in one place.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
