First Click vs Last Click Attribution: Why Both Are Lying to You
First click attribution gives all the credit for a conversion to the channel that started the experience. Last click attribution gives all the credit to the channel that ended it. Both models are simple, both are widely used, and both will systematically mislead you about where your marketing is actually working.
The question is not which one is more accurate. The question is whether you understand what each one is hiding, and whether you are making budget decisions based on a distorted picture of reality.
Key Takeaways
- Last click attribution systematically over-credits retargeting and brand search while starving awareness channels of budget they deserve.
- First click attribution over-invests in top-of-funnel channels and ignores the conversion work done further down the path.
- Single-touch models are not wrong because they are old; they are wrong because they assume one touchpoint causes a conversion when most journeys involve several.
- The right attribution model depends on your sales cycle, channel mix, and what decision you are trying to make, not on which model sounds most sophisticated.
- No attribution model tells you what would have happened without a given channel. For that, you need testing, not just reporting.
In This Article
- What Does First Click Attribution Actually Measure?
- What Does Last Click Attribution Actually Measure?
- Why Single-Touch Models Fail on Longer Sales Cycles
- The Multi-Touch Alternatives and Their Trade-offs
- How to Choose the Right Model for Your Business
- The Attribution Model Is Not the Measurement Strategy
- What Attribution Gets Wrong About Email
- The Practical Steps Worth Taking Now
What Does First Click Attribution Actually Measure?
First click attribution assigns 100% of the conversion credit to the first touchpoint in a customer’s experience. If someone clicks a display ad, then a paid search ad, then converts through an email link, the display ad gets all the credit. The paid search ad gets nothing. The email gets nothing.
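The logic is simple enough to express in a few lines. Here is a minimal sketch, assuming a journey is just an ordered list of channel touchpoints; the channel names are illustrative, not taken from any specific tool:

```python
# A minimal sketch of first click logic. A journey is an ordered list
# of channel touchpoints; the first one gets all the credit.

def first_click_credit(journey):
    """Assign 100% of the conversion credit to the first touchpoint."""
    credit = {channel: 0.0 for channel in journey}
    credit[journey[0]] += 1.0
    return credit

# The display ad gets everything; paid search and email get nothing.
print(first_click_credit(["display", "paid_search", "email"]))
# {'display': 1.0, 'paid_search': 0.0, 'email': 0.0}
```

Swapping `journey[0]` for `journey[-1]` turns this into last click attribution, which is the whole point: the two models differ by one index, yet they tell opposite stories about the same journey.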
This model has a coherent logic behind it. It answers the question: what introduced this customer to us? If you are trying to understand which channels are best at generating awareness and pulling new audiences into your funnel, first click gives you a reasonable directional signal.
The problem is that most marketers are not just trying to understand awareness. They are trying to understand performance. And first click attribution conflates “where did this person start” with “what made this person buy.” Those are very different questions.
When I was building out paid search programmes at scale, first click attribution made organic search look like a conversion machine and made retargeting look almost useless. Retargeting was rarely the first touchpoint. But when we stripped it out of the mix, conversion rates dropped noticeably. The model was not capturing what retargeting was actually doing, which was closing journeys that organic search had opened.
What Does Last Click Attribution Actually Measure?
Last click attribution assigns 100% of the credit to the final touchpoint before conversion. In Google Analytics 4, this is still the default model for many standard reports, though GA4 has introduced data-driven attribution as an alternative for those with sufficient conversion volume.
Last click has dominated digital marketing for twenty years for one simple reason: it is easy to defend. You can point to the channel, show the click, show the conversion, and draw a straight line. Finance teams like straight lines.
But last click attribution has a well-documented bias problem. It systematically over-credits channels that sit at the bottom of the funnel, particularly brand paid search and retargeting. These channels are often intercepting customers who were already going to convert. They are not creating demand, they are capturing it. Last click cannot tell the difference.
I have sat in budget reviews where the paid search team was claiming credit for revenue that email, content, and social had spent months building. Last click made their numbers look extraordinary. When we ran a proper channel analysis and looked at assisted conversions alongside last click, the picture shifted considerably. Paid search was still important, but it was not doing what the last click numbers implied.
If you want a cleaner view of how your channels interact across the full customer experience, the Marketing Analytics hub covers the frameworks and tools worth building into your measurement practice.
Why Single-Touch Models Fail on Longer Sales Cycles
Single-touch attribution models were designed in an era when digital journeys were simpler. Someone clicked a banner ad and bought something. One touch, one conversion, done. The model fit the behaviour.
Modern customer journeys do not look like that. A B2B buyer might interact with your brand across organic search, LinkedIn, a webinar, a retargeting ad, a direct visit, and a sales email before converting. A retail customer might see a social post, read a review, click a Google Shopping ad, abandon their cart, and then convert via a discount email three days later. Attributing that conversion to one touchpoint is not simplification. It is distortion.
The longer the sales cycle, the worse single-touch models perform. If you are selling something with a two-week consideration window, last click will tell you that whatever channel happened to be in front of the customer on day fourteen caused the sale. Everything that happened on days one through thirteen is invisible.
This is not a theoretical concern. When I was managing large-scale campaigns across multiple channels, we would regularly see situations where cutting a “low-performing” channel, based on last click data, would cause performance in other channels to deteriorate. The channels were interdependent. Last click could not show that relationship. It just showed who crossed the finish line.
For a broader look at how analytics tools handle multi-touch data, Semrush’s overview of Google Analytics covers how the platform approaches attribution and reporting across different models.
The Multi-Touch Alternatives and Their Trade-offs
Once you accept that single-touch models are incomplete, the obvious question is what to use instead. There are several multi-touch attribution models available in most analytics platforms, and each has its own logic and its own limitations.
Linear attribution splits credit equally across all touchpoints in the path. It is more democratic than first or last click, but it treats a display impression the same as a high-intent search click, which is a different kind of distortion.
Time decay attribution gives more credit to touchpoints that occurred closer to the conversion. This has intuitive appeal for shorter sales cycles, but it penalises awareness channels by design, which creates its own budget-allocation problems.
Position-based attribution, sometimes called the U-shaped model, gives 40% of credit to the first touch, 40% to the last touch, and distributes the remaining 20% across the middle. This acknowledges that both acquisition and conversion matter, which is a more honest starting point than either single-touch model.
Data-driven attribution, available in GA4 for accounts with sufficient conversion volume, uses machine learning to assign credit based on the actual contribution of each touchpoint relative to converting and non-converting paths. It is the most sophisticated option available at scale, but it is a black box. You cannot inspect its logic, and it requires enough data to be statistically meaningful. For smaller accounts, it defaults to last click anyway.
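The three rule-based models above differ only in how they split one unit of credit across a path. A minimal sketch of each, with the caveat that channel names, the seven-day half-life, and the handling of two-touch paths in the U-shaped model are illustrative assumptions; real platforms implement the edge cases differently:

```python
def linear(journey):
    """Equal credit to every touchpoint in the path."""
    share = 1.0 / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(journey, days_before_conversion, half_life=7.0):
    """More credit to touches closer to conversion (exponential decay).
    half_life=7.0 is an illustrative assumption, not a platform default."""
    weights = [0.5 ** (d / half_life) for d in days_before_conversion]
    total = sum(weights)
    credit = {}
    for channel, w in zip(journey, weights):
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

def position_based(journey):
    """U-shaped: 40% first touch, 40% last touch, 20% across the middle."""
    if len(journey) == 1:
        return {journey[0]: 1.0}
    credit = {channel: 0.0 for channel in journey}
    credit[journey[0]] += 0.4
    credit[journey[-1]] += 0.4
    middle = journey[1:-1]
    if middle:
        for channel in middle:
            credit[channel] += 0.2 / len(middle)
    else:  # two-touch path: split the middle share between the ends
        credit[journey[0]] += 0.1
        credit[journey[-1]] += 0.1
    return credit

path = ["display", "organic_search", "retargeting", "email"]
print(position_based(path))
# {'display': 0.4, 'organic_search': 0.1, 'retargeting': 0.1, 'email': 0.4}
```

Seeing the rules laid out this plainly makes the trade-offs concrete: every model is a choice about which touchpoints to privilege, not a measurement of what actually drove the sale.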
Rand Fishkin’s take on GA4 directional reporting at Moz is worth reading for anyone trying to understand how to interpret attribution data without over-trusting the numbers.
How to Choose the Right Model for Your Business
There is no universally correct attribution model. The right model depends on three things: your sales cycle length, your channel mix, and what decision you are trying to inform.
If you are running a short-cycle e-commerce business with a relatively simple channel mix, last click is imperfect but workable. The journeys are short enough that the distortion is limited. If you are running a B2B business with a three-month sales cycle and eight touchpoints in the average path, last click will actively mislead you.
If you are trying to understand which channels are best at generating new demand, first click gives you useful signal. If you are trying to understand which channels are best at closing deals, last click is more relevant. If you are trying to understand the full picture, you need to look at both, alongside assisted conversion data.
One practical approach I have used: run last click and position-based models side by side and look at the channels where the credit allocation diverges most. Those are the channels where your single-touch model is creating the most distortion. They are also the channels most likely to be over- or under-funded based on flawed data.
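The divergence check itself is just a per-channel delta between two credit columns. A sketch with invented numbers, purely to show the shape of the comparison:

```python
# Hypothetical credited conversions per channel under two models.
# These figures are invented for illustration.
last_click = {"brand_search": 420, "retargeting": 310, "email": 250,
              "organic_search": 180, "social": 40}
position_based = {"brand_search": 260, "retargeting": 150, "email": 230,
                  "organic_search": 300, "social": 260}

divergence = {ch: position_based[ch] - last_click[ch] for ch in last_click}

# Rank channels by how far the two models disagree.
for ch, delta in sorted(divergence.items(), key=lambda kv: abs(kv[1]),
                        reverse=True):
    direction = "under-credited" if delta > 0 else "over-credited"
    print(f"{ch}: {delta:+d} ({direction} by last click)")
```

In this invented example, social and organic search would surface at the top of the list, which is exactly the pattern you would expect if last click is starving your demand-generation channels.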
GA4’s custom reports make it possible to build this kind of comparative view without significant technical overhead. The Moz guide to GA4 custom reports is a useful starting point for building multi-model comparisons in practice.
The Attribution Model Is Not the Measurement Strategy
This is the point that gets lost in most attribution discussions. Marketers spend a lot of time debating which model to use, when the more important question is what they are trying to measure and why.
Attribution models are reporting tools. They organise historical data and assign credit according to a set of rules. What they cannot do is tell you whether a given channel caused a conversion or merely correlated with one. They cannot tell you what would have happened if you had removed a channel from the mix. They cannot separate incremental revenue from revenue you would have captured anyway.
I judged the Effie Awards for several years, which means I reviewed a lot of marketing effectiveness cases from some of the best campaigns in the market. The campaigns that stood out were not the ones with the most sophisticated attribution models. They were the ones where the teams had been honest about what they could and could not measure, and had built their evidence accordingly. Controlled experiments, holdout groups, channel suppression tests. Evidence, not just reporting.
Attribution models are a useful layer of the measurement picture. They are not the whole picture. If your measurement strategy begins and ends with choosing between first click and last click, you are working with a much thinner evidence base than you probably realise.
For teams building out their KPI reporting alongside attribution work, Semrush’s guide to KPI reporting covers how to structure performance dashboards in a way that connects channel data to business outcomes.
What Attribution Gets Wrong About Email
Email is the channel most consistently distorted by single-touch attribution, and it is worth addressing directly because the distortion runs in both directions depending on the model you use.
Under last click, email looks like a conversion powerhouse. Email is often the final touchpoint before a purchase, particularly for repeat customers and cart abandonment sequences. Last click loads enormous credit onto email, which makes email marketers look very effective and makes it difficult to justify investment in the channels that filled the funnel in the first place.
Under first click, email almost disappears. It is rarely the channel that introduces a new customer to a brand. First click strips email of nearly all its credit and loads it onto acquisition channels instead.
Neither picture is accurate. Email is a retention and conversion channel. Its value is in nurturing existing relationships and converting warm audiences. The right way to evaluate email is not through a single-touch attribution model at all. It is through metrics that reflect what email actually does: open rates, click-through rates, revenue per recipient, and the contribution to repeat purchase behaviour over time. HubSpot’s breakdown of email marketing reporting metrics is a practical reference for building an email measurement framework that goes beyond last click credit.
The Practical Steps Worth Taking Now
If you are currently making budget decisions based on a single attribution model, the most useful thing you can do is not to switch models but to add a second one and look at where the numbers diverge.
In GA4, you can access the Advertising section and compare attribution models across your key conversion events. Look at which channels gain credit and which lose credit as you move between models. The channels where the gap is largest are the ones where your current model is creating the most distortion in your budget decisions.
Beyond model comparison, look at your assisted conversion data. Which channels appear frequently in converting paths without being the first or last touch? These are your middle-funnel contributors, and they are almost always under-credited by single-touch models.
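Counting those middle-of-path appearances is straightforward if you can export converting paths. A minimal sketch, with illustrative channel names and a deliberately simple definition of "assist" (appears in a converting path but is neither first nor last touch):

```python
from collections import Counter

def assisted_counts(paths):
    """Count how often each channel appears mid-path: present in a
    converting journey, but neither the first nor the last touch."""
    assists = Counter()
    for path in paths:
        for channel in set(path[1:-1]) - {path[0], path[-1]}:
            assists[channel] += 1
    return assists

paths = [
    ["social", "organic_search", "email"],
    ["display", "organic_search", "retargeting", "email"],
    ["organic_search", "email"],
]
print(assisted_counts(paths))
# organic_search assisted in 2 paths, retargeting in 1
```

Under either single-touch model, organic search in this example would receive credit only when it happened to open or close a journey, even though it shows up in the middle of most converting paths.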
If you have the budget and the data volume, run channel suppression tests. Turn off a channel for a defined period, hold everything else constant, and measure what happens to overall conversion volume. This is not perfect, but it gives you evidence that no attribution model can provide: what actually happens when a channel is absent.
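The arithmetic behind a suppression test is simple, even if running one cleanly is not. A hedged sketch with invented numbers, ignoring the statistical significance testing a real experiment would need:

```python
def incremental_lift(conv_treated, n_treated, conv_holdout, n_holdout):
    """Estimate relative lift attributable to the suppressed channel:
    conversion rate with the channel on vs. with it off."""
    rate_on = conv_treated / n_treated
    rate_off = conv_holdout / n_holdout
    return (rate_on - rate_off) / rate_off

# Illustrative numbers: 50,000 users per group, channel on vs. off.
lift = incremental_lift(conv_treated=2600, n_treated=50_000,
                        conv_holdout=2000, n_holdout=50_000)
print(f"Estimated incremental lift: {lift:.1%}")
# Estimated incremental lift: 30.0%
```

The point of the exercise is the comparison group. Attribution models assign credit within the converting population; a holdout tells you how many of those conversions would have happened anyway.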
Early in my career, I ran a paid search campaign for a music festival at lastminute.com that generated six figures of revenue within roughly a day. It felt like a clear, clean win for paid search. But when we looked at the data more carefully, a significant portion of those conversions had been touched by email and organic search earlier in the experience. Paid search was the last click. It was not the whole story. That lesson stayed with me across every attribution conversation I have had since.
If you are building a more complete measurement practice, the Marketing Analytics hub at The Marketing Juice covers the full range of frameworks, from attribution models to incrementality testing to the metrics worth keeping on your dashboard in 2025.
For teams evaluating analytics platforms alongside their attribution work, this comparison of Mixpanel and Google Analytics from Crazy Egg is worth reading for understanding where each platform’s attribution capabilities are strongest.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
