Ecommerce Attribution: Why Your Best Channel Is Probably Getting No Credit
Ecommerce attribution is the process of assigning credit to the marketing touchpoints that contributed to a sale. Done well, it tells you where to invest more. Done poorly, it tells you a confident-sounding story that happens to be wrong, and you make budget decisions accordingly.
Most ecommerce businesses are doing it poorly. Not because they lack data, but because the models they rely on were designed for a simpler internet and a shorter purchase cycle. The gap between what attribution reports and what actually drove revenue is wider than most marketers want to admit.
Key Takeaways
- Last-click attribution systematically over-credits paid search and under-credits upper-funnel channels, which distorts budget allocation over time.
- No single attribution model is correct. Each is a lens with blind spots, and treating any one as ground truth is where the errors compound.
- Cross-device journeys and iOS privacy changes have made touchpoint data materially incomplete, meaning your attribution model is working with a partial picture regardless of which one you choose.
- The most useful question is not “which channel gets credit?” but “what would happen to revenue if I cut this channel?” Those are different questions with different answers.
- Combining model-based attribution with periodic channel suppression tests gives you a more honest read than either approach alone.
In This Article
- Why Attribution Became Such a Mess
- What the Main Attribution Models Actually Do
- The Data Gaps That Make Every Model Unreliable
- Where Last-Click Causes the Most Damage
- How to Build a More Honest Attribution Setup
- The Incrementality Question Attribution Cannot Answer
- Attribution in a Privacy-First World
- What Good Attribution Practice Actually Looks Like
Why Attribution Became Such a Mess
When I ran a paid search campaign at lastminute.com for a music festival, we saw six figures of revenue within roughly a day. The tracking was simple: click, cookie, conversion. The path from ad to purchase was short, the attribution was clean, and the numbers made sense. That was the early 2000s. The internet has changed considerably since then.
Today, a customer might discover a product through a YouTube pre-roll, research it on their phone, get retargeted on Instagram, click a brand search ad a week later, and convert via a promo email. Each of those touchpoints played a role. But most attribution models will hand the trophy to the email or the paid search click, because those happened last and had a cookie attached.
The problem is not the technology. It is the underlying logic. Attribution models were built to answer a simple question in a complex environment, and the simplification creates systematic distortions that compound over time. You defund awareness channels because they show poor attributed ROI. Awareness drops. Branded search volume falls. Paid search efficiency declines. You increase paid search spend to compensate. The model tells you paid search is working. It is a loop that looks rational at every step and produces the wrong outcome at scale.
If you want a broader grounding in how to build measurement frameworks that hold up commercially, the Marketing Analytics hub at The Marketing Juice covers the full landscape, from GA4 configuration through to marketing mix modelling and beyond.
What the Main Attribution Models Actually Do
Before you can choose a model or challenge one, you need to understand what each is actually measuring. These are not interchangeable views of the same truth. They are different assumptions about how credit should be distributed.
Last-click gives 100% of the credit to the final touchpoint before conversion. It is simple, auditable, and almost certainly wrong for any business with a purchase cycle longer than a few minutes. It rewards channels that are good at closing, regardless of whether they initiated or influenced the decision.
First-click gives 100% of the credit to the first touchpoint. Better for understanding acquisition, but it ignores everything that happened between discovery and purchase. For long consideration cycles, that is a lot of ignored activity.
Linear distributes credit equally across all touchpoints. Fairer in principle, but it treats a display impression someone scrolled past in three seconds the same as a product page visit where they spent twelve minutes reading reviews. Equal is not the same as accurate.
Time decay gives more credit to touchpoints closer to the conversion. Logical for short purchase cycles. Less useful for categories where someone researches for weeks before buying, because it systematically undervalues the early touchpoints that put you in the consideration set.
Position-based (sometimes called U-shaped) gives the most credit to the first and last touchpoints and distributes the rest across the middle. A reasonable compromise, but the percentage splits are arbitrary. Why 40/40/20? Because someone decided that. It is not derived from your actual customer data.
Data-driven attribution uses machine learning to assign credit based on observed conversion patterns. Google’s version of this is now the default in GA4 and Google Ads. It sounds rigorous. In practice, it is a black box that optimises for Google’s conversion data, which has its own gaps and biases. Understanding how GA4 handles conversion events is worth doing before you trust any model built on top of it.
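To make the differences between these rule-based models concrete, here is a minimal Python sketch of how each would split credit across one hypothetical four-touchpoint journey. The journey, the half-life, and the 40/40/20 split are illustrative assumptions, not recommendations, and real platforms implement these rules with their own variations:

```python
# Illustrative sketch: how common rule-based attribution models
# split credit across one journey, ordered first touch -> last touch.
journey = ["youtube", "instagram", "brand_search", "email"]

def last_click(tps):
    # All credit to the final touchpoint before conversion.
    return {tps[-1]: 1.0}

def first_click(tps):
    # All credit to the touchpoint that started the journey.
    return {tps[0]: 1.0}

def linear(tps):
    # Equal credit to every touchpoint.
    share = 1.0 / len(tps)
    return {tp: share for tp in tps}

def time_decay(tps, half_life=2):
    # Weight doubles every `half_life` steps closer to conversion.
    weights = [2 ** (i / half_life) for i in range(len(tps))]
    total = sum(weights)
    return {tp: w / total for tp, w in zip(tps, weights)}

def position_based(tps, first=0.4, last=0.4):
    # The 40/40/20 split is a convention, not a law; adjust to taste.
    if len(tps) == 1:
        return {tps[0]: 1.0}
    if len(tps) == 2:
        return {tps[0]: first / (first + last), tps[1]: last / (first + last)}
    middle = (1.0 - first - last) / (len(tps) - 2)
    credit = {tp: middle for tp in tps[1:-1]}
    credit[tps[0]] = credit.get(tps[0], 0.0) + first
    credit[tps[-1]] = credit.get(tps[-1], 0.0) + last
    return credit

for name, model in [("last-click", last_click), ("linear", linear),
                    ("position-based", position_based)]:
    print(name, model(journey))
```

Run the same journey through each function and you see the point immediately: last-click hands everything to the email, linear pretends every touch was equal, and position-based rewards the bookends. Same data, three different stories.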
The Data Gaps That Make Every Model Unreliable
Every attribution model has a structural flaw that no configuration choice can fix: it can only work with the touchpoints it can see. And the touchpoints it can see are an increasingly incomplete subset of what actually happened.
iOS privacy changes removed a significant share of mobile tracking from the open web. Cross-device journeys, where someone discovers on mobile and converts on desktop, create gaps that cookies cannot bridge. Incognito browsing, ad blockers, and browser-level tracking prevention all reduce the signal. Word of mouth, PR, podcast mentions, out-of-home advertising, and organic social all influence purchasing decisions without leaving a trackable footprint in your attribution platform.
I have reviewed attribution reports for clients spending eight figures annually on digital media. The touchpoint data in those reports covers, at best, a fraction of the actual touchpoints that led to a sale. The model looks complete because it has numbers in every column. But completeness of format is not the same as completeness of data.
This is not an argument against using attribution models. It is an argument against treating them as ground truth. They are useful directional tools. They are not accurate accounts of what happened.
The broader discipline of data-driven marketing is valuable precisely because it forces you to interrogate assumptions rather than accept outputs. Attribution is where that discipline is most frequently abandoned.
Where Last-Click Causes the Most Damage
Last-click attribution is still the default for a significant number of ecommerce businesses, either because no one has changed the setting or because the alternatives seem complicated. The damage it causes is not random. It follows a predictable pattern.
Paid search brand terms look exceptional because they capture purchase intent that already existed. Someone who was going to buy anyway types your brand name, clicks your ad, and converts. Last-click gives paid search full credit. The channel that built the brand awareness that made them search for you gets nothing.
Retargeting looks excellent for the same reason. It targets people who have already shown intent. It converts well. Last-click assigns it full credit. The prospecting campaign that brought those people to your site in the first place gets no credit, and often gets cut because its attributed ROAS looks weak.
Email performs well in last-click models because it is often the trigger that gets someone to act on an intention they have been building for days or weeks. The email did not create the intent. It activated it. Last-click cannot tell the difference.
Over time, a business running on last-click attribution tends to over-invest in the bottom of the funnel and starve the top. The funnel narrows. New customer acquisition slows. Repeat purchase rates and retargeting audiences shrink as fewer new customers enter the system. Revenue growth stalls and the attribution model offers no explanation, because it was never measuring the thing that was breaking.
How to Build a More Honest Attribution Setup
The goal is not perfect attribution. Perfect attribution is not achievable. The goal is honest approximation, with enough signal to make better decisions than you would make without it.
Start with data hygiene. Attribution models are only as good as the data they run on. Consistent UTM parameter discipline across every paid channel is non-negotiable. If your campaign tagging is inconsistent, your attribution data is noise dressed up as insight. Exporting your GA4 data to BigQuery for cleaner analysis is worth the setup cost if you are running meaningful media budgets. The case for doing this is straightforward once you have tried to do serious analysis inside the GA4 interface.
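One practical way to enforce that tagging discipline is to generate campaign URLs from a single helper rather than letting each channel owner type parameters by hand. This is a hypothetical sketch, not a standard tool; the point is that one function applying one naming convention keeps your attribution data joinable:

```python
from urllib.parse import urlencode

# Hypothetical tagging helper: enforce one lowercase,
# underscore-separated convention for every campaign URL.
def tag_url(base_url, source, medium, campaign):
    clean = lambda s: s.strip().lower().replace(" ", "_")
    params = {
        "utm_source": clean(source),
        "utm_medium": clean(medium),
        "utm_campaign": clean(campaign),
    }
    sep = "&" if "?" in base_url else "?"
    return base_url + sep + urlencode(params)

url = tag_url("https://example.com/product", "Meta ", "Paid Social", "Spring Sale")
# "Meta ", "Paid Social" and "Spring Sale" are normalised to
# meta, paid_social and spring_sale regardless of who typed them.
```

Whether the helper lives in a spreadsheet, a script, or a campaign management tool matters less than the fact that there is exactly one of it.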
Run multiple models in parallel. Do not pick one and treat it as correct. Run last-click, linear, and data-driven simultaneously and look at where they disagree. The disagreements are where the interesting questions are. If a channel looks strong in linear but weak in last-click, that is a signal worth investigating, not averaging away.
Use channel suppression tests selectively. Pause a channel for a defined period, hold everything else constant, and measure what happens to overall revenue. This is not sophisticated incrementality testing, but it gives you a real-world signal that no model can replicate. I have done this with clients who were convinced their retargeting was driving incremental revenue. When we paused it for two weeks, overall conversion rates barely moved. The channel was capturing demand, not creating it. That finding changed how the budget was allocated.
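The arithmetic behind a suppression test is deliberately simple: compare revenue during the pause against a baseline window of equal length. The figures below are invented for illustration, and a real read should also account for seasonality and any promotions running in either window:

```python
# Rough read on a suppression test: revenue change during the pause
# window versus a comparable baseline window, as a percentage.
def suppression_lift(baseline_revenue, pause_revenue):
    return (pause_revenue - baseline_revenue) / baseline_revenue

# Hypothetical numbers: revenue barely moved with retargeting paused,
# suggesting the channel was capturing demand rather than creating it.
change = suppression_lift(412_000, 405_500)
print(f"{change:+.1%}")  # -1.6%
```

If pausing a channel with a reported 8x ROAS moves total revenue by under two percent, the attributed number and the incremental number are clearly not the same thing.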
Bring in self-reported attribution. A simple post-purchase survey asking “how did you first hear about us?” costs almost nothing to implement and captures the word-of-mouth, podcast, and organic social signals that no tracking pixel will ever see. It is qualitative, it is imprecise, and it is genuinely useful. Understanding which metrics actually matter for your business model helps you decide where to invest in measurement infrastructure and where a lighter-touch approach is sufficient.
Set attribution windows that match your actual purchase cycle. A business selling considered purchases with a three-week research phase should not be running a seven-day attribution window. The window should reflect customer behaviour, not platform defaults. Most platforms default to windows that flatter their own performance. Adjust them deliberately.
The Incrementality Question Attribution Cannot Answer
The most important question in ecommerce measurement is not “which channel gets credit?” It is “what would have happened if this channel did not exist?” Those are fundamentally different questions, and attribution models only attempt to answer the first one.
Incrementality asks whether a channel is actually generating revenue that would not have occurred otherwise, or whether it is capturing revenue that would have happened anyway through a different path. A channel can have excellent attributed ROAS and zero incrementality. This is more common than the industry acknowledges.
Branded paid search is the most obvious example. If someone searches for your brand name and you show them a paid ad, they were almost certainly going to find you organically anyway. The attributed revenue is real. The incremental revenue is close to zero. You are paying to intercept your own organic traffic. The attribution model cannot tell you this because it has no counterfactual. It only knows what happened, not what would have happened differently.
Early in my agency career, I managed a client who was spending heavily on branded search and reporting strong ROAS. When we ran a geo-based hold-out test, pausing branded search in one region while maintaining it in another, the conversion rate difference was negligible. The channel was not driving incremental revenue. It was collecting it. We reallocated that budget to prospecting and new customer acquisition grew materially over the following quarter. Attribution had given us a false sense of efficiency.
Formal incrementality testing requires more planning and statistical rigour than most ecommerce teams apply to it, but the principle is accessible to any business. Pause a channel in a defined geography or audience segment. Measure the gap. Draw a conclusion. Repeat. Over time, you build a picture of which channels are genuinely additive and which are expensive ways of taking credit for demand that existed independently.
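The core of a geo hold-out read is a comparison of conversion rates between the live and paused regions, ideally with a rough significance check before anyone draws a conclusion. This sketch uses a two-proportion z-score with invented figures; a real test needs matched regions, an adequate run time, and more statistical care than a single formula:

```python
import math

# Sketch of a geo hold-out read: conversion rate in the region where
# the channel stayed live versus the region where it was paused,
# with a two-proportion z-score as a rough significance check.
def holdout_read(conv_live, n_live, conv_hold, n_hold):
    p_live, p_hold = conv_live / n_live, conv_hold / n_hold
    pooled = (conv_live + conv_hold) / (n_live + n_hold)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_live + 1 / n_hold))
    z = (p_live - p_hold) / se
    return p_live - p_hold, z

# Hypothetical figures. A negligible rate gap with |z| < 1.96 means
# you cannot claim the channel drove incremental conversions.
diff, z = holdout_read(conv_live=1_040, n_live=52_000,
                       conv_hold=1_010, n_hold=51_500)
```

With these invented numbers the gap is a few hundredths of a percentage point and the z-score is well inside the noise band, which is exactly the pattern the branded search hold-out described above produced.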
Attribution in a Privacy-First World
The direction of travel is clear. Third-party cookies are being phased out across more browsers. Platform-level tracking restrictions are tightening. Consent frameworks are reducing the share of users who can be tracked at all. The attribution data available to ecommerce businesses in three years will be materially less complete than it is today.
This is not a crisis. It is a correction. The industry became over-reliant on granular tracking data precisely because it was available, not because it was necessary. Businesses were making marketing decisions before cookies existed. They will make them after cookies are gone.
The businesses that will handle this transition best are the ones that have already built a measurement approach that does not depend on perfect data. They use multiple signals. They test rather than model. They combine platform data with first-party signals and periodic qualitative research. They have a healthy scepticism of any single number that claims to explain their marketing performance.
Server-side tagging is worth implementing if you have not already. It improves data capture without relying on browser-side cookies, and it gives you more control over what data is collected and where it goes. It is not a complete solution to the privacy challenge, but it is a meaningful improvement over client-side tracking alone. The history of conversion tracking shows how much the technical infrastructure has evolved, and it will continue to evolve in response to privacy constraints.
First-party data is the other piece. Email lists, loyalty programmes, customer accounts, and CRM data give you a view of your customers that does not depend on third-party tracking. The businesses that have invested in building first-party data assets are better positioned for the next phase of measurement than those that have relied entirely on platform pixels and third-party cookies.
What Good Attribution Practice Actually Looks Like
Over twenty years of working across ecommerce businesses of different sizes and categories, the ones with the most commercially grounded measurement setups share a few consistent characteristics. None of them have solved attribution. All of them have learned to work with it honestly.
They treat attribution data as one input among several, not as the answer. They combine platform attribution with revenue data, customer surveys, and periodic channel tests. They are explicit about what each data source can and cannot tell them.
They review attribution model assumptions regularly. A model that made sense when the business was running three channels needs revisiting when it is running twelve. A window that worked for a seven-day purchase cycle needs revisiting if the product range has expanded to include higher-consideration items.
They resist the pressure to report the most flattering number. Attribution data can be manipulated, not through fraud but through model selection, window length, and channel inclusion choices that make the marketing look better than it is. The businesses with the best measurement cultures are the ones where the marketing team is willing to report an honest picture, even when it is uncomfortable.
They invest in data infrastructure proportionate to their media spend. A business spending fifty thousand a month on digital media does not need the same measurement architecture as one spending five million. But both need to understand what their attribution model is and is not telling them. The discipline of choosing the right metrics applies as much to attribution as it does to content performance. Measuring the wrong thing with precision is worse than measuring the right thing approximately.
For more on building a measurement framework that holds up to commercial scrutiny, the full range of topics covered in the Marketing Analytics section of The Marketing Juice includes everything from GA4 fundamentals to marketing mix modelling and the metrics worth cutting from your dashboard entirely.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
