Sales Attribution: Stop Optimising the Map, Not the Territory

Sales attribution is the process of assigning credit to the marketing touchpoints that contributed to a conversion or sale. Done well, it tells you which channels, campaigns, and messages are genuinely driving revenue. Done poorly, it tells you a story you want to hear while quietly misdirecting your budget.

Most businesses are doing it poorly. Not because they lack tools, but because they’ve confused the model with reality.

Key Takeaways

  • Every attribution model contains assumptions. The model you choose shapes the decisions you make, often more than the data itself.
  • Last-click attribution systematically over-credits the bottom of the funnel and starves upper-funnel channels of investment they’ve earned.
  • Multi-touch models are more sophisticated but still rely on tracked touchpoints. If a channel isn’t tracked, it doesn’t exist in your attribution model.
  • The goal of attribution isn’t perfect credit allocation. It’s making better budget decisions than you would without it.
  • Attribution and incrementality answer different questions. You need both to build a defensible picture of marketing performance.

Why Attribution Models Are Always Wrong (And Still Worth Using)

I’ve sat in enough attribution conversations to know how they usually go. Someone pulls a report from Google Analytics, points at the last-click numbers, and uses them to justify cutting brand spend or doubling down on paid search. The numbers look authoritative. The decision feels data-driven. And the underlying logic is often completely broken.

Attribution models are approximations. They take a messy, non-linear, partially observable reality, the way a human being actually decides to buy something, and compress it into a framework that fits inside a dashboard. That compression involves choices. Those choices have consequences.

Last-click says the final touchpoint gets all the credit. First-click says the first one does. Linear spreads it evenly. Time-decay weights recent interactions more heavily. Data-driven models use machine learning to allocate credit based on observed patterns. Each of these is a different answer to the same question, and none of them is objectively correct.
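The rule-based models above are simple enough to write down directly. Here is a minimal sketch, with an invented four-touch path, showing how differently they split credit for the same conversion (the channel names and half-life value are illustrative, not a recommendation):

```python
# How four rule-based models split credit for the same conversion path.
# Assumes each channel appears at most once in the path.
path = ["display", "organic_content", "email", "branded_search"]

def last_click(path):
    return {ch: (1.0 if i == len(path) - 1 else 0.0) for i, ch in enumerate(path)}

def first_click(path):
    return {ch: (1.0 if i == 0 else 0.0) for i, ch in enumerate(path)}

def linear(path):
    return {ch: 1.0 / len(path) for ch in path}

def time_decay(path, half_life=2):
    # Touches closer to conversion get exponentially more weight.
    weights = [0.5 ** ((len(path) - 1 - i) / half_life) for i in range(len(path))]
    total = sum(weights)
    return {ch: w / total for ch, w in zip(path, weights)}

for model in (last_click, first_click, linear, time_decay):
    print(model.__name__, {ch: round(c, 2) for ch, c in model(path).items()})
```

Same path, four different answers. The display impression gets 100% of the credit, 25%, a small decayed share, or nothing at all, depending purely on which rule you picked.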

When I was running iProspect, we managed significant paid search budgets across dozens of clients. One of the first things I noticed was how often the attribution model in use had been set up at campaign launch and never revisited. Clients were making six-figure budget decisions based on a default setting nobody had consciously chosen. That’s not measurement. That’s measurement theatre.

What Last-Click Attribution Gets Wrong

Last-click attribution remains the most widely used model in digital marketing, partly because it was the default in Universal Analytics for years, and partly because it’s simple to explain. The channel that closed the sale gets the credit. Everything else gets nothing.

The problem is structural. Last-click systematically over-rewards channels that sit at the bottom of the funnel, particularly branded paid search and retargeting, while under-rewarding the channels that created the demand in the first place. Display, content, email newsletters, social, organic search on informational queries: all of these can do substantial work in moving someone from unaware to considering, and last-click gives them a zero.

I saw this play out at a client in the travel sector. Their last-click data showed branded paid search as their highest-ROI channel by a considerable margin. It looked like the obvious place to invest more. When we dug into the path analysis, we found that the vast majority of those branded paid search conversions had been preceded by organic content visits, email touchpoints, and display impressions. The branded paid search wasn’t creating intent. It was capturing intent that other channels had built. Cutting those upper-funnel channels to fund more branded search would have been a slow-motion disaster.

This is one of the core tensions in performance marketing that conversion tracking has never fully resolved: the tools that make it easy to measure the last step make it easy to ignore everything that came before it.

Multi-Touch Attribution: More Sophisticated, Still Incomplete

Multi-touch attribution models attempt to distribute credit across all the touchpoints in a conversion path, rather than awarding it all to one. In principle, this is more honest. In practice, it introduces a different set of problems.

The most significant is the tracking gap. Multi-touch attribution can only credit touchpoints it can see. If someone saw a YouTube ad, read a piece of organic content, heard about the brand from a friend, and then converted via a direct visit, your attribution model sees one touchpoint: the direct visit. The YouTube ad, the content, and the word-of-mouth recommendation are invisible. You’re distributing credit across an incomplete picture of the customer experience.

Cross-device behaviour makes this worse. A user who researches on mobile and converts on desktop looks, to most attribution tools, like two separate people. Cookie deprecation and increased privacy controls are shrinking the observable window further. GA4’s approach to measurement attempts to address some of this through modelled data and Google Signals, but it introduces its own assumptions and requires accepting that a portion of your reported conversions are estimates, not observations.

None of this means multi-touch attribution is useless. It’s materially better than last-click for most businesses with complex purchase journeys. But it needs to be treated as a directional indicator, not a precise ledger. The Forrester perspective on sales and marketing measurement makes a useful distinction here: measurement should inform alignment, not manufacture false precision that creates conflict between teams.

If you want to go deeper on the broader measurement landscape, the Marketing Analytics hub covers attribution alongside GA4, marketing mix modelling, and the metrics frameworks that hold it all together.

The Data-Driven Attribution Trap

Data-driven attribution, or DDA, has become the default model in Google Ads and GA4. The premise is appealing: rather than applying a fixed rule for credit allocation, the model uses machine learning to analyse conversion paths and assign fractional credit based on the actual contribution of each touchpoint.

In theory, this is the most sophisticated option available. In practice, there are three things worth understanding before you trust it uncritically.

First, data-driven attribution is a black box. You cannot inspect the logic. You cannot verify the weighting. You are trusting Google’s algorithm to fairly distribute credit across Google’s own channels, which is a conflict of interest worth keeping in mind.

Second, DDA requires sufficient conversion volume to function reliably. For accounts with low conversion counts, the model is working with a thin data set and its outputs will be noisier than they appear.

Third, and most importantly, data-driven attribution still only sees what Google can see. It cannot account for offline touchpoints, competitor activity, seasonal demand shifts, or anything that happens outside the Google ecosystem. Treating its outputs as a complete picture of your marketing performance is a category error.

I judged the Effie Awards for several years, and one thing that consistently separated the stronger entries from the weaker ones was intellectual honesty about what the measurement could and couldn’t prove. The best marketing teams weren’t claiming perfect attribution. They were triangulating across multiple signals and being transparent about the uncertainty. That approach is rarer than it should be.

How to Build a More Honest Attribution Framework

If attribution models are all approximations, the answer isn’t to abandon attribution. It’s to build a framework that acknowledges the approximation and uses multiple lenses to compensate for each model’s blind spots.

Here’s how I’d approach it in practice.

Start with clean data

Attribution is only as good as the tracking underpinning it. Inconsistent UTM parameters, missing campaign tags, and broken conversion events will corrupt any model you apply on top. UTM tracking discipline isn’t glamorous work, but it’s the foundation everything else depends on. Before you debate which attribution model to use, audit whether your data is clean enough to trust.
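An audit like this is mostly mechanical and worth automating. The sketch below checks a list of tagged URLs for missing or inconsistently cased UTM parameters; the required fields and casing rule are common conventions, not a standard, so adjust them to your own tagging policy:

```python
# Minimal UTM audit: flag missing or mixed-case tags before they
# fragment campaign reporting. Rules here are illustrative.
from urllib.parse import urlparse, parse_qs

REQUIRED = ("utm_source", "utm_medium", "utm_campaign")

def audit_utm(url):
    """Return a list of problems found in one tagged URL."""
    params = parse_qs(urlparse(url).query)
    problems = []
    for field in REQUIRED:
        values = params.get(field)
        if not values:
            problems.append(f"missing {field}")
        elif values[0] != values[0].lower():
            # Mixed case splits one campaign into several rows in reports.
            problems.append(f"{field} not lowercase: {values[0]}")
    return problems

urls = [
    "https://example.com/offer?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale",
    "https://example.com/offer?utm_source=Newsletter&utm_medium=email",
]
for url in urls:
    print(url, "->", audit_utm(url) or "clean")
```

Running a check like this over every live campaign URL, on a schedule, catches tagging drift long before it shows up as an unexplained spike in "direct" traffic.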

Use attribution models for channel-level direction, not precise credit

Multi-touch attribution is useful for understanding the relative contribution of different channel types across the funnel. It’s not reliable enough to make precise budget allocation decisions on its own. Use it to identify patterns and hypotheses, not to draw hard conclusions.

Layer in incrementality testing

Incrementality testing asks a different question from attribution. Instead of asking which touchpoints were present before a conversion, it asks whether the marketing activity actually caused additional conversions that wouldn’t have happened otherwise. Geo holdout tests, conversion lift studies, and ghost bidding experiments all fall into this category. They’re harder to run than pulling an attribution report, but they answer a more commercially relevant question.
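The arithmetic behind a geo holdout readout is simple; the hard part is the experimental design. As a sketch, under the strong assumption that test and control regions are genuinely comparable (the numbers below are invented):

```python
# Simplified geo holdout readout. Assumes control regions have been
# scaled to the same baseline size as the test regions.
def incremental_lift(test_conversions, control_conversions, scale=1.0):
    """Return (incremental conversions, relative lift) from a holdout test."""
    expected = control_conversions * scale  # what the test regions would have done anyway
    incremental = test_conversions - expected
    lift = incremental / expected if expected else float("nan")
    return incremental, lift

# Example: 1,150 conversions with the channel running,
# 1,000 expected from the matched holdout regions.
inc, lift = incremental_lift(1150, 1000)
print(f"incremental conversions: {inc}, lift: {lift:.1%}")
```

Notice what this answers that attribution cannot: the 1,000 baseline conversions would have happened anyway, so only the 150 on top are genuinely attributable to the activity, regardless of how many touchpoints the channel appeared in.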

Use self-reported attribution as a qualitative check

Asking customers "how did you hear about us?" in post-purchase surveys or during onboarding calls is unfashionable in an era of data-driven everything. It shouldn't be. Self-reported attribution consistently surfaces channels that digital tracking misses entirely, particularly word of mouth, podcast advertising, and offline touchpoints. It's imprecise, but it's a useful corrective to over-reliance on tracked data.

Track leading indicators alongside conversion data

Branded search volume, direct traffic trends, and organic visibility are all signals of brand health that attribution models don’t capture. If you’re running upper-funnel activity and want to understand whether it’s working, these metrics often show movement before conversion data does. Understanding the full range of marketing metrics available to you, rather than focusing exclusively on last-touch conversion data, gives you a more complete picture of what’s actually happening.

Where Attribution Fits in the Broader Measurement Stack

Attribution is one tool in a measurement stack, not the whole stack. Treating it as the definitive answer to “what’s working” is one of the most common and costly mistakes I see in performance marketing.

Marketing mix modelling sits above attribution in the measurement hierarchy. It operates at aggregate level, using statistical analysis to estimate the contribution of different marketing inputs to revenue outcomes. It’s less granular than digital attribution, but it captures things that attribution can’t: offline media, pricing effects, distribution changes, competitive activity. For businesses with significant offline spend or complex channel mixes, MMM provides context that no attribution tool can replicate.

KPI frameworks matter too. Attribution tells you which channels contributed to conversions. It doesn’t tell you whether those conversions were profitable, whether they came from high-value customers, or whether the revenue was incremental. Building KPI reports that connect attribution data to commercial outcomes, rather than treating conversion volume as the endpoint, is what separates measurement that informs strategy from measurement that just fills a dashboard.

Early in my career, I ran a paid search campaign for a music festival at lastminute.com. Six figures of revenue in roughly a day from a relatively simple campaign. The attribution was clean because the path was simple: click, book, done. Most businesses aren’t that clean. The longer the purchase cycle, the more channels involved, the more the attribution picture fragments. The answer to that fragmentation isn’t a better model. It’s a more honest framework that accepts the limits of what any model can tell you.

The measurement questions that attribution raises don’t exist in isolation. They connect to how you structure your analytics stack, how you report to senior stakeholders, and how you make the case for investment in channels that don’t close sales directly. All of that sits within the broader discipline covered across the Marketing Analytics hub, which is worth working through if you’re building or rebuilding your measurement approach.

The Organisational Problem Attribution Can’t Solve

There’s a dimension to attribution that rarely gets discussed in technical articles: the internal politics of credit.

In most organisations, different teams own different channels. The paid search team wants paid search to get credit. The content team wants content to get credit. The email team wants email to get credit. Attribution models become weapons in budget negotiations rather than neutral measurement tools. Whichever model allocates the most credit to your channel is the model you advocate for.

I’ve seen this dynamic play out at enterprise clients where the attribution model was changed three times in eighteen months, each time coinciding with a budget review cycle. The data didn’t change. The model changed to produce a number someone needed.

The solution isn’t a better attribution model. It’s a shared measurement framework that the whole organisation agrees on before the budget conversation starts, and that’s treated as a stable reference point rather than a moveable feast. Marketing analytics that serve business decisions, rather than channel-level advocacy, require that kind of organisational discipline. It’s harder to build than any technical solution, and more valuable when you get it right.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is sales attribution in marketing?
Sales attribution is the process of assigning credit to the marketing touchpoints that contributed to a sale or conversion. It helps marketers understand which channels, campaigns, and messages are driving revenue, so they can make more informed decisions about where to invest budget.
What is the difference between first-click and last-click attribution?
First-click attribution gives all the credit for a conversion to the first marketing touchpoint a customer interacted with. Last-click attribution gives all the credit to the final touchpoint before the conversion. Both are single-touch models that ignore everything in between, which makes them poor representations of complex customer journeys.
Is data-driven attribution more accurate than rule-based models?
Data-driven attribution uses machine learning to allocate credit based on observed conversion paths, which is more sophisticated than applying a fixed rule. However, it still only sees tracked touchpoints, requires sufficient conversion volume to function reliably, and operates as a black box. It’s a better approximation than last-click for most accounts, but it’s not a precise measurement of true marketing contribution.
How does attribution differ from incrementality testing?
Attribution asks which touchpoints were present in the path to conversion and allocates credit between them. Incrementality testing asks whether a specific marketing activity caused conversions that would not have happened otherwise. Attribution is descriptive. Incrementality is causal. Both are useful, and using them together gives you a more complete picture of marketing effectiveness than either approach alone.
Why does attribution matter for budget decisions?
The attribution model you use directly shapes which channels appear to be performing well and which appear to be underperforming. Last-click models tend to over-credit bottom-of-funnel channels like branded paid search, which can lead to under-investment in upper-funnel activity that builds demand. Choosing the wrong model, or not questioning the default, can systematically misdirect budget over time.
