Marketing Attribution Has a First Principles Problem
Marketing attribution has a first principles problem. Not a data problem, not a technology problem, and not a budget problem. The models most teams rely on are built on assumptions that were never sound to begin with, and the multi-channel allocation decisions flowing from those models are compounding the error at scale. If you want to make better decisions about where to put your money, you have to start earlier than the dashboard.
First principles thinking in attribution means stripping back to what you actually know, as opposed to what your platform is telling you. It means asking whether the logic underneath your measurement framework would survive scrutiny from someone who had never heard of last-click, data-driven, or any other attribution model your platform vendor is selling. In most cases, it would not.
Key Takeaways
- Attribution models are built on assumptions, not facts. The model you choose shapes the answer you get, which means the answer was never neutral.
- Multi-channel allocation decisions made from attribution data are often optimising for credit, not for causation. These are not the same thing.
- The channels that look most efficient in your attribution reports are frequently the ones best positioned to claim credit for work done elsewhere in the funnel.
- First principles thinking requires you to separate what the data shows from what the data means. Most teams skip the second step entirely.
- Honest approximation beats false precision. A measurement framework you understand and trust is worth more than a sophisticated model you cannot interrogate.
In This Article
- What Does First Principles Thinking Actually Mean in This Context?
- Why Attribution Models Cannot Tell You What You Need to Know
- The Assumption Stack Underneath Your Attribution Model
- How Multi-Channel Allocation Goes Wrong at the Decision Layer
- What a First Principles Approach to Attribution Actually Looks Like
- The Critical Thinking Gap in Marketing Measurement
- Where to Start If Your Attribution Setup Has Not Been Examined in a While
- The Honest Approximation Standard
What Does First Principles Thinking Actually Mean in This Context?
First principles thinking, in the philosophical sense, means reasoning from the most basic, defensible truths rather than from analogy or convention. In marketing measurement, it means asking: what do we actually know about how this channel contributes to a sale, and what are we assuming?
When I was running an agency and we grew from around 20 people to over 100, one of the most consistent patterns I saw was teams inheriting measurement frameworks from whoever set up the account before them, and never questioning whether those frameworks were fit for purpose. The last-click model was the default. Conversion windows were whatever the platform suggested. Channel comparisons were made using metrics that were not remotely equivalent across platforms. Nobody had sat down and asked: what are we actually trying to measure, and does this setup measure it?
That is a first principles failure. Not a technical one. A thinking one.
If you are interested in the broader context of how analytics tools should be approached critically, the Marketing Analytics hub at The Marketing Juice covers the landscape from GA4 implementation through to measurement strategy, with the same commercially grounded perspective applied throughout.
Why Attribution Models Cannot Tell You What You Need to Know
Attribution models assign credit. That is their function. They take a conversion event and distribute some portion of the credit for that conversion across the touchpoints that preceded it, according to a set of rules or a statistical model. What they cannot do, structurally, is tell you whether any given touchpoint caused the conversion to happen.
This is not a flaw that better technology will fix. It is a definitional constraint. Causation requires a counterfactual: what would have happened if this touchpoint had not existed? Attribution models, including the more sophisticated data-driven variants, do not answer that question. They answer a different question: given that this conversion happened, how do we distribute credit across the path?
The distinction matters enormously for budget allocation. If you are moving spend toward channels that look efficient in your attribution reports, you may be moving spend toward channels that are good at claiming credit rather than channels that are driving incremental revenue. Paid brand search is the canonical example. It consistently looks excellent in attribution. It is also the channel most likely to be capturing intent that would have converted anyway.
I have sat in budget reviews where a team was confidently reallocating seven figures based on attribution data that had a fundamental flaw nobody had noticed: the conversion window on one channel was set to 90 days and on another to 7. The comparison was meaningless. The decision was being made on noise dressed up as signal.
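To make that flaw concrete, here is a minimal sketch of how the same channel data produces very different attributed conversion counts depending on the lookback window. Every date, count, and spend figure below is invented purely for illustration; the point is the divergence, not the numbers.

```python
from datetime import date, timedelta

# Hypothetical last-touch data for one channel: (touch_date, conversion_date).
paths = [
    (date(2024, 1, 2), date(2024, 1, 5)),   # converts 3 days after the touch
    (date(2024, 1, 3), date(2024, 1, 25)),  # 22 days
    (date(2024, 1, 4), date(2024, 3, 1)),   # 57 days
    (date(2024, 1, 6), date(2024, 1, 9)),   # 3 days
]

def attributed_conversions(paths, window_days):
    """Count conversions that land inside the lookback window after the touch."""
    window = timedelta(days=window_days)
    return sum(1 for touch, conversion in paths if conversion - touch <= window)

spend = 1000  # hypothetical channel spend in GBP

for window_days in (7, 90):
    conversions = attributed_conversions(paths, window_days)
    print(f"{window_days:>2}-day window: {conversions} conversions, "
          f"CPA £{spend / conversions:.2f}")

# Output:
#  7-day window: 2 conversions, CPA £500.00
# 90-day window: 4 conversions, CPA £250.00
# Same channel, same spend: the apparent efficiency doubles with the window.
```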
The Assumption Stack Underneath Your Attribution Model
Every attribution model rests on a stack of assumptions. Most teams are aware of the top layer, the model itself, but have never examined what is underneath it. Here is what that stack typically looks like.
First, there is the assumption that the touchpoints you can track represent the touchpoints that actually influenced the decision. They do not. Word of mouth, organic search visits that did not convert, podcast impressions, out-of-home exposure, a recommendation from a colleague: none of these appear in your attribution path. The path you see is the path you can measure, which is a subset of the path that actually happened.
Second, there is the assumption that the conversion window you have set captures the relevant decision period. For a considered B2B purchase with a six-month sales cycle, a 30-day window is not measuring the right thing. For a fast-moving consumer product, a 30-day window may be too long, attributing credit to touchpoints that were irrelevant by the time the purchase happened.
Third, there is the assumption that your tracking is complete and consistent across channels. In a post-cookie, cross-device world, it is not. GA4 handles session tracking differently from Universal Analytics, and many teams have not fully worked through what that means for their historical comparisons. The Moz team has a useful breakdown of what changed in GA4 and what it means for your data if you are still working through the transition.
Fourth, there is the assumption that the model you have chosen reflects how customers actually make decisions. Position-based models assume early and late touchpoints matter most. Linear models assume all touchpoints contribute equally. Neither of these is derived from your customer behaviour. They are conventions applied to your data.
When you stack all four of these assumptions together, you are not looking at a measurement of reality. You are looking at a model of a model, built on incomplete data, with a set of rules that were never validated against your specific market.
How Multi-Channel Allocation Goes Wrong at the Decision Layer
The attribution problem feeds directly into the allocation problem. If your attribution data is measuring credit rather than causation, your allocation decisions are optimising for the wrong thing.
Here is how this typically plays out. A team runs attribution analysis and finds that paid search has a cost per acquisition of £18, display has a CPA of £65, and social has a CPA of £42. The obvious conclusion is to shift budget from display and social toward paid search. The allocation decision follows the data.
But paid search, particularly branded paid search, is almost entirely a demand capture channel. It converts people who were already going to buy. Display and social, when they are working, are building the awareness and intent that paid search then captures. If you cut display and social, paid search may continue to look efficient for a quarter or two while it burns through the pipeline those channels built. Then volume drops and the team cannot explain why.
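One way to see the trap is to discount each channel's reported CPA by an estimate of how incremental its conversions actually are. The sketch below uses the CPA figures from the example above and invented incrementality fractions; in practice you would estimate those fractions with holdout tests like the ones discussed later in this article.

```python
# Reported CPA versus incrementality-adjusted CPA.
# The incrementality fractions are invented; estimate yours with holdout tests.
channels = {
    # channel: (reported CPA in GBP, estimated share of truly incremental conversions)
    "paid_brand_search": (18, 0.25),  # mostly captures demand that already existed
    "display":           (65, 0.80),
    "paid_social":       (42, 0.70),
}

for name, (reported_cpa, incrementality) in channels.items():
    # If only 25% of a channel's conversions were truly caused by it,
    # each *incremental* conversion cost four times the reported CPA.
    effective_cpa = reported_cpa / incrementality
    print(f"{name:<18} reported £{reported_cpa:>2}  effective £{effective_cpa:>6.2f}")

# paid_brand_search  reported £18  effective £ 72.00
# display            reported £65  effective £ 81.25
# paid_social        reported £42  effective £ 60.00
# The "cheapest" channel on paper is no longer the obvious place to move budget.
```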
I have watched this exact cycle play out more than once. The team cuts upper-funnel spend based on attribution efficiency data, performance holds for a few months because there is residual demand in the pipeline, then it falls off a cliff. By the time the connection is made, the brand equity has eroded and the pipeline has dried up, and rebuilding it takes considerably longer than it took to destroy it.
Forrester has written about the limitations of standard marketing reporting frameworks and how the industry needs to rethink what measurement is actually for. The core argument, that reporting should inform decisions rather than justify past ones, is directly relevant here.
What a First Principles Approach to Attribution Actually Looks Like
A first principles approach does not mean abandoning attribution data. It means using it with appropriate scepticism and supplementing it with frameworks that address its structural limitations.
Start by writing down every assumption your current attribution setup makes. Not in a meeting: on paper, individually. What does your model assume about the conversion window? What touchpoints are you not tracking? What does your channel comparison assume about equivalence? Most teams have never done this exercise. It takes about an hour and it is consistently illuminating.
Then separate your channels by function. Demand creation channels and demand capture channels should not be evaluated on the same metrics. Asking whether a brand awareness campaign has a competitive cost per acquisition is the wrong question. Asking whether it is building measurable pipeline over a 90-day horizon is the right one. The metrics you use for email re-engagement, which Mailchimp covers well in their overview of core marketing metrics, are not the same metrics you should be using for top-of-funnel paid social.
Then build in a mechanism for testing causation rather than just measuring correlation. Geo holdout tests, where you run a channel in some markets and not others and compare outcomes, are the most accessible version of this. They are not perfect, but they give you something attribution cannot: a direct comparison between a world where the channel ran and a world where it did not.
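Here is a minimal sketch of what the readout from such a test might look like, assuming you have conversion counts for matched test and control markets before and during the holdout. The market names and all figures are invented for illustration; a production analysis would also want careful market matching and significance testing.

```python
# Minimal geo holdout readout: the channel runs in test markets and is paused
# in matched control markets. All names and figures are invented.

# Conversions during the test period.
test_post    = {"Manchester": 420, "Leeds": 380, "Bristol": 310}      # channel on
control_post = {"Sheffield": 350, "Nottingham": 330, "Cardiff": 290}  # channel off

# Conversions in the pre-period, to normalise baseline differences between geos.
test_pre    = {"Manchester": 400, "Leeds": 360, "Bristol": 300}
control_pre = {"Sheffield": 345, "Nottingham": 325, "Cardiff": 288}

def growth(post, pre):
    """Aggregate growth rate for a group of markets."""
    return sum(post.values()) / sum(pre.values())

# Lift: how much more the test geos grew than the control geos did.
lift = growth(test_post, test_pre) / growth(control_post, control_pre) - 1
print(f"Estimated incremental lift from the channel: {lift:+.1%}")
# Estimated incremental lift from the channel: +3.4%
```

This is a crude difference-in-differences comparison, nothing more. But even this rough version answers a question attribution structurally cannot: what happened in markets where the channel did not run.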
The goal is not a perfect model. It is an honest one. A model you understand, with assumptions you have examined, producing data you can interrogate. That is worth considerably more than a sophisticated black box that produces confident-looking numbers you cannot explain.
The Critical Thinking Gap in Marketing Measurement
If I had to identify the single biggest problem in how marketing teams approach attribution and allocation, it would not be the tools, the data quality, or the budget. It would be the absence of critical thinking at the point where data becomes decision.
When I think about what I would teach a junior marketer in their first 30 days, critical thinking is at the top of the list. Not because it is a nice skill to have, but because without it, every other skill is operating on faulty inputs. A marketer who can build a beautiful dashboard but cannot interrogate what the dashboard is actually measuring is producing work that looks rigorous but is not.
Attribution is where this gap shows up most expensively. The numbers look precise. The visualisations are clean. The platform vendor has a name for the model and a certification programme to go with it. All of this creates an air of authority that discourages questioning. But the question that most needs asking is rarely asked at all: does this model measure what we think it measures?
I judged the Effie Awards, which evaluate marketing effectiveness rather than creative execution. The campaigns that held up under scrutiny were not the ones with the most sophisticated attribution setups. They were the ones where the team had been honest about what they knew, clear about what they were assuming, and disciplined about connecting their activity to business outcomes rather than platform metrics. That is a thinking discipline, not a technology one.
Unbounce has a practical breakdown of content marketing metrics that is worth reading for the way it distinguishes between metrics that indicate activity and metrics that indicate impact. The same distinction applies across every channel in your attribution model.
Where to Start If Your Attribution Setup Has Not Been Examined in a While
Practical starting points matter. If your attribution setup has been running on autopilot, here is a sequence that produces useful results without requiring a complete rebuild.
Audit your conversion windows first. Are they consistent across channels? Are they appropriate for your purchase cycle? A mismatch here invalidates any cross-channel comparison you are making. Fix this before anything else.
Second, map your tracked touchpoints against your actual customer experience. Talk to recent customers. Ask them how they heard about you, what they read before buying, who they spoke to. You will almost always find significant touchpoints that do not appear in your attribution data at all. This is not a reason to despair about your data. It is important context for how much weight to put on it.
Third, run a simple channel isolation test. Pick one channel that your attribution model rates as low-efficiency. Pause it in one market or segment while maintaining it in another. Measure the difference in conversion volume over 60 to 90 days. The result will not be definitive, but it will give you directional evidence that attribution alone cannot provide.
Fourth, stop comparing channels on a single metric. Build a channel evaluation framework that uses different metrics for different channel functions. Reach and brand recall for awareness channels. Engagement and pipeline contribution for consideration channels. Conversion rate and CPA for capture channels. The Crazy Egg team has a useful perspective on how to think about channel-specific metrics in the context of email, and the principle extends broadly. A minimal sketch of what such a framework might look like follows these steps.
Finally, document your assumptions. Write them down. Share them with your team. Review them quarterly. This sounds basic, but almost no teams do it. The act of writing down what you are assuming forces a level of scrutiny that looking at a dashboard never will.
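As flagged above, here is one minimal way a function-aware evaluation framework could be encoded, so that nobody accidentally judges an awareness channel on CPA. The channel groupings and metric names are illustrative assumptions, not a standard taxonomy; the point is that the structure makes cross-function comparisons impossible by construction.

```python
# An illustrative function-aware evaluation framework: each channel function
# gets its own metric set. Groupings and metric names are assumptions.

FRAMEWORK = {
    "awareness":     {"channels": ["display", "podcast", "out_of_home"],
                      "metrics":  ["reach", "brand_recall"]},
    "consideration": {"channels": ["paid_social", "content", "email_nurture"],
                      "metrics":  ["engagement_rate", "pipeline_contribution"]},
    "capture":       {"channels": ["paid_brand_search", "retargeting"],
                      "metrics":  ["conversion_rate", "cpa"]},
}

def metrics_for(channel: str) -> list[str]:
    """Return the metrics a channel should be judged on, given its function."""
    for spec in FRAMEWORK.values():
        if channel in spec["channels"]:
            return spec["metrics"]
    raise KeyError(f"{channel!r} has no assigned function yet: classify it first")

print(metrics_for("display"))            # ['reach', 'brand_recall']
print(metrics_for("paid_brand_search"))  # ['conversion_rate', 'cpa']
```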
There is more on building a measurement framework that holds up to this kind of scrutiny in the Marketing Analytics section of The Marketing Juice, covering everything from GA4 configuration to how to structure a measurement strategy that informs decisions rather than just reporting on them.
The Honest Approximation Standard
Marketing does not need perfect measurement. It never had it and it never will. What it needs is honest approximation: a clear-eyed view of what the data shows, what it does not show, and what assumptions are filling the gap between the two.
The industry has spent the last decade building increasingly sophisticated tools to produce increasingly precise-looking numbers. But precision in the output does not compensate for flawed assumptions in the model. A number that is wrong to three decimal places is still wrong.
The teams I have seen make the best allocation decisions are not the ones with the most advanced attribution technology. They are the ones who have thought hardest about what they are measuring, been honest about the limitations of their data, and built decision-making processes that account for uncertainty rather than pretending it does not exist.
That is a first principles discipline. It starts with a question that most teams skip: what do we actually know, and what are we assuming? Get that question into your regular measurement review and the quality of your allocation decisions will improve, not because your data got better, but because your thinking did.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
