Marketing Performance Analysis: Most of It Measures the Wrong Things
Marketing performance analysis is the process of evaluating how well your marketing activity drives business outcomes, not just channel metrics. Done properly, it connects spend to revenue, isolates what is genuinely working from what is riding existing demand, and gives leadership a defensible basis for budget decisions.
Most teams are not doing it properly. They are measuring activity, reporting outputs, and calling it performance. The distinction matters more than most marketing leaders want to admit.
Key Takeaways
- Most marketing performance analysis measures activity and channel outputs, not business outcomes. The gap between the two is where bad decisions get made.
- Lower-funnel performance metrics are routinely over-credited. Much of what paid search and retargeting claim to drive would have happened without them.
- Incrementality, not last-click attribution, is the honest question in performance analysis. Few teams ask it seriously.
- Fixing measurement does not require a perfect data stack. It requires honest approximation and the willingness to challenge what the numbers appear to show.
- The businesses that get this right treat performance analysis as a strategic function, not a reporting function.
In This Article
- Why Most Performance Analysis Gets the Question Wrong
- What Marketing Performance Analysis Actually Requires
- The Attribution Problem Has Not Been Solved
- How to Build a Performance Framework That Is Actually Useful
- Incrementality Testing: The Honest Version of Performance Analysis
- The Reach Problem That Most Performance Analysis Ignores
- What Good Performance Reporting Looks Like in Practice
- The Technology Layer: Useful Tools, Not Magic Solutions
Why Most Performance Analysis Gets the Question Wrong
Early in my career, I was as guilty of this as anyone. I ran performance teams that were genuinely excellent at optimising campaigns within the channel. Click-through rates, quality scores, cost-per-acquisition, return on ad spend. We were rigorous about all of it. And we were answering the wrong question.
The question we were answering was: how efficiently is this channel operating? The question we should have been answering was: how much of this business outcome would not have happened without us?
Those are not the same question. Not even close.
When I started looking at performance through the lens of incrementality rather than attribution, it changed how I thought about almost everything. Branded paid search capturing people who were already going to buy. Retargeting following people around the internet after they had already decided to convert. Lower-funnel activity that looked brilliant in the channel dashboard and was, in many cases, a sophisticated way of taking credit for demand that existed independently of the spend.
This is not a niche problem. It is one of the most persistent distortions in marketing measurement, and it shapes billions of pounds in budget decisions every year. Lower-funnel marketing is not without value, but its value is consistently overstated when the measurement framework is built around attribution rather than incrementality.
If you want to understand marketing performance with any real honesty, the first thing to do is separate “this channel reported a conversion” from “this channel caused a conversion.” Most dashboards do not make that distinction. Most performance reviews do not either.
What Marketing Performance Analysis Actually Requires
Good performance analysis requires three things that most teams either skip or underinvest in.
First, a clear definition of what you are trying to measure. Not “marketing performance” as an abstract concept, but specific business outcomes with specific time horizons. Revenue growth, customer acquisition, retention rate, category share. The more precisely you define the outcome, the more honest the analysis becomes.
Second, a measurement framework that distinguishes between correlation and causation. This is where most teams fall down. They see a spike in conversions during a campaign period and attribute it to the campaign. They do not ask whether the spike would have happened anyway, whether other factors were in play, or whether the campaign was simply present during a period of organic demand.
Third, the institutional willingness to act on what the analysis shows, even when it is uncomfortable. I have sat in more post-campaign reviews than I can count where the data pointed clearly to underperformance, and the room collectively decided to reframe the numbers rather than confront them. That is not analysis. That is theatre with a spreadsheet.
If your performance analysis process is primarily about justifying decisions already made, it is not a strategic function. It is a reporting function dressed up as something more rigorous.
The Attribution Problem Has Not Been Solved
I want to be direct about this because a lot of vendors will tell you otherwise. Attribution has not been solved. The models have become more sophisticated. The data inputs have multiplied. The dashboards are considerably more impressive than they were fifteen years ago. But the fundamental problem, which is that a customer’s path to purchase is too complex and too human to be accurately captured by any single model, remains unsolved.
Last-click attribution was always a fiction. First-click attribution was a different fiction. Data-driven attribution is a more plausible fiction, but it still operates within the walled gardens of individual platforms, each of which has a commercial incentive to show you that its channel deserves more credit.
When I was managing significant ad spend across multiple channels, one of the most instructive exercises I ever ran was a simple overlap analysis. We pulled the conversion paths from three major platforms and looked at how many conversions each platform was claiming credit for. The total claimed conversions across the three platforms was roughly 2.4 times our actual sales volume. Every platform thought it was the decisive touchpoint. They could not all be right. Most of them were wrong.
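To make the arithmetic concrete, here is a minimal sketch of that kind of overlap check. The platform names and figures are illustrative, not the numbers from the engagement described above.

```python
# Minimal sketch of a platform over-claim check.
# All figures here are illustrative placeholders.

platform_claimed_conversions = {
    "search": 18_400,
    "social": 14_900,
    "display": 9_200,
}

actual_conversions = 17_700  # from the sales system of record

total_claimed = sum(platform_claimed_conversions.values())
overclaim_ratio = total_claimed / actual_conversions

print(f"Total claimed by platforms: {total_claimed:,}")
print(f"Actual conversions:         {actual_conversions:,}")
print(f"Over-claim ratio:           {overclaim_ratio:.2f}x")

# Anything meaningfully above 1.0x means the platforms are collectively
# claiming credit for more conversions than actually happened.
```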
The honest response to the attribution problem is not to find a better model and declare it solved. It is to hold your attribution data lightly, triangulate it against other signals, and build your decision-making on a portfolio of evidence rather than any single measurement source. Tools like Hotjar’s behavioural analytics can add a qualitative layer to quantitative channel data, helping you understand what users are actually doing rather than what the funnel model assumes they are doing.
Marketing performance analysis, done honestly, is an exercise in triangulation and approximation. Not precision. Anyone selling you precision is selling you something the data cannot actually deliver.
This connects to a broader point about how marketing teams use data. The market research and competitive intelligence work that informs strategic planning faces the same challenge: the data gives you a perspective on reality, not reality itself. The best analysts know the difference and build their conclusions accordingly.
How to Build a Performance Framework That Is Actually Useful
A useful performance framework starts with the business model, not the channel mix. What does the business need marketing to do? Acquire new customers? Retain existing ones? Grow average order value? Defend market share? The answer shapes everything downstream.
From there, you need to establish what I think of as the measurement hierarchy. At the top, you have business outcomes: revenue, profit, customer numbers, market share. These are the things that actually matter to the P&L. Below that, you have leading indicators: things that reliably predict business outcomes, even if they are not outcomes themselves. New customer acquisition rate. Brand consideration scores. Share of search. These are useful because they give you signal earlier than the business outcomes do.
Below that, you have channel metrics: impressions, clicks, cost-per-click, conversion rate, ROAS. These are useful for optimising channel execution. They are not useful for evaluating whether the channel is worth having in the mix at all, because they cannot tell you what would have happened without the spend.
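One way to keep the hierarchy honest is to write it down as plain data and force every number in a report into exactly one tier. A minimal sketch, with example metric names rather than a definitive taxonomy:

```python
# A sketch of the measurement hierarchy as plain data. The metric names are
# examples; the point is forcing every reported number into exactly one tier.

MEASUREMENT_HIERARCHY = {
    "business_outcomes": [       # what actually matters to the P&L
        "revenue", "profit", "customer_count", "market_share",
    ],
    "leading_indicators": [      # earlier signal that predicts outcomes
        "new_customer_acquisition_rate", "brand_consideration", "share_of_search",
    ],
    "channel_metrics": [         # useful for optimising execution, not for valuing the channel
        "impressions", "clicks", "cpc", "conversion_rate", "roas",
    ],
}

def tier_of(metric: str) -> str:
    """Return the hierarchy tier a metric belongs to, or flag it as unclassified."""
    for tier, metrics in MEASUREMENT_HIERARCHY.items():
        if metric in metrics:
            return tier
    return "unclassified"  # a report full of these is a warning sign

print(tier_of("roas"))     # channel_metrics
print(tier_of("revenue"))  # business_outcomes
```

A report that leads with the top tier and relegates the bottom tier to an appendix is usually asking better questions than one that does the reverse.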
The mistake most teams make is treating channel metrics as if they are business outcomes. They are not. A paid search campaign with a strong ROAS might be capturing people who would have found you organically anyway. A brand campaign with no measurable short-term conversion impact might be building the consideration that makes all your lower-funnel activity possible. The channel metric tells you neither of these things.
When I was running agency operations, one of the disciplines I tried to build into every client relationship was a regular “so what” review. Not a performance review in the traditional sense, but a session where we asked: given everything we are seeing in the data, what would we do differently, and why? If the answer was “nothing, because everything looks good,” that was usually a sign we were not asking hard enough questions.
Incrementality Testing: The Honest Version of Performance Analysis
If you want to know whether your marketing is actually causing the outcomes you are reporting, incrementality testing is the closest thing to an honest answer available to most teams.
The principle is straightforward. You create a test group that is exposed to your marketing activity and a control group that is not, then compare the outcomes. The difference between the two groups is your incremental lift: the closest available approximation of what your marketing actually caused, rather than merely coincided with.
In practice, running clean incrementality tests is harder than it sounds. You need sufficient volume to achieve statistical significance. You need to prevent contamination between test and control groups. You need to account for external variables that might affect both groups differently. And you need to run the tests long enough to capture the full effect, which is particularly important for brand activity where the impact plays out over weeks or months rather than days.
But even imperfect incrementality testing is more honest than attribution modelling, because it is at least asking the right question. I have seen teams run basic geo-split tests, pausing activity in one region while maintaining it in another, and discover that their retargeting spend was generating almost no incremental lift at all. The conversions in the paused region barely moved. That is an uncomfortable finding. It is also an extraordinarily valuable one.
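For readers who want the mechanics, here is a simplified sketch of the lift calculation behind a geo-split test, with a basic two-proportion z-test to check whether the difference is distinguishable from noise. The figures are illustrative, and a real test would also need matched regions, adequate volume, and checks for contamination and seasonality.

```python
# A simplified sketch of incremental-lift maths for a geo-split test, assuming
# comparable exposed and holdout regions. Figures are illustrative only.
from math import sqrt
from statistics import NormalDist

# Exposed (activity on) vs holdout (activity paused) regions
test_visitors,    test_conversions    = 120_000, 3_180
control_visitors, control_conversions = 118_500, 3_050

cr_test    = test_conversions / test_visitors
cr_control = control_conversions / control_visitors

relative_lift = (cr_test - cr_control) / cr_control

# Two-proportion z-test: is the observed lift distinguishable from noise?
pooled = (test_conversions + control_conversions) / (test_visitors + control_visitors)
se = sqrt(pooled * (1 - pooled) * (1 / test_visitors + 1 / control_visitors))
z = (cr_test - cr_control) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Test CR:       {cr_test:.3%}")
print(f"Control CR:    {cr_control:.3%}")
print(f"Relative lift: {relative_lift:+.1%}")
print(f"p-value:       {p_value:.3f}")
```

If the lift is small and the p-value is large, the honest reading is that the paused activity was not doing much incremental work, however good its attributed numbers looked.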
Customer surveys are another underused tool in this space. Post-purchase surveys asking customers how they heard about you, or what prompted them to buy now, can provide signal that no attribution model can replicate. It is self-reported data, and it has its own limitations, but it gives you a qualitative check on what the quantitative data is telling you.
The Reach Problem That Most Performance Analysis Ignores
There is a structural bias in most performance analysis frameworks that I think deserves more attention than it gets. Because lower-funnel activity is easier to measure, it tends to dominate performance conversations. Because brand and reach activity is harder to measure, it tends to get undervalued or cut when budgets tighten.
This creates a compounding problem. You optimise towards what you can measure. What you can measure is predominantly the bottom of the funnel. So you invest more in the bottom of the funnel. Over time, you shrink the pool of people who are even aware of your brand, fewer people enter the funnel at the top, and lower-funnel performance eventually deteriorates. But by the time that happens, the connection to the original measurement bias is invisible.
Think about it in simple terms. Someone who walks into a clothes shop and tries something on is far more likely to buy than someone browsing the window. Lower-funnel marketing is brilliant at converting the person who is already in the shop. But if you stop doing anything to bring people through the door in the first place, eventually there is nobody left to convert. The performance metrics look fine right up until the moment they do not.
I saw this play out in real time with a client who had progressively shifted budget away from brand activity and into paid search over a three-year period. Short-term ROAS looked excellent throughout. Then branded search volume started declining, organic traffic softened, and the cost of acquisition through paid search began climbing because they were competing harder for a shrinking pool of people who already knew them. Reversing that took eighteen months and a significant budget commitment that would not have been necessary if the measurement framework had been asking better questions from the start.
Effective campaign planning requires holding both the short-term and long-term performance picture simultaneously. Most teams are structurally better at the former than the latter, because the measurement infrastructure rewards it.
What Good Performance Reporting Looks Like in Practice
A performance report that is genuinely useful to a senior leadership team has a different shape from a standard marketing dashboard. It leads with business outcomes, not channel metrics. It contextualises performance against market conditions, not just against previous periods. It distinguishes between what the data shows and what the data means. And it includes a clear recommendation, not just a summary of what happened.
The best performance reviews I have been part of were the ones where the marketing team came in with a point of view. Not “here is what the data shows” but “here is what we think is happening, here is the evidence for it, and here is what we think we should do about it.” That is a different posture from reporting, and it is a more valuable one.
It also requires a level of intellectual honesty that is not always comfortable. If the data suggests that a significant portion of your performance marketing spend is not generating incremental value, saying so in a leadership meeting takes nerve. But it is the kind of honesty that builds long-term credibility for the marketing function, and it is the kind of analysis that actually drives better business decisions.
One practical structure I have used: open with the business outcome headline, then give the two or three most important things driving it (positive or negative), then the channel-level detail for those who want it, then a clear “what we are changing and why” section. That last section is the one most reports omit. It is also the one that demonstrates whether the analysis has been taken seriously.
For teams building out their broader research and measurement capabilities, the wider body of market research and competitive intelligence frameworks on this site covers how performance analysis fits into the larger strategic picture, from understanding market dynamics to tracking competitive positioning over time.
The Technology Layer: Useful Tools, Not Magic Solutions
The marketing technology stack available for performance analysis has expanded enormously over the past decade. There are tools for attribution modelling, media mix modelling, customer experience analysis, cohort tracking, and predictive analytics. Some of them are genuinely useful. None of them solve the fundamental measurement challenge.
What technology does well is automate data collection, surface patterns at scale, and reduce the manual effort of pulling together disparate data sources. What it does not do is tell you whether the patterns it surfaces are meaningful, or whether the data it has collected accurately represents what actually happened.
I have seen teams invest heavily in sophisticated analytics platforms and use them to produce more elaborate versions of the same misleading reports they were producing before. The technology did not change the quality of the analysis. It just made the misleading analysis look more credible.
The discipline that matters is not the tool you use. It is the questions you ask before you open the dashboard. What are we trying to understand? What would a meaningful result look like? What alternative explanations exist for what we are seeing? Those questions are independent of the technology, and they are the ones that determine whether the analysis is useful.
For teams investing in content and organic performance alongside paid, tools like generative engine optimisation frameworks are adding a new dimension to performance measurement as AI-driven search changes how visibility is tracked and attributed.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
