Marketing Performance Assessment: What the Numbers Are Hiding
A marketing performance assessment is a structured review of how well your marketing activity is driving business outcomes, not just generating metrics. Done properly, it separates the work that genuinely moves revenue from the work that looks productive but changes very little.
Most assessments stop at channel performance, attribution dashboards, and cost-per-acquisition figures. That is the wrong place to stop. The numbers your tools produce are a perspective on reality, not reality itself, and the gap between the two is where most marketing budgets quietly disappear.
Key Takeaways
- Most marketing performance assessments measure activity and credit, not genuine business contribution. Fix the question before you fix the data.
- Attribution models are built to assign credit, not to reveal causation. Treating them as proof of impact is one of the most expensive mistakes in marketing.
- A large share of what performance marketing gets credited for would have happened anyway. Incrementality is the only measure that tells you what marketing actually caused.
- Reaching new audiences drives growth. Capturing existing intent optimises what you already have. Most businesses over-invest in the latter and wonder why growth stalls.
- Honest approximation beats false precision. The goal of a performance assessment is not a perfect dashboard, it is a clearer view of where money is working and where it is not.
In This Article
- Why Most Marketing Performance Assessments Produce the Wrong Answers
- The Attribution Problem Nobody Wants to Talk About
- What a Proper Marketing Performance Assessment Actually Covers
- The Metrics That Deserve More Weight Than They Get
- How to Structure a Marketing Performance Assessment
- The Honest Conversation Most Assessments Avoid
- Common Patterns That Signal a Broken Assessment Process
- What Good Looks Like
Why Most Marketing Performance Assessments Produce the Wrong Answers
I have sat in a lot of performance reviews over the years. The format is almost always the same: someone opens a dashboard, walks through channel-by-channel results, and the conversation centres on which numbers went up and which went down. The implicit assumption is that the numbers in the dashboard represent what marketing actually did to the business.
They rarely do.
When I was running an agency and managing significant media budgets across multiple clients, I started noticing a pattern. Clients who paused spend in certain channels, often for budget reasons, would see their attributed conversions drop in those channels. But their overall sales would barely move. The channel had been claiming credit for transactions that were going to happen regardless. The assessment process never caught it because nobody was asking the right question. The right question is not “how did each channel perform?” It is “what would have happened without this activity?”
That distinction changes everything about how you interpret results.
The Attribution Problem Nobody Wants to Talk About
Attribution models are built to assign credit across touchpoints. They are not built to measure causation. Whether last-click, first-click, linear, or data-driven, they all share the same fundamental flaw: they start from the assumption that every touchpoint in the path contributed something, and then they argue about how much.
The harder question, the one attribution models cannot answer, is whether any of those touchpoints were necessary at all.
I spent a long stretch of my career overvaluing lower-funnel performance. Paid search looked brilliant on every dashboard I ran. The cost-per-acquisition was clean, the ROAS was strong, and the attribution model was generous. What I was slower to recognise was that a meaningful portion of those conversions were people who had already decided to buy. They searched, they clicked, we got the credit. But the decision had been made before they ever hit a search result. We were paying to confirm intent that already existed, not to create it.
This is not a reason to abandon performance channels. It is a reason to measure them honestly. Incrementality testing, holdout groups, and geo-based experiments are imperfect tools, but they are far more honest than attribution dashboards. If you have never run an incrementality test on your paid search spend, you do not actually know what it is contributing to your business.
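To make that concrete, here is a minimal sketch of the arithmetic behind a geo holdout read. It assumes you have paused spend in a set of control regions matched to your test regions on baseline volume; the region names, conversion counts, and spend figure are all hypothetical.

```python
# Minimal geo-holdout incrementality sketch. All figures are illustrative.
# Assumes test and control regions were matched on baseline conversion volume.

test_regions = {"north": 1240, "east": 980, "midlands": 1105}   # spend ON
control_regions = {"south": 1190, "west": 955, "wales": 1088}   # spend PAUSED

test_total = sum(test_regions.values())
control_total = sum(control_regions.values())

# Incremental conversions: what happened over and above the no-spend baseline.
incremental = test_total - control_total
lift = incremental / control_total

spend = 42_000  # media spend in the test regions during the window
cost_per_incremental = spend / incremental if incremental > 0 else float("inf")

print(f"Lift: {lift:.1%}")  # ~2.8% on these numbers
print(f"Cost per incremental conversion: £{cost_per_incremental:,.0f}")
```

A real test needs properly matched markets, a long enough window, and a significance check. But even this crude arithmetic often tells a very different story from the attributed cost-per-acquisition, which is precisely the point.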
If you want to understand how this connects to broader go-to-market thinking, the Go-To-Market and Growth Strategy hub covers the strategic frameworks that sit underneath these measurement questions.
What a Proper Marketing Performance Assessment Actually Covers
A useful assessment works across three levels. Most businesses only operate at the first.
Level 1: Channel and Campaign Performance
This is the standard layer: cost, volume, conversion rates, attributed revenue by channel. It is necessary but insufficient. It tells you how efficiently you are operating within each channel. It does not tell you whether those channels are driving net new business or recycling existing demand.
At this level, you are asking: are we running these channels competently? That is a legitimate question. It is just not the most important one.
Level 2: Business Contribution
This layer asks whether marketing activity is genuinely moving business outcomes. Revenue, new customer acquisition, retention, market share. Not attributed conversions, actual business results.
The test here is simple in concept and difficult in practice: if you removed this activity, what would change? If the honest answer is “probably not much,” that is important information. It means you are spending money on activity that is largely decorative from a business standpoint.
I have seen this play out in businesses that had sophisticated attribution setups and genuinely believed their marketing was working hard. When we dug into the actual business trajectory, growth was flat. The marketing was optimised, the dashboards were green, and the business was standing still. The assessment process had been measuring the wrong thing for years.
Level 3: Strategic Fit
This is the layer most assessments never reach. Is the marketing activity aligned with where the business needs to go, not just where it has been? Are you reaching new audiences and expanding your addressable market, or are you optimising within a pool of demand that is already as large as it is going to get?
Growth requires reaching people who do not yet know they want what you sell. That is a fundamentally different challenge from converting people who are already in-market. A performance assessment that only measures the latter will always produce a skewed picture of marketing’s contribution to the business.
The Metrics That Deserve More Weight Than They Get
Most marketing dashboards are built around efficiency metrics: CPA, ROAS, CTR, conversion rate. These are useful for operational decisions. They are poor proxies for business impact.
The metrics that tend to be underweighted in performance assessments are the ones that are harder to track but more directly connected to growth.
New customer rate. What proportion of your conversions are genuinely new customers versus existing customers or lapsed buyers? If this number is low, your marketing is largely serving people who already know you. That is not a growth engine, it is a retention mechanism dressed up as acquisition.
Category penetration. Are you reaching more of the available market over time? This is a harder number to track but it is the one that most directly predicts long-term growth. Brands grow by reaching more buyers, not by extracting more value from the ones they already have.
Share of voice versus share of market. Over a sustained period, these two numbers tend to track together. If your share of voice is declining relative to competitors, your share of market will follow. This is one of the more reliable leading indicators available to marketers, and it rarely appears in a standard performance dashboard.
Incrementality by channel. As noted above, this is the measure that tells you what marketing is actually causing rather than what it is claiming credit for. It is more expensive to measure properly, but it is the only number that gives you an honest read on what would happen if you cut spend.
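Most of these can be approximated from data you probably already hold. As one example, here is a minimal sketch of a new customer rate calculation. It assumes you can export conversions with a customer identifier and an order date; the records and dates below are hypothetical.

```python
from datetime import date

# Hypothetical conversion records: (customer_id, order_date).
conversions = [
    ("c001", date(2024, 3, 2)),  ("c002", date(2024, 3, 5)),
    ("c001", date(2024, 6, 11)), ("c003", date(2024, 6, 14)),
    ("c002", date(2024, 9, 1)),  ("c004", date(2024, 9, 3)),
]

period_start, period_end = date(2024, 6, 1), date(2024, 9, 30)

# A customer counts as "new" if their first-ever order falls inside the period.
first_order = {}
for customer_id, order_date in sorted(conversions, key=lambda c: c[1]):
    first_order.setdefault(customer_id, order_date)

in_period = [c for c in conversions if period_start <= c[1] <= period_end]
new = [c for c in in_period if first_order[c[0]] >= period_start]

print(f"New customer rate: {len(new) / len(in_period):.0%}")  # 50% on this data
```

If that number sits well below half and your growth targets assume new buyers, the dashboard and the strategy are pointing in different directions.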
BCG’s work on go-to-market strategy and pricing makes the point that businesses often optimise within existing segments rather than expanding their reach. The same dynamic applies to marketing measurement: we tend to measure what we are already doing rather than what we should be doing.
How to Structure a Marketing Performance Assessment
A structured assessment does not need to be a six-week consulting project. It needs to be honest, systematic, and connected to actual business questions. Here is how I approach it.
Start with the business, not the marketing. What were the business objectives for the period? Revenue targets, new customer goals, market expansion ambitions. Before you open a single dashboard, get clear on what marketing was supposed to be doing for the business. Then assess whether it did that, not whether it generated impressions or clicks.
Audit your measurement setup before you trust your data. Tracking errors, attribution window mismatches, and duplicate conversions are more common than most people admit. I have walked into client accounts where the attributed conversion volume was materially overstated because of double-counting across platforms. Before you draw conclusions from the data, make sure the data is clean.
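One cheap audit worth running before anything else is a cross-platform dedup on conversion IDs. A rough sketch, assuming each platform's export lists the order IDs it claimed credit for; the platform names and IDs are made up.

```python
from collections import Counter

# Hypothetical per-platform exports: the order IDs each platform claimed.
platform_claims = {
    "paid_search": {"o-101", "o-102", "o-103", "o-104"},
    "paid_social": {"o-102", "o-104", "o-105"},
    "affiliate":   {"o-104", "o-106"},
}

claims = Counter()
for orders in platform_claims.values():
    claims.update(orders)

total_claimed = sum(claims.values())  # conversions as the platforms report them
unique_orders = len(claims)           # conversions that actually happened
double_counted = {o: n for o, n in claims.items() if n > 1}

print(f"Claimed: {total_claimed}, actual: {unique_orders}")
print(f"Overstatement: {total_claimed / unique_orders:.2f}x")  # 1.50x here
print(f"Credited more than once: {double_counted}")
```

If the overstatement factor is meaningfully above 1.0, every per-channel number downstream of it inherits the inflation.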
Separate demand creation from demand capture. This is the most important structural distinction in a performance assessment. Activity that reaches new audiences and builds brand salience operates differently from activity that captures existing intent. They should be measured differently and evaluated against different benchmarks. Treating them as equivalent is how businesses end up systematically underfunding brand-building.
Test your assumptions about what is working. If a channel has never been subject to an incrementality test or a holdout experiment, you do not actually know what it is contributing. Pick your highest-spend channel and run a test. The results are often uncomfortable. They are also almost always useful.
Look at the trajectory, not just the snapshot. A single period’s results tell you very little. The pattern over six to twelve months tells you a great deal. Is new customer acquisition growing or shrinking as a proportion of total conversions? Is cost-per-new-customer trending up or down? Is the business growing faster or slower than the category? These are the questions that reveal whether marketing is genuinely contributing to growth.
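Turning "look at the trajectory" into a number can be as simple as fitting a trend line to the monthly new-customer share. A minimal sketch using only the standard library (Python 3.10+); the twelve monthly values are hypothetical.

```python
from statistics import linear_regression  # requires Python 3.10+

# Hypothetical monthly new-customer share of total conversions, oldest first.
monthly_new_share = [0.34, 0.33, 0.31, 0.32, 0.30, 0.29,
                     0.28, 0.29, 0.27, 0.26, 0.26, 0.25]

slope, intercept = linear_regression(range(len(monthly_new_share)),
                                     monthly_new_share)

# Slope is the change in new-customer share per month.
print(f"Trend: {slope * 100:+.2f} percentage points per month")
if slope < 0:
    print("New-customer share is shrinking: spend is increasingly "
          "recycling existing demand.")
```

The slope will never be the whole story, but a share of new customers that falls month after month is exactly the kind of pattern a single-period snapshot hides.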
Forrester’s research on agile marketing and scaling touches on a related challenge: organisations that optimise for speed and output often lose sight of whether the output is producing the right outcomes. The same trap exists in performance assessment.
The Honest Conversation Most Assessments Avoid
If businesses could retrospectively measure the true impact of their marketing on business performance, a lot of it would not survive the scrutiny. That is not a cynical view of marketing. It is an honest one. And it is the starting point for doing better.
I have judged marketing effectiveness awards, including the Effies, where the standard for entry is demonstrable business impact. The work that wins is not the work with the most sophisticated attribution model. It is the work where you can draw a credible line between the marketing activity and a business outcome that would not have happened otherwise. Most day-to-day performance reporting does not come close to meeting that standard.
The businesses that get this right tend to share a few characteristics. They are sceptical of their own dashboards. They ask uncomfortable questions about incrementality. They invest in measurement infrastructure not because it makes reporting easier but because it makes decision-making better. And they treat a performance assessment as a business conversation, not a marketing one.
Vidyard’s research on pipeline and revenue potential for go-to-market teams highlights how much unrealised value sits in the gap between marketing activity and actual revenue generation. The gap is almost always a measurement problem before it is anything else.
Common Patterns That Signal a Broken Assessment Process
After two decades of running and reviewing marketing operations, I have found that certain patterns reliably indicate a performance assessment process is producing comfortable numbers rather than useful ones.
Every channel claims a positive ROAS. When every channel in your mix is showing a return, and the sum of those returns vastly exceeds the business’s actual revenue growth, your attribution model is lying to you. Channels share credit for the same transactions. If you add up the attributed revenue across all channels and it is three times your actual revenue, the numbers are not real.
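That sanity check takes five minutes to automate. A minimal sketch with hypothetical channel figures; the 1.5x alert threshold is arbitrary and worth tuning to your own mix.

```python
# Hypothetical attributed revenue per channel vs. actual revenue, same period.
attributed_revenue = {
    "paid_search": 410_000,
    "paid_social": 280_000,
    "affiliate":   150_000,
    "email":       190_000,
}
actual_revenue = 520_000  # from the finance system, not the ad platforms

total_attributed = sum(attributed_revenue.values())
inflation = total_attributed / actual_revenue

print(f"Attributed: £{total_attributed:,} vs actual: £{actual_revenue:,}")
print(f"Credit inflation: {inflation:.1f}x")  # 2.0x on these numbers
if inflation > 1.5:  # arbitrary threshold; tune to your channel mix
    print("Channels are sharing credit for the same transactions; "
          "do not read these ROAS figures as causal.")
```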
New customer acquisition is invisible in the reporting. If your performance dashboard does not distinguish between new and returning customers, you cannot tell whether you are growing or recycling. This is a fundamental gap that most standard reporting setups do not address by default.
The assessment produces the same conclusions every quarter. If your performance review consistently identifies the same channels as winners and the same channels as underperformers, without that picture ever being challenged, you are probably measuring what you expect to find rather than what is actually happening.
Brand activity is excluded from the assessment entirely. This is common in businesses where performance marketing is managed separately from brand. The result is that brand investment is evaluated on a different standard, or not evaluated at all, which makes it easy to cut when budgets tighten and hard to defend when it is gone.
Semrush’s overview of growth approaches and their limitations is a useful reminder that tactical optimisation and genuine growth are not the same thing. A performance assessment that only measures the former will consistently overstate marketing’s contribution.
There is a broader point worth making here. The goal of a performance assessment is not to produce a report. It is to make better decisions. If the assessment process is not regularly surfacing uncomfortable findings and changing how budget is allocated, it is functioning as a reporting exercise rather than a decision-making tool. Those are very different things.
If you are working through how performance assessment fits into a broader growth strategy, the Go-To-Market and Growth Strategy hub covers the strategic context in more depth, including how measurement connects to market positioning and commercial planning.
What Good Looks Like
A well-run marketing performance assessment produces three things: a clear view of what is genuinely driving business outcomes, an honest picture of where spend is being wasted or misattributed, and a set of prioritised decisions about where to allocate resources differently.
It does not require a perfect measurement setup. Perfect measurement does not exist. What it requires is honest approximation: a willingness to ask hard questions about incrementality, to separate demand creation from demand capture, and to connect marketing activity to business results rather than just channel metrics.
The businesses I have seen do this well share one characteristic above all others: they are more interested in being right than in looking good. That sounds obvious. In practice, it is rarer than it should be.
BCG’s work on scaling and organisational effectiveness makes a point that applies directly here: the organisations that scale well are the ones that build honest feedback loops into their operating model. Marketing performance assessment is one of those feedback loops. When it works, it makes everything else in the marketing function work better.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
