Marketing Plan Performance: How to Know If It Worked
Evaluating how a specific marketing plan performed means more than pulling a dashboard and checking whether spend matched budget. It means asking whether the plan moved the business, reached the right people, and created conditions for future growth, not just whether the numbers looked acceptable at month end.
Most marketing reviews I have seen in 20 years answer the wrong question. They confirm activity rather than assess impact. This article sets out a more honest framework for post-plan evaluation, one that distinguishes between what the plan actually caused and what was going to happen anyway.
Key Takeaways
- Evaluating a marketing plan means separating what the plan caused from what would have happened without it, a distinction most reviews skip entirely.
- Lower-funnel metrics capture existing demand. A plan that only scores well on conversion data may have added little incremental value to the business.
- The most useful post-plan evaluation asks three questions: did we reach new audiences, did we change behaviour, and did the commercial outcome justify the investment?
- Honest evaluation requires a baseline, not just a target. Without knowing where you started and what external factors were in play, your results are almost impossible to interpret.
- Plans that look like they worked on paper sometimes mask structural business problems that marketing cannot fix and should not be asked to.
In This Article
- Why Most Marketing Post-Mortems Get It Wrong
- What a Baseline Actually Looks Like
- The Three Questions That Actually Matter
- Channel-Level Evaluation vs. Plan-Level Evaluation
- When the Plan Looks Like It Worked but the Business Has a Different Problem
- How to Structure the Evaluation Itself
- The Role of Agility in Mid-Plan Evaluation
- What Good Evaluation Changes About the Next Plan
Why Most Marketing Post-Mortems Get It Wrong
Earlier in my career I spent several years in performance marketing, and I was very good at reporting results that looked impressive. ROAS was up. CPA was down. Conversion rate had improved quarter on quarter. The client was happy. I was happy. Everyone in the room was happy.
What I was less good at, in those years, was asking whether any of it was actually caused by the marketing. A lot of what performance channels get credited for is demand that already existed. Someone who was going to buy regardless searched a branded term, clicked a retargeting ad, and got counted as a conversion. The ad spend got the credit. The business got a slightly inflated sense of its own marketing effectiveness.
Post-plan reviews that rely primarily on last-click attribution or platform-reported ROAS are not evaluating whether the plan worked. They are confirming that money was spent and some sales occurred in the same period. Those two things are not the same.
A genuinely useful evaluation has to be more uncomfortable than that. It has to ask: what would have happened if we had not run this plan at all? It has to account for seasonality, competitor behaviour, category tailwinds, and the baseline trajectory the business was already on. Without that context, you cannot honestly assess performance.
If you are thinking about how post-plan evaluation fits into a broader commercial strategy, the go-to-market and growth strategy hub covers the wider framework, including how to set plans up for evaluation before they launch.
What a Baseline Actually Looks Like
Before you can evaluate a plan, you need a baseline. This sounds obvious. In practice, fewer than half the briefs I received during my years in agency leadership included one.
A baseline is not your target. It is a realistic projection of what would have happened without the marketing activity. It should account for organic growth trends, historical seasonality, any planned price changes or distribution shifts, and relevant external factors like category growth or macroeconomic conditions.
When I was running an agency that grew from around 20 people to over 100 during a period of sustained new business growth, one of the things I learned quickly was that a rising market can make average marketing look excellent. When the category is growing at 15% annually and your client grows at 12%, that is not a marketing success story. It is a slight underperformance dressed up with a positive number.
Conversely, holding flat in a declining category can represent a genuine marketing achievement, but it will never look like one if your evaluation framework only measures absolute sales figures against a target set at the start of the year.
Build your baseline before the plan runs. If you are evaluating retrospectively, reconstruct it as honestly as you can using pre-campaign trend data, market indices, and any available competitor benchmarks. It will not be perfect. It does not need to be. It needs to be honest.
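To make that concrete, here is a rough sketch in Python of one way to project a baseline from pre-campaign trend data and a seasonal index, then read incrementality off the gap between actual and projected. All the figures, and the `build_baseline` function itself, are invented for illustration; the point is the shape of the calculation, not the specific numbers or method.

```python
# A minimal baseline sketch: trailing trend plus a simple seasonal index.
# The numbers are invented; a real baseline would use your own
# pre-campaign sales history and category data.

def build_baseline(history, seasonal_index, periods):
    """Project sales forward using average period-on-period growth
    from pre-campaign history, adjusted by a seasonal index."""
    growth_rates = [b / a for a, b in zip(history, history[1:])]
    avg_growth = sum(growth_rates) / len(growth_rates)
    baseline, last = [], history[-1]
    for t in range(periods):
        last = last * avg_growth
        baseline.append(last * seasonal_index[t % len(seasonal_index)])
    return baseline

pre_campaign = [100, 103, 104, 108, 110, 112]  # monthly sales before launch
seasonality = [1.0, 0.95, 1.10]                # illustrative monthly indices

baseline = build_baseline(pre_campaign, seasonality, periods=3)
actual = [125, 118, 140]                       # observed during the plan

incremental = [a - b for a, b in zip(actual, baseline)]
print([round(x, 1) for x in incremental])      # what the plan may have added
```

Even a crude projection like this forces the right conversation: the question stops being "did we hit the target?" and becomes "did we beat what was already going to happen?"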
The Three Questions That Actually Matter
Over time I have reduced post-plan evaluation to three questions. Everything else either feeds into one of these or is a vanity metric dressed up as analysis.
Did the plan reach people who were not already going to buy?
This is the hardest question, and the one most evaluations avoid. Reach metrics in isolation tell you nothing about whether you reached new audiences or just served ads to people who were already in-market. Share of search data, brand tracking surveys, and new-to-brand customer analysis can all help, but they require investment in measurement infrastructure that many businesses have not built.
I use a simple analogy when explaining this to clients. Think about a clothes shop. Someone who has already picked something up and tried it on is far more likely to buy than someone browsing the rails. If your marketing is only talking to people who are already holding the product, you are not building a pipeline. You are just adding friction to a sale that was already happening. Real growth requires reaching people earlier, when they are still on the rails, not already in the changing room.
Plans that score well on this question tend to have invested meaningfully in upper-funnel activity, brand awareness, and new audience acquisition, not just retargeting and branded search capture. Growth strategies that have scaled effectively almost always show evidence of new audience development rather than pure conversion optimisation.
Did the plan change behaviour?
Behaviour change is the mechanism through which marketing creates commercial value. Did people who saw the campaign buy more frequently? Did they buy a higher-margin product? Did they recommend the brand to someone else? Did lapsed customers return?
This is distinct from whether sales went up. Sales can go up because of a price promotion, a competitor going out of stock, or a seasonal spike that had nothing to do with your activity. Behaviour change is more durable. It shows up in repeat purchase rates, basket size, category penetration, and net promoter data over time.
When I have judged effectiveness awards, the entries that stand out are not the ones with the biggest headline sales numbers. They are the ones that can show a causal chain: the plan reached these people, changed this belief or behaviour, and that change produced this commercial outcome. That chain is hard to build. It is also the only thing that proves the marketing actually worked.
Did the commercial outcome justify the investment?
This seems like the most straightforward question. It is rarely answered honestly. Most ROI calculations in marketing use revenue rather than profit, ignore the cost of goods, and attribute all sales in the period to the campaign regardless of what else was happening. That is not ROI analysis. It is optimistic accounting.
A more honest calculation takes incremental gross profit generated by the plan, subtracts the fully-loaded cost of the marketing activity including agency fees, production, and internal resource, and then asks whether the net figure is better than what the same capital would have returned in its next best use. That is a harder test. It is the right one.
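As a worked example of that calculation, the sketch below nets incremental gross profit against fully loaded costs and compares the result to a next-best-use hurdle. Every figure is hypothetical, and the hurdle rate is whatever your finance team would actually apply, not the number I have used here.

```python
# Honest ROI sketch: incremental gross profit, fully loaded costs,
# and a comparison against the capital's next best use.
# All figures are hypothetical.

incremental_revenue = 500_000   # revenue above baseline, not total revenue
gross_margin = 0.40             # converts revenue to gross profit

media_spend = 120_000
agency_fees = 30_000
production = 25_000
internal_resource = 15_000      # often left out of marketing ROI entirely

incremental_gross_profit = incremental_revenue * gross_margin
fully_loaded_cost = media_spend + agency_fees + production + internal_resource
net_return = incremental_gross_profit - fully_loaded_cost

# The harder test: did the plan beat the next best use of the same capital?
hurdle_rate = 0.10              # illustrative return on an alternative use
opportunity_cost = fully_loaded_cost * hurdle_rate

print(f"Net return: {net_return:,.0f}")
print(f"Beats next best use: {net_return > opportunity_cost}")
```

Notice how different this is from dividing platform-reported revenue by media spend: margin, fees, production, and internal time all come out before anything is declared a return.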
For longer sales cycles or brand-building activity, a single-period ROI calculation will always look poor. That is a legitimate measurement challenge, not a reason to avoid the question. The answer is to be explicit about time horizon, not to substitute engagement metrics for commercial ones.
Channel-Level Evaluation vs. Plan-Level Evaluation
One of the most common mistakes in post-plan reviews is evaluating each channel in isolation and then summing the results. This produces a number that is almost always inflated, because it counts the same customer experience multiple times and ignores the interactions between channels.
A customer might see a display ad, then a social post, then search a branded term and convert through paid search. Three channels will each claim some version of that conversion. Add them up and you have attributed one sale three times. The plan looks like it generated three times the value it actually did.
Plan-level evaluation treats the marketing activity as a system, not a collection of independent tactics. It asks what the total incremental commercial outcome was, and then tries to understand which elements of the plan contributed most to that outcome. Marketing mix modelling does this at scale. For smaller budgets, a simpler approach is to look at periods where individual channels were paused and assess what happened to overall performance in those windows.
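For teams without the budget for mix modelling, the pause-window comparison can be sketched roughly as below. The weekly figures are invented, and this is a blunt instrument: it only tells you something if the paused weeks are genuinely comparable to the active ones, which is exactly why the baseline column matters.

```python
# Pause-window sketch: compare performance during windows where a channel
# was paused against comparable active windows, relative to baseline.
# Dates and figures are invented for illustration.

# (week, channel_active, total_sales, baseline_sales)
weeks = [
    ("W1", True,  210, 180),
    ("W2", True,  205, 182),
    ("W3", False, 195, 181),   # channel paused
    ("W4", False, 198, 183),   # channel paused
    ("W5", True,  212, 184),
]

def avg_lift(rows):
    """Average sales lift over baseline across the given weeks."""
    lifts = [(sales - base) / base for _, _, sales, base in rows]
    return sum(lifts) / len(lifts)

active = [w for w in weeks if w[1]]
paused = [w for w in weeks if not w[1]]

lift_active = avg_lift(active)
lift_paused = avg_lift(paused)

# A rough read on incrementality: the channel's contribution is the gap
# between lift in active weeks and lift in paused weeks.
print(f"Lift with channel on:  {lift_active:.1%}")
print(f"Lift with channel off: {lift_paused:.1%}")
print(f"Implied channel contribution: {lift_active - lift_paused:.1%}")
```

The output is an estimate, not proof. But an estimate grounded in what actually happened when the channel was off is far more honest than summing three platforms' claims on the same sale.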
The BCG work on go-to-market strategy makes the point that commercial outcomes depend on how all the elements of a plan interact, not on any single channel performing in isolation. That framing applies directly to how you evaluate performance after the fact.
When the Plan Looks Like It Worked but the Business Has a Different Problem
I have turned around two loss-making businesses in my career. In both cases, the marketing was not the primary problem. The product had issues, the pricing was wrong, or the customer experience was poor enough that marketing was essentially filling a leaking bucket. Spend more, acquire more, lose more at a faster rate.
Marketing plans can appear to perform well on their own metrics while the business continues to struggle. Acquisition numbers look good. Churn is quietly accelerating. The post-plan review congratulates everyone on a successful campaign while the CFO is looking at customer lifetime value data and wondering why growth is not translating into profit.
A complete post-plan evaluation always includes retention data. If the plan acquired customers who did not stay, the plan did not work, regardless of what the acquisition metrics say. Marketing that genuinely works creates customers who come back, spend more over time, and tell other people. Marketing that just moves people through a funnel once is a cost centre dressed up as a growth engine.
This is why I have always been sceptical of plans that treat customer acquisition as the primary success metric without equal weight on what happens to those customers afterwards. The most commercially effective marketing I have seen operates on the assumption that a delighted customer is worth ten acquired ones. If the product or service is not delivering that delight, no post-plan review will fix it, but an honest one will name it.
How to Structure the Evaluation Itself
A post-plan evaluation should be a structured document, not a slide deck optimised for the client relationship. It should be honest enough to be useful and specific enough to inform the next plan. Here is how I approach it.
Start with the original objectives. Not the KPIs, the objectives. What was the plan actually trying to achieve for the business? If that was not written down clearly at the outset, your first job is to reconstruct it honestly rather than reverse-engineer objectives from the results you got.
Then document the baseline. What was the business trajectory before the plan launched? What external factors were in play during the campaign period? What did competitors do? What happened in the category?
Then assess the three questions above: new audience reach, behaviour change, and commercial return. For each one, be explicit about what you can measure with confidence, what you are estimating, and what you genuinely do not know. Honest approximation is more useful than false precision.
Finally, draw conclusions that are actionable for the next plan. Not “the campaign performed well overall” but “upper-funnel investment drove measurable new audience reach and we should increase it by X%; retargeting spend showed diminishing returns above Y threshold and should be capped.”
Tools that help with this process, from attribution modelling to competitive intelligence, are worth understanding properly. A well-chosen set of growth tools can give you the data infrastructure to make these evaluations more rigorous over time, but the tools are only as good as the questions you bring to them.
For teams building this kind of evaluation into a repeatable process, the go-to-market and growth strategy section of this site covers how to connect planning, execution, and measurement into a coherent commercial framework rather than treating them as separate exercises.
The Role of Agility in Mid-Plan Evaluation
Evaluation does not only happen at the end of a plan. The most commercially effective teams I have worked with build in formal checkpoints during execution, not to optimise click-through rates, but to assess whether the plan is on track to deliver the business outcome it was designed for.
This requires a different kind of discipline. Mid-plan reviews are often dominated by channel performance data, which is available in real time and feels urgent. The more important question, whether the plan is reaching the right people and changing the right behaviours, is harder to assess in-flight and therefore tends to get less attention.
Forrester’s work on agile marketing highlights the tension between speed and rigour in iterative planning. The teams that manage this well are the ones that have agreed in advance what a meaningful mid-plan signal looks like, and what they will do if they see it. That agreement has to happen before the plan launches, not during the review meeting.
One practical approach is to identify two or three leading indicators that correlate with the lagging commercial outcome you care about. Brand search volume, share of voice in a specific category, or new-to-brand customer acquisition rate can all serve as early signals that the plan is building toward the outcome you want, or that it is not. Monitoring those indicators during the plan gives you something meaningful to react to, rather than just optimising toward whatever the platform dashboards are showing.
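Before relying on a candidate leading indicator mid-plan, it is worth checking how well it has historically tracked the lagging outcome you care about. A minimal sketch, assuming you hold weekly history for both series and are running Python 3.10 or later:

```python
# Leading-indicator sketch: test how strongly a candidate signal
# (e.g. brand search volume) correlates with a lagging commercial
# outcome when shifted forward by a chosen lag. Data is illustrative.

from statistics import correlation  # requires Python 3.10+

brand_search = [80, 85, 90, 88, 95, 100, 104, 110]   # weekly index
revenue      = [400, 405, 420, 430, 428, 445, 460, 470]

def lagged_correlation(leading, lagging, lag):
    """Correlate the leading series against the lagging series
    shifted `lag` periods later."""
    return correlation(leading[:-lag], lagging[lag:])

# If brand search two weeks ago tracks revenue now, it may be a usable
# mid-plan signal; a weak correlation says look for a better indicator.
for lag in (1, 2, 3):
    r = lagged_correlation(brand_search, revenue, lag)
    print(f"lag={lag}: r={r:.2f}")
```

Correlation over a handful of weeks proves nothing on its own, of course. But it is a cheap first filter, and it pushes the team to agree on the evidence before the plan launches rather than arguing about it in the review meeting.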
What Good Evaluation Changes About the Next Plan
The purpose of evaluating a marketing plan is not to produce a report. It is to make the next plan better. That sounds obvious. In practice, the link between evaluation and planning is often weak, because the people doing the evaluation are not always the people building the next plan, and because organisations tend to move on quickly once a campaign has ended.
The most valuable output of a post-plan evaluation is a set of specific, evidence-based decisions about what to do differently. Not “we should invest more in brand” but “brand awareness among 25-34 year olds in our target geography was 12 points below our nearest competitor at the start of this plan and has not materially improved, which suggests our upper-funnel spend was either insufficient or misallocated.” That is a finding you can act on.
When I was managing large-scale ad spend across multiple clients and categories, the plans that compounded over time were the ones where each evaluation genuinely informed the next brief. The teams that treated post-mortems as a compliance exercise, something to file and move on from, tended to repeat the same mistakes at increasing scale. The teams that treated evaluation as the most important part of the planning cycle got better every year.
For organisations thinking about how evaluation connects to launch planning and market entry decisions, the BCG framework for product launch planning offers a useful structure for thinking about how measurement should be designed before a plan goes live, not retrofitted after it ends.
Similarly, as more brands build creator and partnership activity into their go-to-market plans, evaluating those channels requires the same rigour as any other. Later’s work on creator-led go-to-market campaigns is a useful reference for how to think about measuring creator activity against commercial outcomes rather than just reach and engagement.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
