CMO Spend Survey: Where the Budget Goes
CMO spend surveys tell a consistent story: marketing budgets are under pressure, accountability is rising, and the gap between what CMOs want to invest in and what they can actually defend to the board keeps widening. The annual data points shift year to year, but the structural tensions underneath them rarely do.
What the surveys often miss is the more uncomfortable question: not how much CMOs are spending, but whether the money is being spent in the right places, for the right reasons, with any honest assessment of what it is actually doing.
Key Takeaways
- CMO budgets have been declining as a share of company revenue for several years, even as the remit of the role has expanded significantly.
- The split between brand and performance spend remains one of the most contested decisions in marketing, and most CMOs still skew it toward the measurable at the expense of the effective.
- Technology now consumes a disproportionate share of marketing budgets, often without a clear link between the tools being bought and the outcomes being delivered.
- Agencies and internal teams are frequently operating with briefs that are too vague to produce good work, which means a significant portion of spend is wasted before a single ad runs.
- The most valuable thing a CMO can do with a constrained budget is not to spread it more thinly, but to concentrate it where the evidence for impact is strongest.
In This Article
- What CMO Spend Surveys Actually Measure
- The Budget Compression Problem
- The Brand Versus Performance Imbalance
- Where the Technology Budget Is Actually Going
- The Brief Problem Nobody Talks About
- Agency Versus In-House: The Spend Debate That Keeps Shifting
- What the Spend Data Says About Channel Priorities
- How to Read CMO Spend Data Without Being Misled by It
What CMO Spend Surveys Actually Measure
Every year, a handful of respected organisations publish data on how CMOs are allocating their budgets. Gartner’s annual CMO Spend Survey is probably the most cited. It tracks budget as a percentage of company revenue, the split between brand and performance, the proportion going to technology, agencies, and internal teams, and the shifting priorities across channels.
The data is useful. But it is worth being clear about what it measures: reported intentions and allocations, not outcomes. A CMO who says 40% of their budget is going to brand building is not necessarily producing effective brand building. A CMO who reports heavy investment in marketing technology is not necessarily getting value from it. The surveys capture inputs. What happens after the money is committed is a different question entirely.
That distinction matters more than it might seem. When you look at CMO spend data without that caveat, it is easy to treat the numbers as a benchmark for what good looks like. In reality, the averages in any spend survey include a wide range of effectiveness, from genuinely well-allocated budgets to money being spent out of habit, political convenience, or a failure to challenge the status quo.
The Budget Compression Problem
Marketing budgets as a share of company revenue have been declining. This is not a new trend, but it has accelerated in recent years as CFOs have pushed harder for accountability and boards have become more sceptical of marketing spend that cannot be directly tied to commercial outcomes.
I have sat in enough budget reviews to know how this plays out in practice. The CFO wants to see the return on every pound committed. The CMO knows that some of the most important marketing work, the kind that builds brand preference over time, does not produce a clean ROI number in the current quarter. So the CMO either defends the spend poorly, loses it, and shifts further toward performance channels that produce trackable numbers, or they find a way to articulate the commercial logic of longer-term investment. Most end up doing the former.
The result is a slow drift toward what is measurable rather than what is effective. Performance budgets hold up better in budget reviews because they come with numbers attached. Brand budgets get squeezed because the evidence for their impact is harder to present in a spreadsheet. Over time, this creates a structural imbalance that shows up in the spend surveys as a shift toward lower-funnel activity, without anyone having made a deliberate strategic decision to deprioritise brand.
If you are thinking about the broader career context behind these pressures, the Career and Leadership in Marketing hub covers the commercial and organisational dynamics that shape how CMOs operate, including how budget decisions interact with tenure, board relationships, and the measurement problem.
The Brand Versus Performance Imbalance
One of the most persistent findings across CMO spend surveys is the tension between brand and performance investment. For a period, the industry consensus shifted heavily toward performance, driven by the promise of digital attribution and the appeal of spending money only when you could see a direct result.
I spent years overvaluing lower-funnel performance activity. When I was running agency teams focused on paid search and programmatic, the numbers looked compelling. Cost per acquisition was trackable. Return on ad spend was calculable. It felt like accountability. What I eventually came to understand is that much of what performance marketing gets credited for was going to happen anyway. You are often capturing intent that already existed, not creating it.
Think about it this way: if someone has already decided they want to buy a specific product, they will find it. The performance channel that intercepts them at the moment of purchase looks efficient in the data. But the work that made them want the product in the first place, the brand campaign, the word of mouth, the long-term presence in their category, gets no credit in the attribution model because it happened weeks or months earlier and left no trackable digital footprint.
The analogy I keep coming back to is a clothes shop. Someone who walks into a changing room is already ten times more likely to buy than someone browsing the rails. But the job of the shop floor, the display, the window, the brand reputation, is to get people into the changing room in the first place. Optimising only for the changing room moment is a short-term strategy that eventually runs out of people to convert.
CMO spend surveys are beginning to reflect a correction here. There is growing recognition that the balance shifted too far toward performance, and that brand investment needs to recover. But the pace of that recovery is slow, partly because the measurement infrastructure for brand is still less developed than for performance, and partly because CFOs remain more comfortable with numbers they can see in real time.
Where the Technology Budget Is Actually Going
Marketing technology now accounts for a significant portion of CMO budgets, and the trend has been upward for most of the last decade. Martech stacks have grown in complexity, with many large organisations running dozens of tools across CRM, analytics, personalisation, content management, paid media, and data infrastructure.
The problem is not that CMOs are investing in technology. The problem is that a large proportion of that investment is not delivering a clear return. Tools get bought because a vendor made a compelling case, because a competitor was using them, or because the previous CMO committed to a multi-year contract that nobody has reviewed since. The technology budget in many organisations is not a strategic allocation. It is an accumulation of individual decisions made at different points in time, by different people, under different priorities.
When I was running an agency that grew from around 20 people to over 100, one of the most important commercial disciplines we developed was a rigorous annual review of every technology commitment. Not just whether the tool was being used, but whether the outcomes it was supposed to produce were materialising. That discipline is rarer than it should be on the client side. Most CMOs inherit a technology stack they did not choose and do not have the bandwidth to audit properly.
The spend surveys capture the headline number. They do not capture how much of that technology spend is genuinely productive versus how much is sunk cost that nobody has had the time or political capital to challenge.
The Brief Problem Nobody Talks About
There is a category of waste in marketing budgets that almost never appears in spend surveys because it is invisible in the data. It is the waste that happens before a campaign runs, before an agency produces a single piece of work, before a pound of media budget is committed. It is the waste created by bad briefs.
A vague brief produces vague work. A brief that does not articulate a clear commercial objective produces campaigns that look active but achieve nothing. A brief that tries to do too many things at once produces work that does none of them well. I have reviewed hundreds of briefs over the course of my career, and the quality of briefing is consistently one of the weakest links in the marketing process, on both the agency and client side.
The industry spends considerable time discussing the carbon footprint of ad serving and the environmental impact of digital advertising. Those are legitimate concerns. But the strategic waste created by poor briefing, by campaigns that were never properly focused, by creative that did not have a clear job to do, dwarfs the efficiency gains from any carbon reduction initiative. If you want to make marketing more effective and more efficient simultaneously, start with the brief. The downstream impact on budget utilisation is substantial.
This is not a popular observation because it implicates both agencies and clients. Agencies sometimes prefer a vague brief because it gives them creative latitude. Clients sometimes write vague briefs because they have not done the strategic work to know what they actually want. The result is spend that looks legitimate in a budget review but produces little commercial value.
Agency Versus In-House: The Spend Debate That Keeps Shifting
CMO spend surveys consistently track the balance between agency spend and in-house capability investment. The trend over the last decade has been toward bringing more capability in-house, driven by a combination of cost pressure, data ownership concerns, and the belief that internal teams have better context for the business.
Having spent most of my career on the agency side, I have a clear-eyed view of this. The case for in-housing is real in certain areas. Programmatic buying, content production, and some forms of social media management can be done more efficiently and with better data access when they sit inside the organisation. But the case for in-housing everything is not as strong as the trend might suggest.
The most valuable thing a good agency brings is not execution capacity. It is external perspective. An internal team, however talented, is subject to the same organisational assumptions, the same political pressures, and the same cognitive biases as everyone else in the business. The best agency relationships I have seen work because the agency is genuinely willing to challenge the client’s thinking, not just execute what they are told.
The spend surveys show the headline shift. What they do not show is whether the in-housing trend is producing better marketing outcomes, or whether it is simply moving costs from one line on the budget to another while reducing the quality of external challenge that keeps marketing honest.
What the Spend Data Says About Channel Priorities
Channel allocation is one of the most granular areas covered by CMO spend surveys, and it is also one of the most volatile. The relative weighting between paid search, social, display, video, audio, out-of-home, and other channels shifts year to year in response to platform changes, audience behaviour, and the latest thinking on what works.
A few structural observations are worth making. Paid social has grown significantly as a share of digital spend, driven partly by the targeting capabilities of the major platforms and partly by the rise of short-form video as a format. Paid search remains a dominant channel for most categories, though the economics have changed as competition for high-intent keywords has intensified and costs have risen. Connected TV and audio are growing from a smaller base. Out-of-home has recovered strongly since the pandemic period.
What the channel data in spend surveys rarely captures is the interaction effects between channels. The effectiveness of any individual channel is partly a function of what else is running alongside it. A paid search campaign performs better when there is strong brand awareness in the market. A social campaign performs better when the creative has been tested and refined. Treating channels as independent budget lines, which is how most spend surveys present the data, encourages a siloed approach to allocation that underestimates how channels work together.
For CMOs thinking seriously about how budget decisions connect to leadership credibility and commercial outcomes, there is a broader body of thinking worth engaging with. The marketing leadership resources at The Marketing Juice cover the strategic and organisational dimensions of these decisions, not just the tactical ones.
How to Read CMO Spend Data Without Being Misled by It
The most useful thing you can do with CMO spend survey data is treat it as a prompt for questions rather than a source of answers. When a survey tells you that the average CMO is allocating a certain percentage to digital, the right question is not whether your allocation matches that figure. The right question is whether your allocation reflects the specific commercial context of your business, your category, and your customers.
Benchmarking against industry averages is a reasonable starting point. It becomes a problem when it replaces strategic thinking. I have seen marketing teams justify budget decisions entirely on the basis that a competitor or industry average supported the allocation, without ever asking whether the allocation was producing the outcomes the business needed. The survey data becomes a substitute for accountability rather than a tool for improving it.
The most commercially grounded CMOs I have worked with use spend data in a specific way. They look at the averages to understand the range of what their peers are doing. They look at the outliers to understand what different approaches look like. And then they make decisions based on their own evidence about what is working in their specific context, not on what the median CMO is doing.
That approach requires honest measurement, not perfect measurement. Marketing does not need false precision. It needs a genuine attempt to understand which investments are driving commercial outcomes and which are not, and the organisational courage to act on that understanding even when it means challenging established spend patterns.
Having judged the Effie Awards, I have seen the full range of what marketing effectiveness looks like in practice. The campaigns that win are not always the ones with the biggest budgets. They are the ones where someone made a clear decision about what the marketing needed to achieve, built a strategy around that decision, and executed it with discipline. Budget size matters less than budget focus. That observation does not appear in any spend survey, but it is probably the most commercially useful thing you can take from looking at the data.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
