The 70-20-10 Budget Rule: How to Apply It Without Wasting the 10%
The 70-20-10 rule is a marketing budget framework that splits spend into three buckets: 70% on proven, core activity that reliably drives results; 20% on emerging approaches with a reasonable evidence base; and 10% on genuine experimentation where failure is expected and acceptable. It gives budget allocation a structure that most finance directors can understand and most marketing teams can actually execute against.
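The split itself is simple arithmetic over a total budget. As a rough illustration (the function name and parameter defaults are my own shorthand, not a standard tool), a sketch might look like this:

```python
def split_budget(total, core=0.70, emerging=0.20, experimental=0.10):
    """Split a total marketing budget into the three 70-20-10 buckets.

    The default weights follow the classic rule; pass different
    proportions if your evidence base justifies another split.
    """
    if abs(core + emerging + experimental - 1.0) > 1e-9:
        raise ValueError("bucket proportions must sum to 1.0")
    return {
        "core": total * core,                   # proven channels, your own data
        "emerging": total * emerging,           # early signals, building a case
        "experimental": total * experimental,   # defined experiments, failure OK
    }

print(split_budget(1_000_000))
# {'core': 700000.0, 'emerging': 200000.0, 'experimental': 100000.0}
```

The proportions are deliberately parameters rather than constants: as the article argues below, the right split depends on your evidence base, not on the rule's round numbers.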
The framework has been around long enough to feel like received wisdom. Google popularised a version of it for innovation. Marketing teams adopted it for spend allocation. It appears in planning decks, agency proposals, and CFO presentations with enough regularity that it has almost become wallpaper. Which is exactly why most teams apply it wrong.
Key Takeaways
- The 70-20-10 framework only works if your “core 70%” is genuinely proven, not just familiar or comfortable.
- The 10% experimental bucket requires defined failure criteria before spend begins, not post-hoc rationalisation after it fails.
- Most teams treat the 20% as a second core bucket rather than a genuine stepping stone from experiment to scale.
- Budget splits should be reviewed at least quarterly, because a channel that earns 70% allocation this year may not deserve it next year.
- The framework is a discipline tool, not a strategy. It tells you how to allocate, not where to allocate.
If you are thinking about how budget decisions fit into a broader operational picture, the Marketing Operations hub covers the systems, processes, and commercial discipline that sit behind effective marketing execution.
What Does Each Bucket Actually Mean?
The labels sound simple. In practice, each bucket requires a specific definition or the whole framework collapses into a way of spending money you were already going to spend anyway.
The 70% is your proven core. These are channels and tactics with a clear, documented performance history in your specific business. Not channels that work for your industry in general. Not channels your agency has always recommended. Channels with your data, your customers, your margins behind them. Paid search for a brand with years of conversion data qualifies. A brand-new programmatic display test from last quarter does not.
I have sat in budget reviews where teams were calling activity “core” that had never been properly measured. The channel had been running for two years, which felt like proof. But two years of activity with no clean attribution, no control period, and no incremental measurement is not evidence. It is habit. The 70% bucket should be uncomfortable to fill honestly, because it forces you to confront what you actually know versus what you assume.
The 20% is where things start getting interesting. This is activity with early signals, a reasonable hypothesis, and enough evidence to justify meaningful investment but not enough to bet the majority of your budget on it. Think of a channel that has run for two or three quarters, is showing positive return but with wide confidence intervals, and needs more volume to prove itself at scale. The 20% bucket is where you build that case.
The 10% is pure experimentation. The expectation here is not that it works. The expectation is that you will learn something useful, and occasionally something in this bucket will perform well enough to graduate to the 20%. If everything in your 10% bucket is working, you are not experimenting. You are just doing more of what you already know.
Where Teams Go Wrong With the 70%
The most common failure mode I see is treating “familiar” as a proxy for “proven.” Teams pour 70% of budget into channels they have always used, without asking whether those channels still earn that allocation.
When I was running an agency and managing large performance budgets across multiple clients, one of the disciplines I tried to instil was zero-based thinking on the core bucket at least once a year. Not zero-based budgeting in the finance sense, but a genuine interrogation: if we were building this plan from scratch today with no legacy commitments, would we still put 70% here? Sometimes the answer was yes. Sometimes it revealed that a channel had been coasting on historical momentum while the market had moved.
Paid search is a good example. For many businesses, it deserves its place in the 70% bucket because the evidence base is strong and the commercial logic is sound. But paid search economics shift. Competition increases, CPCs rise, and the incremental return on the marginal pound of spend changes. A channel that earned 70% allocation three years ago may only deserve 50% today. The framework does not tell you that. Your data does, if you look at it honestly.
There is also a structural problem with how agencies present the 70%. Because agencies are compensated on spend under management, there is an inherent incentive to keep the core bucket large and stable. I have been on both sides of that dynamic. When I was agency-side, I understood the commercial pressure. When I was client-side, I learned to ask harder questions about whether the 70% recommendation was evidence-led or convenience-led. The answer was not always comfortable.
How to Run the 10% Without Wasting It
The 10% experimental bucket is where most teams either get it right or get it badly wrong. Getting it right means treating it like an investment portfolio with defined criteria. Getting it badly wrong means using it as a slush fund for whatever someone read about last week.
Early in my career, when I was still learning what good experimentation looked like, I saw a campaign launch with genuine excitement and almost no measurement infrastructure behind it. The team spent the budget, got some vanity metrics, declared it a success, and moved on. Nothing was learned. Nothing graduated to the 20%. The 10% had been spent without producing any intelligence. That is not experimentation. That is expensive entertainment.
Good experimental spend has three things defined before a single pound leaves the budget. First, a clear hypothesis: what do you believe, and why? Second, a measurement approach: how will you know if the hypothesis was right or wrong? Third, a graduation criterion: what would this channel or tactic need to show to earn a place in the 20% bucket next cycle?
Without those three things, you are not running experiments. You are running pilots with no landing strip.
Influencer marketing is a useful case study here. For many brands, it sits in the 10% bucket because the measurement infrastructure is still immature and the results are highly variable. Proper influencer planning requires the same rigour as any other channel: defined objectives, measurable outcomes, and a clear view of what success looks like before you start. When those elements are in place, influencer spend can graduate from experiment to core. When they are not, it tends to stay in the 10% indefinitely, cycling through new creators without ever building a real evidence base.
The 20% Is Not a Second Core Bucket
This is the subtler failure mode, and it is more common than the obvious ones. Teams treat the 20% as a place to put activity they like but cannot fully justify, rather than as a genuine stepping stone between experiment and scale.
The 20% should be in motion. Things should be graduating into it from the 10%, and things should be graduating out of it into the 70% (or being cut). If your 20% bucket looks exactly the same year after year, something has gone wrong. Either your experiments are not producing learnings, or you are not making the hard call to promote or cut based on what you find.
I saw this clearly when I was helping a business rationalise its channel mix after a period of rapid growth. The team had accumulated a collection of channels in the 20% range that had been there for two or three years. Nobody had made the call to promote them or kill them. They had become comfortable fixtures. When we looked at the data properly, some deserved to move up. Some deserved to be cut entirely. But the absence of a review process had let them drift, consuming budget that could have been deployed more decisively.
The discipline of the 20% is not just about what you put there. It is about how frequently you review it and whether you are willing to act on what you find. How your marketing team is structured affects this directly. If nobody owns the decision to graduate or cut a channel, it will not happen consistently.
How to Set the Splits for Your Specific Business
The 70-20-10 split is a starting point, not a law. The right proportions depend on where your business is in its lifecycle, how much commercial pressure you are under, and how mature your measurement infrastructure is.
A business in its first two years of operation may not have enough proven channels to justify 70% in the core bucket. If you only have six months of data on your best-performing channel, your core allocation should probably be lower and your experimental allocation higher. The framework needs to reflect your actual evidence base, not an aspirational one.
A mature business with years of clean performance data and stable channel economics can lean more heavily on the 70% with confidence. But even then, the review cycle matters. Markets shift, platforms change their algorithms, privacy regulations tighten. Privacy and data changes have already forced significant reallocation across digital channels for many businesses, and that pressure is not going away. What earned a place in the 70% two years ago may need to be reclassified as the measurement environment changes.
The other variable is risk tolerance, which is really a proxy for commercial pressure. A business under significant revenue pressure needs its 70% to be genuinely reliable. This is not the time to be generous with the definition of “proven.” Conversely, a business with strong margins and a long planning horizon can afford to be more aggressive with the 10% and more patient with the 20%.
Early in my career, I had a moment that crystallised this for me. I launched a paid search campaign for a music festival, and within roughly a day it had generated six figures of revenue from a relatively simple setup. The channel was new to me at the time, but the hypothesis was sound, the measurement was clean, and the results were unambiguous. That campaign graduated from experiment to core allocation faster than anything I had seen before, because the evidence was there immediately. Not every experiment gives you that kind of signal that quickly. But when it does, you need a framework that lets you act on it rather than waiting for the next annual planning cycle.
Making the Framework Operational
A budget framework is only as useful as the process it sits inside. If the 70-20-10 split is decided once a year in a planning meeting and never revisited, it will drift out of alignment with reality within a quarter.
The minimum viable process looks like this:
- Set the initial split at the start of the planning cycle, with clear definitions for what qualifies for each bucket.
- Review the 10% at every monthly performance meeting: what did you learn, and does anything need to graduate or be cut?
- Review the 20% quarterly: is anything ready to move up, and is anything that has been here too long without progress worth cutting?
- Review the 70% annually, or when there is a significant market shift that might change the evidence base.
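That cadence can be captured as a simple schedule check (the bucket names and month numbering are my own shorthand for illustration):

```python
# Review cadence in months: the 10% monthly, the 20% quarterly, the 70% annually.
REVIEW_EVERY_N_MONTHS = {"experimental": 1, "emerging": 3, "core": 12}

def buckets_due_for_review(month):
    """Return the buckets due for review in a given month of the planning cycle."""
    return [bucket for bucket, cadence in REVIEW_EVERY_N_MONTHS.items()
            if month % cadence == 0]

print(buckets_due_for_review(3))   # ['experimental', 'emerging']
print(buckets_due_for_review(12))  # ['experimental', 'emerging', 'core']
```

Trivial as it is, putting the cadence somewhere explicit (a shared calendar, a planning doc, or code like this) is what stops the annual-only review drift the framework otherwise invites.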
The inbound marketing process that sits behind most digital strategies benefits from this kind of structured thinking about where to invest and how to evaluate what is working. Understanding how inbound channels compound over time is directly relevant to how you classify them across the three buckets.
One thing I would add from experience: the 10% bucket needs a named owner. Not a committee. Not a working group. One person who is accountable for defining the experiments, running the measurement, and reporting the findings. When accountability is diffuse, experimental spend tends to become undisciplined. When one person owns it, you get cleaner hypotheses, better measurement, and harder decisions about what to do with the results.
The same applies to the graduation decisions. Who has the authority to move a channel from 10% to 20%, or from 20% to 70%? If that decision requires a committee sign-off every time, it will not happen at the pace the framework requires. Build the decision rights into the process, not just the budget split.
What the Framework Cannot Do
The 70-20-10 rule is a discipline tool. It tells you how to allocate budget across a risk spectrum. It does not tell you which channels belong in which bucket. It does not tell you how much total budget to spend. It does not replace channel strategy, audience insight, or commercial judgement.
I have seen teams use the framework as a substitute for strategic thinking rather than a complement to it. They fill the buckets with whatever they were already planning to do, label it 70-20-10, and present it as a disciplined approach. The structure is there. The thinking is not.
The framework also does not solve measurement problems. If you cannot measure whether your core channels are actually delivering incremental return, the 70% allocation is based on assumption rather than evidence. Improving your measurement infrastructure is a prerequisite for the framework to mean anything. Without it, you are just distributing budget with a more sophisticated-sounding rationale.
Forrester has written about the gap between marketing planning ambition and operational reality, and it is a real gap. Transforming planning from reactive to structured requires more than a framework. It requires the data infrastructure, the internal processes, and the commercial discipline to act on what the framework reveals.
That is not a reason to abandon the framework. It is a reason to be honest about what you are actually measuring before you decide what goes where.
For more on the systems and disciplines that make marketing budgets work in practice, the Marketing Operations hub covers the operational layer that sits behind effective budget management, from planning processes to performance reporting.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
