Market Sizing: Stop Guessing, Start Calculating
Market sizing is the process of estimating the total addressable demand for a product or service, typically expressed as a revenue figure or customer count. Done well, it gives you a defensible foundation for budget decisions, go-to-market planning, and investment cases. Done badly, it gives you a number that sounds impressive in a deck but collapses the moment anyone asks how you got there.
Most market sizing is done badly. Not because marketers lack the tools, but because the incentives push toward optimism rather than accuracy. That is worth fixing.
Key Takeaways
- TAM, SAM, and SOM are only useful when each figure is built from real assumptions, not reverse-engineered from a desired outcome.
- Top-down and bottom-up sizing produce different numbers for good reasons. Running both and reconciling the gap is where the real insight lives.
- Market size is a range, not a point. Presenting a single number without bounding assumptions is a red flag in any planning document.
- The most common market sizing error is confusing the total market with the reachable market. They are rarely the same figure.
- A market sizing exercise that does not change at least one strategic decision was probably not worth doing.
In This Article
- Why Market Sizing Gets Treated as a Formality
- What TAM, SAM, and SOM Actually Mean
- Top-Down vs Bottom-Up: Which Method to Use
- The Data Sources That Are Actually Worth Using
- How to Build Assumptions That Hold Up to Scrutiny
- Common Market Sizing Errors and How to Avoid Them
- How Market Sizing Should Connect to Go-to-Market Decisions
- Sizing Emerging Markets Where Data Is Thin
- What a Good Market Sizing Document Actually Looks Like
Why Market Sizing Gets Treated as a Formality
I have sat in enough planning sessions to know what usually happens. Someone is asked to size a market. They find an industry report, pull a headline number, apply a rough percentage, and call it done. The number goes into a slide. Nobody interrogates it. The plan gets approved. Eighteen months later, the revenue targets are missed and everyone is surprised.
The problem is that market sizing is treated as a box to tick rather than a question to answer. It appears in business cases because investors and finance teams expect it, not because the person building the plan genuinely used it to make a decision. That is the wrong order of operations.
When I was running agency growth planning, we would sometimes inherit client briefs that included market size figures from third-party reports. The numbers looked authoritative. They had decimal points and source citations. But when we traced them back, they were often based on survey data from a different geography, a different product category, or a different point in time. The decimal points were doing a lot of work to disguise a fairly weak foundation.
Good market sizing starts with a different question. Not “how big is the market?” but “how big is the market for us, right now, given our actual constraints?” That reframe changes everything about how you approach the exercise.
If you want to build stronger market intelligence habits across your planning work, the Market Research and Competitive Intel hub covers the broader landscape of tools and methods that sit alongside market sizing.
What TAM, SAM, and SOM Actually Mean
The TAM, SAM, SOM framework is the standard starting point for market sizing, and it is genuinely useful when applied correctly. Most people know the acronyms. Fewer people use them with any precision.
Total Addressable Market is the total revenue opportunity if you captured 100% of the market with no constraints. It is a theoretical ceiling, not a planning target. It is useful for understanding the scale of a category, but it is almost never a number your business will approach.
Serviceable Addressable Market is the portion of TAM you can realistically serve given your current product, geography, and distribution. If you sell accounting software to small businesses in the UK, your SAM is not the global enterprise software market. It is the subset of businesses that match your product, in the markets you can reach, at the price point you can support.
Serviceable Obtainable Market is the share of SAM you can realistically capture given your competitive position, sales capacity, and marketing budget over a defined time horizon. This is the number that should drive your revenue targets, your headcount plan, and your channel investment.
The error most teams make is spending too much time on TAM and too little time on SOM. TAM is the easy number. It is big, it impresses stakeholders, and it requires almost no rigour to produce. SOM is the hard number. It requires you to be honest about your constraints, your competitive position, and your execution capacity. That honesty is uncomfortable, which is why it gets avoided.
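The relationship between the three figures is simple arithmetic, which makes it easy to sketch. In the example below, every number (the TAM figure, the filters, the obtainable share) is invented purely for illustration:

```python
# Illustrative TAM -> SAM -> SOM funnel. Every figure is a placeholder.

TAM = 2_000_000_000  # total category revenue, e.g. from an industry report (£)

# SAM: narrow TAM to what your product, geography, and pricing can serve.
geography_share = 0.30   # share of spend in markets you can reach (assumption)
segment_fit = 0.40       # share matching your product and price point (assumption)
SAM = TAM * geography_share * segment_fit

# SOM: the share of SAM you can realistically win over a defined horizon,
# given sales capacity and competitive position (assumption).
obtainable_share = 0.05
SOM = SAM * obtainable_share

print(f"TAM: £{TAM:,.0f}")
print(f"SAM: £{SAM:,.0f}")  # roughly £240m on these inputs
print(f"SOM: £{SOM:,.0f}")  # roughly £12m: the number that should drive targets
```

The point of writing it out is that each multiplier is an assumption you have to defend, and the final SOM figure is only as honest as the least-evidenced one.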
BCG has written usefully about the relationship between data rigour and strategic decision-making, and the same principle applies here. Their work on bridging the trust gap in data and analytics is a reminder that the quality of your inputs determines the quality of your outputs, regardless of how sophisticated your framework looks.
Top-Down vs Bottom-Up: Which Method to Use
There are two primary methods for building a market size estimate, and the most useful thing you can do is run both.
Top-down sizing starts with a macro figure, typically from an industry report, government data, or trade body, and works down through a series of filters to arrive at your addressable segment. You might start with total UK retail spend, filter by category, filter by channel, and filter by your target customer profile. Each filter reduces the number. The final figure is your estimated market.
The advantage of top-down is speed. The disadvantage is that you are only as good as your source data, and most source data has hidden assumptions baked in. Industry reports are often based on surveys with limited sample sizes, extrapolated to look more authoritative than they are. The filters you apply also introduce compounding error. If each filter is slightly off, the final number can be significantly wrong.
Bottom-up sizing starts from the customer level. You estimate the number of potential customers who match your target profile, multiply by average transaction value or annual spend, and build up to a total. This approach forces you to be specific about who you are actually selling to, which is its main advantage.
When I was working on growth strategy for a performance marketing business, we were trying to size the opportunity in a specific vertical. The top-down number from available reports looked healthy. When we built it from the bottom up, using actual customer data, conversion rates, and realistic sales cycle assumptions, the number was about 40% smaller. That gap was not a problem. It was information. It told us that the market was real but that our planning needed to reflect actual constraints, not theoretical ones.
Running both methods and reconciling the difference is where the real analytical work happens. If your top-down and bottom-up estimates are close, you have reasonable confidence. If they diverge significantly, you have a question worth investigating before you commit budget to a plan built on one of them.
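Running the two methods side by side takes only a few lines. All of the inputs in this sketch are hypothetical; the useful part is the reconciliation step at the end:

```python
# Minimal sketch of top-down and bottom-up estimates run side by side.
# Every input below is a placeholder for illustration.

# Top-down: a macro figure narrowed by successive filters.
macro_spend = 500_000_000      # total category spend from a report (£, assumption)
filters = [0.40, 0.50, 0.60]   # channel, geography, and customer-profile filters
top_down = macro_spend
for f in filters:
    top_down *= f              # each filter compounds, and so does its error

# Bottom-up: customer count multiplied by average annual spend.
target_customers = 20_000      # businesses matching the target profile (assumption)
annual_spend = 4_000           # average annual spend per customer (£, assumption)
bottom_up = target_customers * annual_spend

gap = abs(top_down - bottom_up) / max(top_down, bottom_up)
print(f"Top-down:  £{top_down:,.0f}")
print(f"Bottom-up: £{bottom_up:,.0f}")
print(f"Gap: {gap:.0%}")  # a large gap is a question to investigate, not a failure
```

On these invented inputs the two estimates land 25% apart, which is exactly the kind of divergence worth investigating before committing budget.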
The Data Sources That Are Actually Worth Using
Market sizing is only as good as the data you feed into it. The sources most commonly cited are not always the most useful.
Paid industry reports from firms like Forrester, Gartner, or IBISWorld have their place. They provide useful category-level context and can anchor a top-down estimate. But they are expensive, often lag the market by 12 to 18 months, and their methodology is rarely fully transparent. Use them as one input, not as the authoritative answer.
Government and official data sources are underused. National statistics agencies, Companies House filings, sector regulators, and trade body publications often contain more granular and more reliable data than commercial reports. They are slower to produce but more methodologically rigorous. If you are sizing a market in a regulated industry, the regulator’s own data is usually the best starting point.
Your own transactional data is the most valuable source you have. If you already operate in a market, your win rates, average deal sizes, sales cycle lengths, and customer concentration data tell you more about your realistic SOM than any external report. Most teams underuse this. They reach for external validation when the most useful signal is sitting in their CRM.
Competitor analysis can also be a useful proxy. If a competitor is publicly listed, their revenue disclosures and investor presentations give you a real data point about what is achievable in the market. If they are private, Companies House filings, job posting volumes, and advertising spend data can give you a directional read. Forrester’s work on sales productivity is a reminder that the gap between market potential and actual capture is often an execution problem as much as a sizing problem.
Search volume data is a useful but imperfect proxy for demand. Keyword research tools can give you a sense of how many people are actively looking for a solution in your category. The limitations are well known: not all demand is search-expressed, and search volume reflects current awareness, not latent demand. But as a directional signal, it is free, current, and granular in ways that industry reports are not.
How to Build Assumptions That Hold Up to Scrutiny
The assumptions behind a market size estimate matter more than the methodology you use to build it. A well-structured bottom-up model with weak assumptions will produce a worse answer than a simple top-down estimate with well-evidenced filters.
Every assumption in a market sizing exercise should be documented, sourced, and stress-tested. That sounds obvious. In practice, most market sizing documents contain undocumented assumptions that the author has forgotten they ever made by the time anyone reviews the work.
The discipline I have found most useful is to separate assumptions into three categories: those you are confident about because they are based on your own data, those you are reasonably confident about because they are based on credible external sources, and those you are uncertain about because they are estimates or proxies. The third category is where your sensitivity analysis should focus.
Sensitivity analysis means asking: if this assumption is wrong by 20%, what happens to the final number? If a 20% change in one assumption produces a 50% change in the market size estimate, that assumption is load-bearing and needs more rigour. If the final number is relatively stable across a range of assumption values, you have more confidence in the estimate.
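A one-way sensitivity check of this kind can be sketched in a few lines. The model and all figures below are invented for illustration; notice that the shared win-rate assumption moves the estimate one-for-one, while each segment-level assumption moves it less, which is how load-bearing assumptions reveal themselves:

```python
# Sketch of a one-way sensitivity check: shock each assumption by ±20%
# and measure how far the final estimate moves. All inputs hypothetical.

base = {
    "ent_accounts": 200,     # enterprise accounts (own CRM data: high confidence)
    "ent_deal": 100_000,     # enterprise annual deal value (£)
    "smb_accounts": 10_000,  # SMB accounts (external source: medium confidence)
    "smb_deal": 1_000,       # SMB annual deal value (£)
    "win_rate": 0.10,        # obtainable share (estimate: lowest confidence)
}

def market_size(a):
    total_spend = a["ent_accounts"] * a["ent_deal"] + a["smb_accounts"] * a["smb_deal"]
    return total_spend * a["win_rate"]

baseline = market_size(base)  # £3,000,000 on these inputs
for name in base:
    for shock in (-0.20, 0.20):
        perturbed = dict(base, **{name: base[name] * (1 + shock)})
        move = market_size(perturbed) / baseline - 1
        print(f"{name:>12} {shock:+.0%} -> estimate moves {move:+.1%}")
```

Here a ±20% shock to the win rate moves the estimate ±20%, while shocking the SMB deal value moves it only about ±7%, so the win rate is the assumption that deserves the extra rigour.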
Present your market size as a range, not a point. A range of, say, £80m to £120m is more honest and more useful than a single figure of £95m. The range communicates uncertainty explicitly, which is more useful to decision-makers than false precision. Anyone who has managed a P&L knows that the plan rarely hits the single-point forecast. A range forces a more honest conversation about the scenarios you are planning for.
BCG’s thinking on rethinking innovation systems touches on a related point: the value of a strategic exercise comes from the quality of the thinking it generates, not the precision of the output. A market sizing process that forces you to interrogate your assumptions is valuable even if the final number is approximate.
Common Market Sizing Errors and How to Avoid Them
Some errors in market sizing are technical. Most are behavioural.
The most common technical error is confusing addressable market with total market. The total market for a category includes customers who will never buy from you, in geographies you cannot serve, at price points you cannot match, through channels you do not operate. Addressable means reachable and convertible, not theoretically possible. Conflating the two produces TAM estimates that are flattering but useless.
A second common error is ignoring market concentration. In most B2B markets, a relatively small number of customers account for a disproportionate share of total spend. If your model assumes an even distribution of spend across all potential customers, it will overestimate your opportunity in the long tail and underestimate the competitive intensity at the top of the market.
The most common behavioural error is working backwards from a desired outcome. I have seen this more times than I can count. A business needs to justify a certain level of investment. The market size required to make the investment case work is calculated first. The assumptions are then chosen to produce that number. The result looks rigorous but is advocacy dressed up as analysis. It is worth being honest with yourself about whether you are doing this.
Another behavioural error is treating market size as static. Markets grow, contract, fragment, and consolidate. A market size estimate that was accurate two years ago may be significantly wrong today. The shift from physical to digital directories is a useful historical reminder: businesses that sized their market based on the Yellow Pages model were not wrong about the past, they were wrong about the trajectory. The consolidation of the directory market illustrates how quickly a market structure can shift when the underlying distribution model changes.
Finally, many market sizing exercises fail to account for market penetration rates. Even in a well-defined addressable market, not every potential customer will buy in any given year. Purchase cycles, switching costs, and category inertia all reduce the realistic annual opportunity. A market that looks large on a total basis may have a much smaller annual conversion pool than the headline figure suggests.
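The penetration adjustment is straightforward arithmetic. The figures below are hypothetical, but they show how quickly a headline market shrinks to a realistic annual pool:

```python
# Hypothetical sketch of the penetration adjustment: headline addressable
# spend vs the pool of customers actually buying in any one year.

addressable_customers = 50_000
avg_annual_spend = 2_000                        # £ per customer (assumption)
headline_market = addressable_customers * avg_annual_spend  # £100m headline

replacement_cycle_years = 5                     # typical purchase cycle (assumption)
in_market_share = 1 / replacement_cycle_years   # ~20% are buying in a given year
open_to_switching = 0.50                        # share not locked in (assumption)

annual_pool = headline_market * in_market_share * open_to_switching
print(f"Headline market:       £{headline_market:,.0f}")
print(f"Realistic annual pool: £{annual_pool:,.0f}")  # a tenth of the headline
```

On these placeholder assumptions, a £100m market yields a £10m annual conversion pool, which is the figure your revenue plan should actually be built on.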
How Market Sizing Should Connect to Go-to-Market Decisions
Market sizing is not a standalone exercise. It should connect directly to how you allocate budget, structure your sales team, prioritise channels, and set revenue targets. If it does not change at least one decision, it was probably not worth doing.
The most direct connection is to channel investment. If your SOM analysis suggests that your realistic addressable market in a given segment is £5m annually, and your current cost of acquisition implies you need to spend £2m to capture it, the economics may not support the investment. That is a useful output. It tells you either to reduce your cost of acquisition, to expand the segment definition, or to look at a different market. None of those decisions can be made without a credible SOM estimate.
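The check described above can be written out directly, using the illustrative £5m and £2m figures from the text; the 30% gross margin is an added assumption for the sake of the example:

```python
# Sketch of the channel-economics check, using the illustrative figures
# from the text. The 30% gross margin is a hypothetical added assumption.

som = 5_000_000                # realistic annual obtainable market (£)
acquisition_spend = 2_000_000  # spend implied by current CAC to capture it (£)
gross_margin = 0.30            # hypothetical gross margin on that revenue

gross_profit = som * gross_margin
net = gross_profit - acquisition_spend
print(f"Gross profit £{gross_profit:,.0f}, net of acquisition £{net:,.0f}")
# A negative net on these assumptions says: lower the CAC, widen the
# segment definition, or look at a different market.
```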
Early in my career, I worked on a paid search campaign for a music festival. The results were fast and the revenue numbers were significant relative to the spend. But the more interesting question, which we did not fully interrogate at the time, was how much of that revenue was genuinely incremental and how much was capturing demand that would have found its way to us through other channels regardless. Market sizing forces that question. It makes you think about whether you are growing the pool or just fishing more efficiently in the existing one.
Market sizing also connects to sales team structure. If your analysis shows that your SAM is concentrated in a relatively small number of high-value accounts, an enterprise sales model with dedicated account managers may be more appropriate than a high-volume inside sales operation. If the market is fragmented across thousands of small customers, the economics point in the opposite direction. The market structure, not just the market size, should inform the go-to-market model.
The connection to pricing is also worth noting. A market sizing exercise that includes analysis of willingness to pay, not just volume, gives you a more complete picture of the opportunity. A large market at low price points may be less attractive than a smaller market at higher price points, depending on your cost structure. Size alone is not the answer. Size multiplied by achievable margin is closer to the right question.
Sizing Emerging Markets Where Data Is Thin
The frameworks above work reasonably well for established categories with available data. They are harder to apply when you are sizing an emerging market where the category is new, the customer behaviour is not yet established, or the competitive landscape is still forming.
In these situations, analogy-based sizing is often the most practical approach. You identify a comparable market, either in a different geography where the category is more mature, or in an adjacent category with similar dynamics, and use that as a proxy for where your market might go. The limitations are obvious: analogies are imperfect, and market dynamics rarely transfer cleanly from one context to another. But a well-reasoned analogy is more useful than a number pulled from a thin data set.
Scenario planning becomes more important when data is thin. Rather than trying to produce a single estimate, you build three scenarios: a conservative case based on slow adoption, a base case based on reasonable assumptions about category growth, and an upside case based on faster-than-expected adoption. Each scenario should have a defined set of assumptions and a clear description of what would need to be true for it to materialise. This gives decision-makers a more honest picture of the uncertainty involved.
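The three scenarios can be laid out as a simple table of assumptions. Everything in this sketch is a placeholder, not a forecast:

```python
# A minimal three-scenario sketch for a thin-data market. The adoption
# rates and category figures below are placeholders, not forecasts.

category_potential = 100_000  # potential customers if the category matures
avg_annual_value = 1_500      # annual spend per customer (£, assumption)

scenarios = {
    "conservative": 0.02,  # slow adoption over the planning horizon
    "base":         0.05,  # reasonable assumptions about category growth
    "upside":       0.10,  # faster-than-expected adoption
}

sizes = {}
for name, adoption in scenarios.items():
    sizes[name] = category_potential * adoption * avg_annual_value
    print(f"{name:>12}: £{sizes[name]:,.0f}")
```

Each adoption rate should come with a written statement of what would need to be true for that scenario to materialise, which is where the real work sits.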
Primary research becomes more valuable in thin-data markets. Talking directly to potential customers, even in small numbers, can give you qualitative signal about willingness to pay, switching barriers, and purchase triggers that no secondary source will provide. It is not statistically representative, but it is grounded in real customer behaviour rather than extrapolated from aggregate data.
The broader discipline of market research, including competitive analysis, customer segmentation, and demand sensing, provides the surrounding context that makes market sizing more reliable. The Market Research and Competitive Intel hub covers these methods in more depth, and they are worth treating as complementary rather than separate from the sizing exercise itself.
What a Good Market Sizing Document Actually Looks Like
Most market sizing outputs are either a single slide with a large number and a source citation, or a sprawling spreadsheet that nobody outside the person who built it can interpret. Neither is particularly useful.
A good market sizing document is concise, transparent about its assumptions, and structured to support a decision. It should contain: a clear statement of what market is being sized and over what time horizon, the methodology used, the key assumptions with their sources and confidence levels, the resulting estimate expressed as a range, a brief sensitivity analysis showing how the estimate changes under different assumptions, and a clear statement of the decision this analysis is intended to inform.
That last element is the one most often missing. A market sizing document that does not state what decision it is meant to support is likely to be used selectively, cited when it supports the preferred conclusion and ignored when it does not. Anchoring the analysis to a specific decision forces intellectual honesty about whether the output is actually useful.
The document should also include a section on what you do not know: the gaps in your data, the assumptions you are least confident about, and the market dynamics that could materially change the estimate. This is not a sign of weakness in the analysis. It is a sign of rigour. Decision-makers who understand the limits of an estimate are better positioned than those who are given false confidence by a number presented without caveats.
I learned this the hard way early in my career. I asked for budget based on a market opportunity I had sized quickly and presented with more confidence than the underlying data warranted. The budget was approved. The opportunity turned out to be smaller than I had estimated because I had conflated total market with addressable market. The lesson was not that market sizing is unreliable. It was that the quality of the decision depends on the quality of the analysis, and that presenting uncertainty honestly is more useful to everyone than presenting false precision.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
