Market Comparable Analysis: What the Numbers Are Telling You
Market comparable analysis is the practice of benchmarking your brand, product, or pricing against direct and indirect competitors to understand where you sit in the market and what the gaps mean commercially. Done well, it gives you a defensible foundation for positioning decisions, pricing strategy, and budget allocation. Done poorly, it gives you a spreadsheet that feels rigorous but tells you nothing useful.
The difference between the two is not the data. It is the questions you ask before you start collecting it.
Key Takeaways
- Market comparable analysis only produces useful output when the comparison set is chosen deliberately, not by convenience or assumption.
- Pricing comparables without context (brand equity, audience overlap, product scope) routinely lead to under-pricing or defensive over-pricing.
- The most revealing comparables are often indirect competitors that your customers are actively choosing instead of you.
- A comparable analysis should produce a commercial decision, not just a positioning map. If it ends as a slide, it has not done its job.
- Frequency matters: a comparable analysis run once at launch decays quickly. Markets move, and so do the players in them.
In This Article
- Why Most Comparable Analyses Produce Comfortable Lies
- How Do You Choose the Right Comparison Set?
- What Attributes Should You Actually Compare?
- How Do You Avoid the Trap of Surface-Level Comparisons?
- What Does a Useful Comparable Analysis Actually Produce?
- How Often Should You Run a Market Comparable Analysis?
- The Commercial Discipline Behind Good Comparable Analysis
Why Most Comparable Analyses Produce Comfortable Lies
I have sat in more strategy sessions than I can count where someone presents a competitor matrix and the room nods along. The brand is positioned in the top-right quadrant. The competitors are clustered in the middle. Everyone feels good about it. Then nothing changes, because the analysis was designed to confirm what the team already believed.
This is the central problem with market comparable analysis as it is typically practised. The comparison set is chosen by the marketing team based on who they think they compete with. The attributes being compared are chosen based on what the brand does well. The output is a map that flatters the client or the business. It is strategy theatre dressed up as research.
The honest version starts with a different question: who does your customer consider when they are making the decision you want them to make? That is not always who you think. When I was running agency growth at iProspect, we spent a lot of time helping clients understand that their real competitive set was not always the obvious one. A mid-market SaaS company might assume it competes with two or three named platforms, but its actual competition includes spreadsheets, internal resource, and doing nothing. If your comparable analysis does not include those alternatives, it is missing the most important competitive pressure of all.
If you want a broader framework for how market research should inform strategic decisions, the Market Research and Competitive Intel hub covers the full picture, from audience analysis through to competitor intelligence.
How Do You Choose the Right Comparison Set?
This is where most analyses go wrong before they begin. The instinct is to list the brands you know, the ones that come up in sales calls or that your leadership team mentions. That is a starting point, not a methodology.
A more rigorous approach works across three layers. The first is direct competitors: brands offering a similar product or service to a similar audience at a similar price point. These are the obvious ones, and they matter, but they are also the ones your customers are most aware of and most likely to be evaluating you against explicitly.
The second layer is indirect competitors: brands solving the same underlying problem in a different way. If you sell project management software, your indirect competitors include email, shared documents, and whiteboards. If you sell premium gym memberships, your indirect competitors include home equipment brands, running apps, and the decision to simply not exercise. These alternatives are often more powerful competitive forces than your direct rivals, because customers do not always frame the decision the way you do.
The third layer is aspirational comparables: brands that your target audience also considers, even if they are not technically in your category. This is particularly relevant for positioning and pricing. If your customers regularly evaluate you against a brand that is more premium, that is a signal. If they consistently choose a cheaper alternative, that is a different signal. Both are commercially important.
The cleanest way to validate your comparison set is to ask customers directly. Not “who do you consider our competitors?” but “when you were deciding whether to buy from us, what else did you consider?” The answers are often surprising. Tools like Hotjar can surface this kind of qualitative signal at scale through on-site surveys and session data, which gives you a behavioural layer to sit alongside the strategic one.
What Attributes Should You Actually Compare?
The attributes you choose to compare determine what conclusions are possible. Choose the wrong ones and the analysis becomes a vanity exercise. Choose the right ones and it becomes a decision-making tool.
There are broadly four categories of attributes worth examining in a market comparable analysis.
The first is commercial positioning: price point, pricing model, packaging, and the implied value proposition at each tier. This is the most directly actionable layer of comparable analysis. If your pricing is out of step with the market, you need to know whether that is a deliberate premium strategy or an accidental one. I have worked with businesses that were underpriced relative to their competitive set by a significant margin, not because they had made a strategic choice to compete on price, but because no one had looked at the comparables recently enough to notice the market had moved.
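A quick way to make that drift visible is to express your price as a premium or discount against the median of the comparable set. The sketch below is illustrative only: the competitor names and price points are invented, not real market data.

```python
from statistics import median

# Hypothetical comparable set: monthly price for an equivalent mid-tier plan.
# Names and figures are illustrative, not real market data.
comparables = {
    "Competitor A": 49,
    "Competitor B": 79,
    "Competitor C": 65,
    "Competitor D": 59,
}
our_price = 39

market_median = median(comparables.values())
premium_pct = (our_price - market_median) / market_median * 100

print(f"Market median: {market_median}")          # Market median: 62.0
print(f"Position vs median: {premium_pct:+.0f}%")  # Position vs median: -37%
```

A result like minus 37 percent is not automatically a problem, but it forces the question the paragraph above raises: is that discount a deliberate strategy, or has the market moved since anyone last looked?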
The second is product or service scope: what is included, what is excluded, and what requires an upgrade or add-on. This is particularly important in categories where feature parity is assumed and differentiation happens at the margins. If every competitor offers a standard set of features and you are paying to develop something that is already table stakes, that is a resource allocation problem disguised as a product decision.
The third is brand and messaging: how competitors position themselves, what language they use, what emotions or outcomes they lead with, and where they are visible. This is the layer most marketing teams spend the most time on, and it is genuinely useful, but it needs to be grounded in what customers actually respond to rather than what looks good on a positioning map. Copyblogger has written extensively on how messaging choices affect conversion, and the same principles apply when you are mapping competitor messaging: the words that work are the ones that reflect how customers already think about the problem.
The fourth is channel and media presence: where competitors are investing attention, what formats they use, and what their share of voice looks like across paid, owned, and earned. This is where tools like SEMrush become useful for understanding organic visibility and content strategy, though any tool-based analysis gives you a snapshot rather than a trend. Always ask what has changed, not just what is true right now.
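Share of voice itself is just a ratio of one brand's visibility to the total visibility of the set. A minimal sketch, with made-up mention counts standing in for whatever visibility metric you pull from a tool:

```python
# Illustrative visibility counts per brand in one channel (e.g. organic
# impressions or social mentions); the figures are invented for the sketch.
mentions = {"Us": 1200, "Competitor A": 3000, "Competitor B": 1800}

total = sum(mentions.values())
share_of_voice = {brand: count / total for brand, count in mentions.items()}

for brand, sov in share_of_voice.items():
    print(f"{brand}: {sov:.0%}")  # Us: 20%, Competitor A: 50%, Competitor B: 30%
```

The snapshot caveat from the paragraph above applies directly: one quarter's figures tell you positions, but only the trend between quarters tells you where competitors are actually investing.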
How Do You Avoid the Trap of Surface-Level Comparisons?
The most common failure mode in market comparable analysis is comparing outputs without understanding inputs. You see a competitor running heavy paid social spend and assume it is working. You see a competitor offering a lower price and assume they are winning on it. You see a competitor with a large content library and assume it is driving growth. None of those assumptions are necessarily true.
I judged the Effie Awards for a period, which gives you an unusual vantage point. You see campaigns that looked impressive from the outside, with big budgets and high visibility, that produced almost no commercial return. You also see campaigns that looked modest, even scrappy, that drove measurable business outcomes. The external view of a competitor’s activity tells you very little about whether it is working. You need to look for signals of commercial effectiveness, not just marketing activity.
Some of those signals are available if you look for them. Revenue growth, headcount changes, funding rounds, pricing adjustments, and product launches are all proxies for whether a competitor’s strategy is producing results. Customer review data is underused in comparable analysis; it tells you what customers actually value and what they find frustrating, which is far more useful than a competitor’s own marketing claims. Optimizely’s research on retail experimentation makes the point that observed behaviour consistently outperforms stated preference, and the same logic applies when you are trying to understand what is working for competitors.
The LEGO turnaround is a useful reference point here. When LEGO was in trouble in the early 2000s, the comparable analysis they needed was not a map of toy competitors. It was an understanding of how children were spending their time and attention more broadly. BCG’s profile of that period captures how the business had to reframe its competitive set entirely before it could find a credible path forward. The lesson is that the right comparison is not always the obvious one.
What Does a Useful Comparable Analysis Actually Produce?
This is the question I always push teams on. If the output of your comparable analysis is a slide deck, it has not done its job. A comparable analysis should produce a decision, or at minimum a set of clearly defined options with the trade-offs made explicit.
The most commercially useful outputs tend to fall into a few categories. A pricing decision: based on where you sit in the market and what the comparables suggest about customer expectations, should you adjust your pricing, your packaging, or your value communication? A positioning decision: is there a gap in the market that is genuinely underserved, or are you proposing to enter a space that is already crowded with well-resourced competitors? A channel decision: where are competitors investing and where are they not, and what does that tell you about where attention is available or where costs are likely to be high?
Early in my career, I built a website because the MD said no to the budget for one. That experience taught me something useful about comparable analysis: the constraint is often where the insight lives. When you cannot do what your competitors are doing, you are forced to look for what they are not doing. That is frequently where the better opportunity is anyway.
At lastminute.com, we ran a paid search campaign for a music festival that generated six figures of revenue in roughly a day. It was not a sophisticated campaign by any modern standard. What made it work was that we understood the competitive landscape well enough to know where demand existed and where it was not being captured. The comparable analysis was implicit, but it was real. We knew what alternatives customers had, we knew what they were being offered elsewhere, and we knew there was a gap between demand and supply in that specific moment. That is what comparable analysis is supposed to surface.
How Often Should You Run a Market Comparable Analysis?
More often than most businesses do, and with more structure than most businesses apply.
A comparable analysis run once at launch and never revisited is a historical document, not a strategic tool. Markets move. Competitors enter and exit. Pricing norms shift. Customer expectations evolve. The analysis that was accurate eighteen months ago may be actively misleading today.
A practical cadence for most businesses is a light-touch quarterly review of pricing and messaging comparables, with a more thorough structural analysis annually or when a significant market event occurs. Significant events include a major competitor raising a large funding round, a new entrant with a materially different model, a platform or channel change that affects the whole category, or a shift in customer behaviour that your own data is picking up.
The quarterly review does not need to be exhaustive. It needs to answer three questions: has anything changed in the competitive set? Has anything changed in how competitors are positioning or pricing? Is there anything in our own performance data that suggests the market has moved in a way we have not accounted for? If the answers are no, no, and no, the review takes thirty minutes and confirms that your current strategy is still correctly calibrated. That is a useful output.
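The three-question review is effectively an escalation gate: any "yes" brings the deeper structural analysis forward. A hypothetical sketch of that logic, with question names of my own invention:

```python
def needs_deep_review(set_changed: bool,
                      positioning_or_pricing_changed: bool,
                      own_data_shifted: bool) -> bool:
    """Return True if any quarterly question is a 'yes', meaning the
    full structural analysis should be brought forward from its
    annual slot rather than waiting."""
    return set_changed or positioning_or_pricing_changed or own_data_shifted

# Three 'no' answers: the light-touch review has done its job.
print(needs_deep_review(False, False, False))  # False
```

The value of writing it down, even this crudely, is that it stops the quarterly review from silently expanding into a full analysis every time, or from being skipped because it feels too small to matter.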
Search visibility data is a useful input to these reviews. Search Engine Journal has covered how algorithmic shifts can change competitive visibility quickly, and tracking where competitors are gaining or losing organic ground is a low-cost signal of where they are investing and what is working for them.
The Commercial Discipline Behind Good Comparable Analysis
The businesses that do comparable analysis well tend to share a common characteristic: they treat it as a commercial function, not a marketing function. The questions driving the analysis are about revenue, margin, and competitive positioning in the customer’s decision process, not about brand perception or creative differentiation in isolation.
That commercial grounding changes what you look for and what you do with what you find. When I was growing a team from 20 to over 100 people at iProspect, one of the things that shaped our competitive positioning was understanding not just what other agencies offered, but what clients were actually paying for and what they felt they were not getting. The comparable analysis was not a map of agency capabilities. It was an understanding of the gap between what the market was supplying and what clients actually valued. That gap was where we built.
That is the discipline worth applying to any market comparable analysis. Not “where are we relative to competitors on the attributes we have chosen to compare?” but “what does the customer’s decision process reveal about where value is being created and where it is not?” The first question produces a positioning map. The second question produces a strategy.
For more on how competitive intelligence fits into a broader research framework, the Market Research and Competitive Intel hub covers the methods and tools that make this kind of analysis more reliable and more actionable over time.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
