Competitive Set Analysis: How to Define Who You’re Competing Against
Competitive set analysis is the process of identifying which companies or products a customer would consider as alternatives to yours, and then systematically evaluating how each option compares across dimensions that matter to the buying decision. Done well, it tells you where you genuinely win, where you lose, and where the market is underserved. Done badly, it produces a slide deck full of logos and a false sense of strategic clarity.
Most competitive analysis fails at the first step: defining the set itself. Companies default to whoever shows up in a Google search or whoever sales mentioned last quarter. That is not a competitive set. That is a list of names. The distinction matters because every strategic decision that follows, from positioning to pricing to channel mix, depends on who you are actually measuring yourself against.
Key Takeaways
- Your competitive set should be defined by customer consideration, not internal assumptions. The only question that matters is who your buyer evaluates alongside you.
- Most competitive analyses are too narrow. Indirect competitors and non-consumption are often bigger threats than the obvious category rivals.
- Competitive set analysis is most valuable when it informs positioning decisions, not when it produces a feature comparison matrix nobody reads.
- The set changes over time. A static competitive map built once becomes a liability as markets shift, new entrants arrive, and customer behaviour evolves.
- Competitive intelligence without a commercial decision attached to it is just research. The output should always connect to a specific go-to-market choice.
In This Article
- Why Most Competitive Sets Are Drawn Too Narrowly
- How to Define Your Competitive Set Correctly
- A Worked Example: Competitive Set Analysis in Practice
- What to Actually Analyse Once You Have the Right Set
- The Positioning Output: What the Analysis Should Produce
- Common Mistakes That Undermine the Analysis
- How Competitive Set Analysis Connects to Channel and Budget Decisions
- Making Competitive Analysis a Repeatable Process, Not a One-Off Project
Why Most Competitive Sets Are Drawn Too Narrowly
When I was running iProspect, we competed on paper against a small number of named performance agencies. That was the frame most pitches used. In practice, we were competing against in-house teams, against consultancies offering a broader remit, against client indecision, and occasionally against the CFO’s preference to cut the budget entirely rather than appoint anyone. None of those appeared on a competitive landscape slide.
The same problem appears in almost every category. A B2B SaaS company maps its competitive set as the other SaaS tools in the same category. But the customer’s real decision is often between buying the software and hiring someone to do it manually, or between buying now and waiting another year. Those are competitive options too, and they require different positioning responses.
Theodore Levitt’s point about railroads failing because they thought they were in the railroad business rather than the transportation business is well worn, but it holds. The competitive set for a train operator in 1960 was not just other train operators. It was cars, planes, and the decision not to travel at all. Narrowing the frame to obvious category rivals is comfortable. It is also often wrong.
A useful competitive set has three layers. Direct competitors are the companies offering the same product or service to the same customer segment. Indirect competitors solve the same problem a different way. And substitutes include doing nothing, doing it internally, or solving the problem with a completely different category of solution. Most analyses only map the first layer.
How to Define Your Competitive Set Correctly
Start with the customer, not the category. The question is: when your target buyer is evaluating whether to purchase your product, what else are they seriously considering? That framing shifts the exercise from internal categorisation to external reality.
There are several ways to get that data. Win/loss interviews are the most direct. When a prospect chose someone else, who did they choose and why? When they chose you, what else were they looking at? Sales teams carry this intelligence informally but rarely surface it in structured form. Building a simple win/loss log, even a basic one, gives you a more honest competitive map than any analyst report.
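If the team works in Python, that win/loss log can start as nothing more than a list of records aggregated with the standard library. This is a minimal sketch, not a prescribed schema; the field names and the sample deals are purely illustrative.

```python
from collections import Counter

# Each record captures one closed deal: the outcome, who actually won it,
# and the alternatives the buyer said they were considering.
# All data below is illustrative.
win_loss_log = [
    {"outcome": "lost", "chosen": "incumbent IT suite", "considered": ["us", "Asana"]},
    {"outcome": "lost", "chosen": "no decision", "considered": ["us", "Monday.com"]},
    {"outcome": "won", "chosen": "us", "considered": ["us", "spreadsheets"]},
    {"outcome": "lost", "chosen": "new hire", "considered": ["us"]},
]

# Who actually wins the deals you lose -- the honest competitive set.
losses = Counter(r["chosen"] for r in win_loss_log if r["outcome"] == "lost")

# What buyers evaluate alongside you, whether you win or lose.
considered = Counter(
    alt for r in win_loss_log for alt in r["considered"] if alt != "us"
)

for name, count in losses.most_common():
    print(f"{name}: lost {count} deal(s)")
```

Even a log this crude, aggregated over a few quarters, surfaces the substitutes and non-consumption outcomes ("no decision", "new hire") that never appear on a conventional landscape slide.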
Customer interviews add a different dimension. Existing customers can tell you what they were using before, what they considered switching to, and what would make them leave. That is competitive intelligence that most companies never collect because they are too focused on acquisition to pay attention to retention signals. I have sat in enough client reviews to know that churn data, when you actually interrogate it, often reveals competitive threats that nobody on the marketing team was tracking.
Search behaviour is a useful proxy for consideration. The queries that appear alongside branded searches, the comparison searches customers run, and the “X vs Y” content that ranks in your category all reflect how buyers are framing their options. Tools like Semrush can surface the competitive landscape at a keyword level, which gives you a data-grounded view of who buyers are evaluating alongside you.
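The keyword-level view can be reduced to a simple co-occurrence count: which domains appear on the same results pages as you, and how often. The sketch below assumes you have reshaped a keyword export from a tool like Semrush into a keyword-to-domains mapping; the domains and keywords shown are hypothetical.

```python
from collections import Counter

# Hypothetical export: for each keyword you target, the domains that
# rank on the same results page. In practice this would come from a
# keyword tool's export, reshaped into this structure.
serp_results = {
    "project management software": ["ourco.com", "asana.com", "monday.com"],
    "asana alternative": ["asana.com", "ourco.com", "wrike.com"],
    "best team task tracker": ["monday.com", "ourco.com", "asana.com"],
}

OUR_DOMAIN = "ourco.com"  # illustrative placeholder

# Count how often each other domain shares a results page with us.
# The domains buyers see next to you most often are, at keyword level,
# your de facto competitive set.
co_occurrence = Counter(
    domain
    for domains in serp_results.values()
    if OUR_DOMAIN in domains
    for domain in domains
    if domain != OUR_DOMAIN
)

for domain, count in co_occurrence.most_common():
    print(f"{domain}: appears with us on {count} keyword(s)")
```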
Review platforms are another source. G2, Capterra, Trustpilot, and category-specific review sites show you which alternatives reviewers mention, which products get compared in the same breath, and where the stated reasons for switching cluster. This is not perfect data, but it is real customer language, which is more valuable than internal assumptions dressed up as research.
A Worked Example: Competitive Set Analysis in Practice
Take a mid-market project management software company. The obvious competitive set is Asana, Monday.com, Wrike, and a handful of other named tools. That is where most analyses stop.
But the win/loss data tells a different story. A significant portion of lost deals go to Microsoft Teams and SharePoint, not because those tools are better at project management, but because the IT department already has them and does not want another vendor. Another portion of lost deals go to spreadsheets, not because the prospect preferred spreadsheets, but because the budget did not get approved. A smaller but meaningful portion go to a decision to hire a project coordinator instead of buying software.
Now the competitive set looks different. You still need to win against Asana and Monday.com on features and pricing. But you also need a positioning response to the “we already have Microsoft” objection, a commercial response to the budget barrier, and potentially a content strategy that speaks to buyers who are weighing software against headcount. Those are three entirely different competitive problems, and they require different go-to-market responses.
This is the kind of strategic clarity that competitive set analysis should produce. Not a grid with green ticks and red crosses, but a clear-eyed view of the decision landscape your buyer is navigating and where you need to compete harder, position differently, or stop trying to win.
If you want to see how this connects to broader go-to-market decisions, the Go-To-Market and Growth Strategy hub covers the full strategic context, from segmentation and positioning through to channel selection and launch planning.
What to Actually Analyse Once You Have the Right Set
Once the set is correctly defined, the analysis itself needs to be built around the dimensions that drive the buying decision, not the dimensions that are easiest to measure. Feature matrices are popular because they are easy to build. They are also often irrelevant because buyers rarely choose on features alone.
The dimensions worth analysing fall into a few categories. Positioning and messaging: what is each competitor claiming, who are they claiming it for, and where are the gaps? Pricing and commercial model: how does each competitor structure its offer, and where does your pricing create or destroy perceived value? Distribution and channel: how do competitors reach buyers, and where are they absent? Reputation and trust signals: what do customers say, what do third-party sources say, and where do trust gaps exist? And finally, strategic trajectory: where is each competitor investing, what are they building toward, and what does that imply for where the market is heading?
The BCG framework for scaling strategic capability is relevant here because competitive analysis is not a one-time project. It needs to be embedded in how the organisation thinks and plans, not produced as a quarterly deliverable that nobody reads after the first week.
I judged the Effie Awards for several years. One of the consistent patterns in winning entries was that the brand had a precise understanding of the competitive context it was operating in. Not a comprehensive analysis of every competitor, but a clear view of the specific battle it needed to win and what winning looked like. That precision translated directly into sharper briefs, clearer creative, and more focused media decisions.
The Positioning Output: What the Analysis Should Produce
Competitive set analysis without a positioning output is incomplete. The point of understanding the competitive landscape is to make a deliberate choice about where you play and how you win.
A useful positioning output from competitive analysis answers four questions. Where are we genuinely differentiated in ways that matter to buyers? Where are we at parity, meaning competitive but not distinctive? Where are we weak, and does that weakness affect the buying decision? And where is the market underserved, meaning where are buyers not well served by any current option?
The fourth question is the most commercially interesting. Gaps in the competitive set are where growth opportunities often live. Early in my career I was focused almost entirely on capturing existing demand, optimising for buyers who were already in-market and comparing options. It took me a while to appreciate that the more durable growth lever is often reaching buyers before they enter the formal consideration process, shaping how they think about the problem before they have defined their competitive set. That is a different kind of competitive advantage, and it is one that performance-only strategies almost never build.
Vidyard’s research into untapped pipeline potential for go-to-market teams points to a similar dynamic: the pipeline problem for most companies is not conversion rate optimisation at the bottom of the funnel, it is the absence of buyers who were never reached in the first place.
Common Mistakes That Undermine the Analysis
The first and most common mistake is defining the set by what is convenient rather than what is accurate. Listing the five companies that came up in a sales meeting is not competitive analysis. It is a starting point at best.
The second mistake is treating the competitive set as fixed. Markets move. New entrants arrive. Incumbents shift their positioning. Customer behaviour changes. A competitive map built eighteen months ago and never updated is not just stale, it is actively misleading. I have seen companies make significant go-to-market decisions based on competitive intelligence that was two years out of date. The market had moved. The analysis had not.
The third mistake is confusing competitive analysis with competitive obsession. Some companies spend so much time watching competitors that they lose sight of customers. The competitive set should inform your positioning relative to alternatives, not dictate your product roadmap or your marketing strategy. If you are building your strategy by reacting to what competitors do, you are always one step behind and you are ceding strategic initiative to someone else.
The fourth mistake is producing analysis that nobody uses. I have seen beautifully designed competitive landscape decks that lived in a shared drive and were referenced once. The analysis needs to connect to a specific decision: a positioning choice, a pricing change, a channel investment, a product priority. Without that connection, it is research for its own sake.
Forrester’s work on go-to-market struggles in complex categories highlights a pattern that applies broadly: companies that underinvest in understanding the competitive context before launch consistently underperform relative to those that do the work upfront. The analysis is not the deliverable. The better decision is.
How Competitive Set Analysis Connects to Channel and Budget Decisions
One dimension of competitive analysis that gets underweighted is channel presence. Understanding where competitors are investing in media and distribution tells you something important about where buyers are being reached and where they are not.
If every competitor in your set is running the same paid search strategy, competing on the same keywords, and showing up in the same comparison content, the cost of competing in that space is high and the differentiation is low. That does not mean you abandon it, but it does mean you should think carefully about where else you can reach buyers before they enter that competitive auction.
Creator and influencer channels are increasingly relevant here. Later’s research on go-to-market strategies with creators reflects a broader shift: buyers in many categories are forming preferences through content and community before they ever run a search query. If your competitive set is entirely absent from those channels, you have an opportunity. If they are present and you are not, you have a problem that a better bid strategy will not solve.
Budget allocation decisions benefit from competitive context in a specific way. If you know where competitors are spending heavily and where they are absent, you can make a more informed choice about where to compete and where to find space. This is not about copying competitor strategy. It is about understanding the competitive economics of each channel before you commit budget to it.
The growth strategy decisions that flow from competitive analysis, including channel mix, positioning, and audience targeting, are covered in more depth across the Go-To-Market and Growth Strategy section of the site, which is worth working through if you are building or revisiting a market entry plan.
Making Competitive Analysis a Repeatable Process, Not a One-Off Project
The companies that use competitive intelligence well treat it as an ongoing capability rather than a project that gets commissioned before a strategy review and forgotten afterward. That does not require a dedicated team or expensive tools. It requires a few disciplines applied consistently.
Win/loss tracking should be standard. Every deal, won or lost, should generate a data point about what the buyer considered and why they chose as they did. That data, aggregated over time, is more valuable than any analyst report because it reflects your specific market, your specific buyers, and your specific competitive situation.
Competitive monitoring should be lightweight but regular. Tracking competitor positioning changes, pricing updates, new product announcements, and significant marketing moves does not need to be a full-time job. A simple process for capturing and sharing competitive signals across the team is enough to keep the picture current.
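One lightweight way to keep that monitoring honest is simple change detection: fingerprint a competitor's pricing or positioning page and flag when the fingerprint moves. The sketch below shows the core comparison logic only; in a real setup the page text would come from fetching the URL on a schedule, and the URL shown is a placeholder.

```python
import hashlib

def fingerprint(page_text: str) -> str:
    """Return a short, stable hash of a page's text, used to detect changes."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()[:16]

def changed(stored: dict, url: str, current_text: str) -> bool:
    """Compare the current snapshot of a competitor page against the stored
    fingerprint, update the store, and report whether the page moved."""
    new_fp = fingerprint(current_text)
    old_fp = stored.get(url)
    stored[url] = new_fp
    # A first-time snapshot is a baseline, not a change.
    return old_fp is not None and old_fp != new_fp

# Illustrative usage with a placeholder URL and made-up page text.
store = {}
changed(store, "https://competitor.example/pricing", "Pro plan: $29/mo")   # baseline
print(changed(store, "https://competitor.example/pricing", "Pro plan: $35/mo"))
```

A daily run of something like this across a handful of competitor pages is usually enough to keep the picture current without anyone owning monitoring as a job.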
The BCG work on go-to-market strategy and launch planning makes a point that applies beyond the pharmaceutical context: the competitive landscape at launch is rarely the same as the competitive landscape six months later. Building in a mechanism to update your view of the set is not optional. It is part of the work.
The growth loop thinking that tools like Hotjar apply to product feedback is analogous here: competitive intelligence is most useful when it feeds back into strategy continuously, not when it produces a static output that ages out of relevance.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
