Competitor Analysis: What the Spreadsheet Won’t Tell You
Business competitor analysis is the process of systematically evaluating the companies competing for the customers you want, to identify where you have an advantage, where you don’t, and what that means for your strategy. Done well, it tells you less about what competitors are doing and more about what the market is actually rewarding.
Most businesses do some version of it. Far fewer do it in a way that changes anything. The gap between those two groups is not methodology. It is what happens after the analysis is complete.
Key Takeaways
- Competitor analysis is only commercially useful if it surfaces a decision, not just a description of the market.
- The most revealing signals are not what competitors say publicly, but what their customers say about them in reviews, forums, and search behaviour.
- Positioning gaps are more strategically valuable than feature gaps. Knowing what no competitor owns in the mind of the buyer is more actionable than knowing who has the better product page.
- Most competitor analysis is done once, at planning time, and then ignored. The businesses that benefit most treat it as a live input, not a one-time exercise.
- The goal is not to beat competitors. It is to understand the competitive environment well enough to make better bets with your budget and your messaging.
In This Article
- Why Most Competitor Analysis Produces Insight Without Action
- What the Spreadsheet Won’t Show You
- The Difference Between Feature Gaps and Positioning Gaps
- How to Read Competitor Behaviour Without Overinterpreting It
- Where Search Data Fits Into Competitor Analysis
- Making Competitor Analysis a Continuous Input, Not an Annual Ritual
- The Commercial Test: Does This Analysis Change a Decision?
- A Note on Tools and Their Limits
- What Good Competitor Analysis Looks Like in Practice
Why Most Competitor Analysis Produces Insight Without Action
I have sat in enough planning sessions to know how competitor analysis usually goes. Someone builds a slide with a 3×3 grid. Logos go in boxes. Columns are labelled with attributes like “digital presence”, “pricing”, and “brand awareness”. Ticks and crosses are applied with more confidence than the evidence warrants. The slide gets presented, people nod, and the planning session moves on to budget allocation.
The analysis becomes a backdrop, not a driver. It confirms what the room already believed, rather than challenging it.
This is not a research problem. It is a framing problem. Competitor analysis that starts with “what are our competitors doing?” will always produce descriptive output. Analysis that starts with “what decisions do we need to make, and what competitive information would change them?” produces something you can actually use.
If you want a broader grounding in the research disciplines that sit underneath this kind of strategic work, the Market Research and Competitive Intel hub covers the full landscape, from audience research to category analysis.
What the Spreadsheet Won’t Show You
The standard competitor analysis template covers the obvious ground: product features, pricing tiers, website traffic estimates, social following, paid search activity, content output. All of that is useful context. None of it is the insight.
The insight lives in the gap between what competitors claim and what customers actually experience. That gap is almost always wider than the competitor’s marketing team would like to admit.
When I was running an agency and pitching for a client in the financial services sector, we did not just look at what the incumbent agency had produced. We looked at what the end customers were saying in review forums, complaint threads, and comparison sites. The gap between the brand’s positioning and the customer’s lived experience was significant. That gap became the strategic foundation for the pitch. We won the account, and that customer sentiment data was the reason the strategy felt grounded rather than theoretical.
The sources that reveal this kind of intelligence are not glamorous. They include:
- App store reviews and G2 or Trustpilot listings for your competitors
- Reddit threads and niche forums where buyers discuss their options
- Search query data showing what people ask when they are evaluating competitors
- Sales team feedback on objections raised when prospects mention a competitor
- Churned customer interviews, if you have access to them
None of this appears in a competitor’s marketing deck. All of it is more strategically relevant than their latest campaign creative.
The Difference Between Feature Gaps and Positioning Gaps
Most competitor analysis focuses on features. Who has what capability, what price point, what channel coverage. This is necessary groundwork, but it leads to a particular kind of strategic response: match or exceed what competitors have. That logic tends to produce incremental thinking and category convergence, which is exactly the opposite of what you want.
Positioning gaps are a different kind of finding. A positioning gap is something the market values that no competitor currently owns clearly in the buyer’s mind. It is not about what you offer. It is about what buyers believe, and what they are not yet being given a coherent reason to believe about anyone in the category.
I spent time working with a business in a crowded B2B software category where every competitor led with “easy to use” and “powerful”. The positioning was indistinguishable across the whole market. The positioning gap was not a feature. It was credibility with a specific buyer persona who felt patronised by the category’s generic messaging. Shifting the positioning toward that buyer’s specific language and concerns, without changing the product at all, produced a measurable improvement in conversion from qualified traffic.
Finding positioning gaps requires a different analytical lens from feature comparison. You are looking for:
- Buyer language that no competitor is using in their messaging
- Objections that the category as a whole fails to address
- Audience segments that competitors acknowledge but do not genuinely serve
- Values or priorities that buyers express in reviews but that no brand reflects back to them
The Forrester perspective on strategic thinking is relevant here: the constraint is rarely the market. It is the assumptions the team brings to the analysis.
How to Read Competitor Behaviour Without Overinterpreting It
One of the more reliable mistakes in competitor analysis is treating competitor activity as evidence of a working strategy. If a competitor is spending heavily on a particular channel, the instinct is to assume they know something you don’t. Sometimes that is true. Often it is not.
I judged the Effie Awards for a period, which means I read a lot of case studies submitted by agencies and brands claiming effectiveness. What struck me most was how many campaigns looked confident from the outside and were, by the brand’s own admission in the case study, built on assumptions that turned out to be wrong. The confidence of execution is not evidence of strategic soundness. It is evidence that someone committed to a direction.
When you observe a competitor doing something consistently, the useful questions are:
- Is there evidence this is working, or are they simply continuing because stopping would require an internal admission of failure?
- Does this activity make sense given what we know about their customer base and revenue model?
- Could they be doing this for internal reasons (a new CMO, a board mandate, a legacy commitment) rather than strategic ones?
Competitor behaviour is data. It is not instruction. The businesses that treat it as instruction end up chasing a strategy they did not design and do not fully understand.
Where Search Data Fits Into Competitor Analysis
Search query data is one of the most underused inputs in competitor analysis, particularly for businesses that think of SEO as a separate workstream from strategy. It should not be separate. Search behaviour is a direct record of what buyers are thinking, asking, and comparing, in language they chose themselves.
Looking at the search terms that include a competitor’s brand name alongside comparison language (“vs”, “alternative”, “review”, “problem with”) tells you what buyers are uncertain about, what they are trying to resolve, and what the competitor has failed to address convincingly. That is a strategic input, not an SEO tactic.
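If you want to pull this kind of data systematically, the first step is simply enumerating the brand-plus-modifier queries to feed into whatever keyword tool you use. A minimal sketch, with hypothetical competitor names standing in for your own category:

```python
# Generate comparison-style search queries for a set of competitor
# brand names. Brands and modifiers here are illustrative examples;
# substitute your category's competitors and your buyers' language.

COMPETITORS = ["AcmeCRM", "BetaSuite"]  # hypothetical brands
MODIFIERS = ["vs", "alternative", "review", "problem with"]

def comparison_queries(competitors, modifiers):
    """Return brand + modifier query strings to check in a keyword tool."""
    queries = []
    for brand in competitors:
        for mod in modifiers:
            # "problem with X" reads more naturally with the modifier first
            if mod == "problem with":
                queries.append(f"{mod} {brand.lower()}")
            else:
                queries.append(f"{brand.lower()} {mod}")
    return queries

for q in comparison_queries(COMPETITORS, MODIFIERS):
    print(q)
```

The output is just a query list; the strategic work is reading the volume and the content behind each query, not generating the permutations.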
The broader category of non-branded search, particularly question-based queries, reveals what buyers are trying to understand before they make a decision. If your competitors are not answering those questions well, and you are, that is a durable advantage, not just a content win.
The search landscape itself is also worth monitoring as a strategic signal. The Search Engine Journal piece on Google’s long-term dominance is a useful reminder that channel assumptions deserve periodic scrutiny, even when the channel feels settled.
Making Competitor Analysis a Continuous Input, Not an Annual Ritual
The planning cycle creates a natural incentive to treat competitor analysis as a once-a-year exercise. You do it in Q4, it feeds the plan, and then it sits in a folder until next Q4. The problem is that markets do not move on an annual cycle. Competitors launch things. Pricing changes. New entrants appear. Customer sentiment shifts.
When I was scaling an agency from around 20 people to over 100, one of the disciplines we built into the team was a lightweight competitive monitoring rhythm. Not a quarterly report. A simple process where account teams flagged notable competitor moves as they happened, with a short note on whether it warranted a response. Most of the time it didn’t. But when it did, we were not catching up six months later.
The tools for this are not complicated. Google Alerts for competitor brand names. Regular checks on their job listings (hiring patterns reveal strategic priorities). Periodic review of their content output and paid search activity. A shared channel or document where the team can log observations without it becoming a reporting burden.
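The shared log itself can be as simple as an append-only file. A minimal sketch, assuming a local CSV as the store; a shared spreadsheet or chat channel serves the same purpose:

```python
# A minimal shared log for competitor observations: append a dated
# entry and flag whether it warrants a response. The file path,
# field names, and example entry are illustrative.

import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("competitor_log.csv")  # hypothetical location
FIELDS = ["date", "competitor", "observation", "needs_response"]

def log_observation(competitor, observation, needs_response=False, path=LOG_PATH):
    """Append one observation row, writing a header if the file is new."""
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "competitor": competitor,
            "observation": observation,
            "needs_response": needs_response,
        })

log_observation("AcmeCRM", "New pricing tier launched", needs_response=True)
```

The point of keeping it this lightweight is that the team actually uses it; the moment logging an observation feels like reporting, the rhythm dies.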
The goal is not comprehensive surveillance. It is maintaining enough awareness that you are not surprised by things that were visible in advance.
The Commercial Test: Does This Analysis Change a Decision?
This is the question I apply to most marketing work, and competitor analysis is no exception. If the output of the analysis does not change at least one decision, the analysis was not worth doing. Not because the information is wrong, but because information that confirms existing assumptions and produces no change in behaviour is not strategically useful. It is expensive reassurance.
The decisions that good competitor analysis should be able to inform include:
- Which audience segments to prioritise, and which to deprioritise, based on where competitors are weakest
- Which channels to invest in, and which to treat as maintenance rather than growth
- How to position the brand relative to the category, and what language to use or avoid
- Where to set pricing, and what the market will and will not bear
- Which product or service areas represent genuine differentiation versus table stakes
If the analysis cannot speak to any of those questions, it is probably too descriptive. The fix is not more data. It is a sharper brief at the start, one that specifies what decisions the analysis needs to inform before the research begins.
Effective competitor analysis is one component of a broader strategic research practice. If you are building or reviewing that practice, the resources in the Market Research and Competitive Intel hub cover the full range of methods and frameworks worth understanding.
A Note on Tools and Their Limits
The market for competitive intelligence tools has grown considerably. Traffic estimation platforms, ad transparency libraries, content gap analysers, social listening tools. They are useful. They are also a perspective on reality, not reality itself.
Traffic estimates from third-party tools can be significantly wrong, particularly for smaller or more niche businesses. Paid search data shows what ads are running, not which ones are working. Social listening captures volume and sentiment at a surface level, but misses context and nuance that a human reading the same content would catch.
I have seen planning decisions made on the basis of tool-generated data that turned out to be materially inaccurate. Not because the tools are bad, but because the team treated the output as fact rather than as an approximation that needed triangulation with other sources.
Use the tools. But verify the most important findings through direct observation, customer conversations, or first-party data before building strategy on them. The cost of triangulation is low. The cost of a strategy built on bad data is not.
Platforms like Optimizely are a useful illustration of how enterprise software positions itself in a competitive category. Looking at how a market leader structures its messaging and product narrative is itself a form of competitive intelligence, regardless of whether you are in that sector.
What Good Competitor Analysis Looks Like in Practice
To make this concrete: a business in a competitive professional services category asked us to help them understand why their conversion rate from qualified leads was lower than it should have been, given the quality of the product. The instinct from the internal team was that the issue was pricing. Our hypothesis was different.
We ran a structured competitor analysis focused specifically on the decision stage of the buyer experience. We looked at how competitors presented proof, handled objections, and structured their sales conversations. We reviewed what buyers said in forums and review sites about how they had made their final decision in this category.
The finding was not about pricing. It was about the absence of a specific type of social proof at the decision stage. Competitors who were winning more consistently were providing case studies in a format that mapped directly to the buyer’s own situation, rather than generic success stories. The business we were working with had strong case studies, but they were structured around the vendor’s process rather than the buyer’s outcome.
Changing the format and framing of the case study content, without changing the pricing or the product, produced a measurable improvement in close rate within a single quarter. That is what competitor analysis is supposed to do. Not produce a slide. Produce a decision that changes performance.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
