Competitive Landscape Research: What Most Teams Get Wrong

Competitive landscape research is the process of systematically mapping who you compete with, how they position themselves, where they invest, and what that tells you about the market you are operating in. Done well, it shapes strategy. Done badly, it produces a slide deck that gets presented once and never opened again.

Most teams do it badly. Not because they lack tools or data, but because they start with the wrong question. They ask “what are our competitors doing?” when they should be asking “what does competitor behaviour tell us about where this market is going?”

Key Takeaways

  • Competitive landscape research is only useful if it informs a decision. If it ends as a slide, it has failed.
  • Most competitor analysis focuses on surface signals like creative and messaging, and misses the structural signals like pricing, distribution, and category investment patterns.
  • The most revealing competitive data is often free and hiding in plain sight: job postings, agency relationships, planning applications, and earnings calls.
  • A competitor moving aggressively into a channel is not always a signal to follow. It may be a signal that the channel is about to get expensive or crowded.
  • Competitive research should feed a live planning process, not a quarterly report that arrives too late to change anything.

Why Most Competitive Research Produces the Wrong Output

When I was running agencies, I sat through hundreds of competitive reviews. The format was almost always the same: a grid showing competitor A, B, and C across a set of criteria, some screenshots of their ads, a note about their social following, and a conclusion that amounted to “they are doing a lot of digital.” Occasionally someone had pulled their estimated media spend from a syndicated tool and presented it as fact.

The problem was not the effort. The problem was that nobody had asked what decision the research was supposed to support. Competitive analysis without a decision at the end of it is just surveillance. It feels productive. It rarely is.

Before you commission or run any competitive research, write down the decision you are trying to make. Are you deciding whether to enter a new category? Whether to reposition against a specific competitor? Whether your pricing is defensible? The research methodology, the data sources, and the level of depth all change depending on the answer. A positioning decision needs different intelligence than a media planning decision.

If you are building out a broader market intelligence capability, the Market Research and Competitive Intel hub on this site covers the full range of research methods, from audience analysis to trend mapping, and is worth reading alongside this article.

How Do You Actually Map a Competitive Landscape?

Mapping a competitive landscape starts with defining your competitive set correctly, and most teams get this wrong from the start. They list their direct competitors and stop there. But the most dangerous competition is often indirect: a different category solving the same customer problem, a new entrant with a different business model, or a platform that is quietly disintermediating the whole sector.

I think about competitive sets in three rings. The first ring is direct competitors: same product, same customer, same occasion. The second ring is indirect competitors: different product, same customer need. The third ring is category disruptors: businesses that are redefining what the category even is. You need intelligence on all three, but the depth of analysis should differ. Most of your operational decisions will be shaped by ring one. Most of your strategic risk sits in rings two and three.
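The three-ring framing can be kept as a lightweight data structure rather than a slide. The sketch below is illustrative only — the competitor names and category are invented, and the depth rule simply encodes the principle that ring one gets operational depth while rings two and three get trend-level attention:

```python
from dataclasses import dataclass
from enum import Enum

class Ring(Enum):
    DIRECT = 1      # same product, same customer, same occasion
    INDIRECT = 2    # different product, same customer need
    DISRUPTOR = 3   # redefining what the category even is

@dataclass
class Competitor:
    name: str
    ring: Ring
    notes: str = ""

# Hypothetical competitive set for a meal-kit brand
landscape = [
    Competitor("RivalKit", Ring.DIRECT, "same product, same occasion"),
    Competitor("QuickGrocer", Ring.INDIRECT, "solves the same dinner problem"),
    Competitor("AIChefApp", Ring.DISRUPTOR, "redefines home cooking"),
]

def depth_of_analysis(ring: Ring) -> str:
    """Ring one shapes operational decisions; rings two and three carry strategic risk."""
    return "deep, continuous" if ring is Ring.DIRECT else "lighter, trend-focused"

for c in landscape:
    print(f"{c.name}: ring {c.ring.value} -> {depth_of_analysis(c.ring)}")
```

The point of structuring it this way is that the set becomes something you maintain and query, not something you rebuild each quarter.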

Once you have defined the set, you need to decide what dimensions to map across. I use six: positioning and messaging, product or service structure, pricing architecture, distribution and channel strategy, media investment and channel mix, and organisational signals. That last one gets overlooked almost universally, and it is often the most revealing.

What Are the Best Data Sources for Competitive Intelligence?

There is a tendency to reach immediately for paid tools when doing competitive research. SimilarWeb for traffic estimates, SEMrush or Ahrefs for search visibility, Pathmatics or Nielsen for media spend. These tools have their place, but they are proxies. They are someone else’s model of reality, not reality itself. I have seen media spend estimates from syndicated tools that were off by a factor of three. Treat them as directional signals, not ground truth.

The most underused competitive data sources are free and hiding in plain sight. Job postings tell you where a competitor is investing before the investment shows up anywhere else. If a direct competitor posts fifteen data engineering roles in a quarter, they are building a data capability. If they are suddenly hiring performance marketing managers in three new markets, they are expanding. Organisational moves precede market moves by six to twelve months in most cases.
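One way to make the job-posting signal operational is to count a competitor's postings by function per quarter and flag spikes. The data and the threshold below are hypothetical — in practice you would feed in postings scraped or logged from LinkedIn or the competitor's careers page:

```python
from collections import Counter

# Hypothetical logged postings for one competitor: (quarter, function)
postings = [
    ("2024-Q1", "data engineering"), ("2024-Q1", "data engineering"),
    ("2024-Q2", "data engineering"), ("2024-Q2", "data engineering"),
    ("2024-Q2", "data engineering"), ("2024-Q2", "performance marketing"),
    ("2024-Q2", "performance marketing"), ("2024-Q2", "performance marketing"),
]

counts = Counter(postings)

def spikes(counts: Counter, threshold: int = 3) -> list[tuple[str, str]]:
    """Flag (quarter, function) pairs at or above the threshold — a crude leading indicator."""
    return sorted(k for k, v in counts.items() if v >= threshold)

print(spikes(counts))
# → [('2024-Q2', 'data engineering'), ('2024-Q2', 'performance marketing')]
```

A flagged function is a prompt for investigation, not a conclusion: the analysis is what the hiring implies about where the investment lands six to twelve months later.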

Earnings calls and investor presentations are another goldmine, particularly for publicly listed competitors. Executives talk in detail about strategic priorities, category dynamics, and market share in formats that are designed to be honest because they are legally required to be. The language is corporate but the signal is real. I have pulled more useful competitive intelligence from a forty-minute earnings call transcript than from a full syndicated competitive report.

Planning applications, trade press, agency relationship announcements, and conference speaking slots all add texture. When a competitor hires a new agency, that is a signal that their strategy is about to change. When their CMO speaks at a retail conference for the first time, that may signal a channel shift. None of these signals is definitive on its own. Assembled together, they form a picture.

For search-based competitive intelligence specifically, organic search visibility is one of the most reliable public signals of content and SEO investment. A competitor with rapidly growing organic visibility is either investing heavily in content or has made a structural change to their site architecture. Either way, it is worth understanding why. Tools like Ahrefs and SEMrush are genuinely useful here, as long as you treat the absolute numbers with appropriate scepticism and focus on directional trends instead.

How Do You Read Competitor Positioning Without Getting Distracted by Creative?

This is where a lot of competitive analysis loses the plot. Teams spend hours cataloguing competitor ads, social posts, and campaign creative, and end up with a mood board rather than a strategic insight. Creative is the output of a positioning decision. If you want to understand the positioning, you need to get behind the creative to the underlying claim.

The question to ask about any competitor’s messaging is: what is the single thing they most want their target customer to believe about them? Not what they say, but what they want the customer to hold in their mind. That is their positioning. Once you have identified it for each competitor, you can map the territory. Where is it crowded? Where is it empty? Where are competitors making claims that the market does not currently believe?

I judged the Effie Awards for several years, which meant reading hundreds of case studies where brands had to demonstrate that their positioning actually changed behaviour, not just prompted awareness. The cases that failed most consistently were the ones where the brand had chosen a positioning that was either identical to a competitor or so generic it could have applied to anyone in the category. “Quality you can trust.” “Always there for you.” These are not positions. They are placeholders.

When you are mapping competitor positioning, look for the white space: the credible claim that no competitor currently owns and that your target customer actually cares about. That is where differentiated positioning lives. The BCG analysis of how discounters reshaped the grocery market is a good illustration of what happens when a new entrant identifies and occupies white space that incumbents have dismissed or ignored. The incumbents had done competitive analysis. They just drew the wrong conclusions from it.

What Does Competitor Media Behaviour Actually Tell You?

Media investment patterns are one of the most misread signals in competitive analysis. The instinct is to treat a competitor’s channel mix as a recommendation: they are on YouTube, therefore we should be on YouTube. This is backwards. A competitor’s channel choice tells you something about their strategy and their audience, but it does not tell you whether the channel is working for them, whether it is already saturated, or whether it is appropriate for your objectives.

Early in my career at lastminute.com, I ran a paid search campaign for a music festival that generated six figures of revenue in roughly a day from a relatively straightforward setup. The channel was underpriced, the intent was clear, and the competition was thin. That same campaign run two years later, when every competitor had figured out the same playbook, would have been a fraction as efficient. Channel timing matters as much as channel choice. A competitor moving aggressively into a channel may be signalling that it works. Or it may be signalling that it is about to get expensive.

What media analysis should tell you is: where is the category investing, is that investment growing or contracting, and is there a channel where your competitors are absent that your audience uses? The absence is often more interesting than the presence. If a competitor with strong brand awareness is investing almost nothing in paid search, that may represent an opportunity. Or it may mean they have tested it and found it does not work in this category. You need to know which.

Share of voice relative to share of market is a useful lens here. Brands that over-invest in share of voice relative to their current market share tend to grow. Brands that under-invest tend to shrink. Tracking this ratio across your competitive set over time gives you a forward-looking signal about who is likely to gain ground and who is likely to lose it.
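That ratio reduces to a single number per brand: excess share of voice, or share of voice minus share of market. A minimal sketch, with invented figures, shows how tracking it across a competitive set surfaces the forward-looking signal:

```python
# Hypothetical category data: (brand, share_of_voice_%, share_of_market_%)
brands = [
    ("Us",      18.0, 15.0),
    ("Rival A", 10.0, 22.0),
    ("Rival B", 25.0, 20.0),
]

def excess_sov(sov: float, som: float) -> float:
    """Positive ESOV tends to precede share growth; negative ESOV, decline."""
    return round(sov - som, 1)

for name, sov, som in brands:
    esov = excess_sov(sov, som)
    signal = "likely to gain ground" if esov > 0 else "likely to lose ground"
    print(f"{name}: ESOV {esov:+.1f}pp -> {signal}")
```

Run over time rather than as a snapshot, the same calculation tells you whether a competitor's under-investment is a blip or a sustained retreat.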

How Do You Turn Competitive Research Into a Decision?

This is the step that most competitive research never reaches. The data gets collected, the analysis gets done, the presentation gets made, and then everyone nods and moves on. Nothing changes. No decision gets made. The research becomes a record of what competitors were doing at a point in time, not an input to strategy.

The discipline I have found most useful is to end every competitive analysis with three things: what this changes about our current plan, what it confirms, and what we need to watch. Not recommendations in the abstract. Specific changes to specific decisions. If the research does not change anything, either the research was not deep enough or the strategy is already optimal. In my experience, it is usually the former.

When I was growing an agency from around twenty people to over a hundred, competitive intelligence was not a quarterly exercise. It was continuous and operational. We tracked competitor hiring, monitored their client wins and losses, read every piece of trade press, and maintained a live view of where the market was moving. That intelligence shaped which capabilities we built, which sectors we prioritised, and how we priced. It was not academic. It was commercial.

For competitive research to feed decisions, it needs to be connected to a planning cycle with enough lead time to actually influence the plan. Intelligence that arrives after the budget has been set is interesting but not useful. Build the research cadence around your planning calendar, not around when someone has bandwidth to do it.

What Are the Common Traps in Competitive Landscape Research?

Confirmation bias is the most dangerous. When you already have a strategic preference, competitive research has a way of finding evidence that supports it. The competitor data gets filtered through the lens of what you already believe, and the result is a report that validates a decision that was already made. This is not analysis. It is post-rationalisation with a slide deck attached.

The antidote is to assign someone the specific job of arguing against the prevailing view. Before any competitive review, write down the conclusion you expect to reach. Then task someone with finding the evidence that contradicts it. This is not a popular exercise, but it is a revealing one.

The second trap is recency bias. Competitive analysis tends to over-index on what competitors are doing right now and under-index on the trajectory. A competitor who has been declining for three years but just launched a new campaign is not necessarily a threat. A competitor who has been quietly building capability in a new channel for eighteen months is. The trend matters more than the current state.

The third trap is mistaking activity for strategy. Competitors who are very visible, who post constantly, who launch frequently, are not necessarily executing a coherent strategy. Sometimes high activity is a sign of strategic confusion rather than strategic strength. Do not confuse noise with signal. The question is not what they are doing, but why, and whether it is working.

There is a useful parallel here with how attention actually works in content: the things that get noticed are not always the things that matter. Competitive analysis has the same problem. The competitor with the loudest presence is not always the one you should be most focused on.

How Often Should You Run Competitive Landscape Research?

The answer depends on how fast your market moves, but the framing of “running” competitive research as a periodic project is itself part of the problem. The best competitive intelligence is continuous and lightweight rather than periodic and heavy. A quarterly deep-dive plus ongoing monitoring is more useful than an annual comprehensive review that takes three months to produce and is out of date by the time it lands.

Set up monitoring that runs automatically: Google Alerts for competitor brand names and key executives, RSS feeds for trade press, saved searches on LinkedIn for competitor hiring, and a shared document where the team can log observations in real time. This does not replace structured analysis, but it means that when you do sit down to do the structured work, you are not starting from scratch. You are synthesising a stream of intelligence that has been accumulating continuously.
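A minimal version of that automated layer — parsing a trade-press RSS feed and keeping only items that mention a watched competitor — can be built with the standard library alone. The feed content and watchlist below are invented so the sketch is self-contained; in practice the XML would come from `urllib.request.urlopen(feed_url).read()`:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample feed, inlined in place of a live HTTP fetch
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>RivalKit appoints new media agency</title></item>
  <item><title>Category ad spend flat in Q2</title></item>
  <item><title>RivalKit CMO to keynote retail conference</title></item>
</channel></rss>"""

WATCHLIST = {"rivalkit"}  # competitor brand names, lowercased

def flagged_items(feed_xml: str, watchlist: set[str]) -> list[str]:
    """Return item titles that mention any watched competitor."""
    root = ET.fromstring(feed_xml)
    titles = [item.findtext("title", "") for item in root.iter("item")]
    return [t for t in titles if any(name in t.lower() for name in watchlist)]

for title in flagged_items(SAMPLE_FEED, WATCHLIST):
    print("LOG:", title)  # in practice, append to the shared observation log
```

The filtering is deliberately crude; its job is to cut the stream down to items worth a human glance, not to replace the structured analysis.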

The structured analysis, the kind that produces a view of competitive positioning, share of voice, and strategic direction, should happen at least twice a year and always before a major planning cycle. If your planning cycle is annual, run the competitive review six to eight weeks before the planning process starts, not during it. You need time to absorb what you have found before you start making decisions based on it.

There is more on building research into a live planning process in the Market Research and Competitive Intel section of this site, including how to structure ongoing audience and market monitoring alongside competitive tracking.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is competitive landscape research?
Competitive landscape research is the process of systematically mapping the competitors in your market, how they are positioned, where they invest, and what their behaviour signals about the direction of the category. It covers direct competitors, indirect competitors, and potential disruptors, and is used to inform positioning, pricing, channel, and investment decisions.
What data sources should I use for competitive research?
A combination of paid tools and free sources works best. Paid tools like SEMrush, Ahrefs, and SimilarWeb provide directional signals on search visibility and traffic. Free sources, including job postings, earnings call transcripts, planning applications, trade press, and agency relationship announcements, often provide sharper strategic intelligence. Treat all estimated data as directional rather than definitive.
How do you identify white space in a competitive landscape?
Map the positioning claim each competitor most wants their target customer to hold in their mind. Once you have done this for each player in your competitive set, look for credible claims that no competitor currently owns and that your target customer genuinely cares about. That gap is white space. The key test is whether the claim is both credible for your brand and meaningful to the customer, not just unoccupied.
How often should competitive landscape research be updated?
Continuous lightweight monitoring, through alerts, RSS feeds, and shared observation logs, should run all the time. Structured competitive analysis, covering positioning, share of voice, and strategic direction, should happen at minimum twice a year and always six to eight weeks before a major planning cycle. Markets that move quickly may require more frequent structured reviews.
What is the most common mistake in competitive landscape research?
Producing research that ends as a presentation rather than a decision. Competitive analysis only has value if it changes or confirms something specific in your strategy or plan. Before starting any competitive research, define the decision it is meant to support. If you cannot name the decision, the research will almost certainly produce a document that gets filed rather than used.
