Researching Competitors: What to Look For and What to Ignore

Researching competitors is the process of systematically gathering intelligence on the businesses competing for your customers, your budget, and your market position. Done well, it tells you where competitors are investing, where they are vulnerable, and where they are simply making noise. Done poorly, it produces a slide deck full of screenshots that nobody acts on.

The challenge is not finding information. There is more competitive data available today than most marketing teams know what to do with. The challenge is knowing which signals matter, which are misleading, and which are just competitors being busy rather than effective.

Key Takeaways

  • Competitor research is only useful when it connects to a decision. If it does not change what you do, it was an exercise in curiosity, not strategy.
  • Most competitive data tells you what a competitor is doing, not whether it is working. Those are very different things.
  • The strongest signals come from combining multiple sources: search behaviour, ad activity, job postings, pricing pages, and customer reviews tell a more complete story than any single tool.
  • Copying a competitor’s approach is the slowest route to differentiation. Use research to find the gaps they are leaving, not the playbook they are running.
  • Competitive intelligence programmes fail most often because they have no owner, no cadence, and no clear link to planning cycles.

If you are building out a broader research capability, the market research hub covers the full landscape, from customer insight to category analysis to competitive intelligence frameworks. This article focuses specifically on competitor research: what to look at, how to interpret it, and where most teams go wrong.

Why Most Competitor Research Produces Nothing Useful

I have sat in more competitive review sessions than I can count. The format is almost always the same: someone shares a slide showing what three or four named competitors are doing on social media, what their homepage says, and maybe a rough sense of their pricing. Then the room nods, someone says “interesting,” and the meeting ends with no decision made and no behaviour changed.

The problem is not the data. The problem is that the research was not connected to a question. Nobody defined what they were trying to decide before they started looking. So the output is descriptive rather than diagnostic. It tells you what exists, not what it means.

Good competitor research starts with a specific question. Are you trying to understand why a competitor is growing faster than you? Are you assessing whether a new market entrant poses a genuine threat? Are you looking for whitespace in messaging that you could own? The question shapes everything: which competitors you look at, which signals matter, and what you do with the output.

Without that anchor, you end up with what I call competitive tourism. Interesting to look at. Goes nowhere.

Which Competitors Are Actually Worth Researching?

Not all competitors deserve equal attention, and spreading your research effort evenly across every business in your category is a good way to learn a lot about things that do not affect your results.

I tend to think about competitors in three tiers. The first tier is direct competitors: businesses targeting the same customers with a similar proposition at a similar price point. These are the ones you track consistently. The second tier is indirect competitors: businesses solving the same problem in a different way, or targeting an adjacent segment. These matter because they define the boundaries of your category and often signal where the market is heading. The third tier is aspirational competitors: businesses that are bigger, better resourced, or operating in markets you want to enter. You look at these occasionally, for inspiration and for early warning signals, but you do not obsess over them.

The mistake I see most often is spending 80% of research time on tier-three competitors because they are more interesting to look at. A mid-size B2B software company spending three hours analysing what Salesforce is doing with its content strategy is not doing competitive intelligence. It is displacement activity dressed up as research.

Focus your effort where the competitive dynamic is actually live. That usually means your tier-one competitors, and occasionally whoever is growing fastest in your category regardless of their current size.

What Signals Actually Tell You Something

There are a handful of signal categories that consistently produce actionable intelligence. Here is how I think about each one.

Search investment and keyword strategy

Where a competitor is spending on paid search, and which terms they are bidding on, tells you a lot about their commercial priorities. If a competitor starts bidding on your brand terms, that is a direct competitive move worth responding to. If they are investing heavily in a category of keywords you have been ignoring, that is worth understanding. Are they seeing demand you have not spotted? Or are they making an expensive mistake?

Organic search is equally revealing. The topics a competitor is building content around reflect where they believe future demand is coming from. If three of your direct competitors are all publishing heavily into a specific topic cluster and you are not present there, that is either a gap you should close or a crowded space you should consciously avoid. Either way, it is a decision, not an accident.

Tools like Semrush and Ahrefs give you a reasonable approximation of this. They are not perfect. Estimated traffic figures should be treated as directional rather than precise. But the relative picture (which competitors are investing in search, in which directions, and with what apparent momentum) is genuinely useful.
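
If you want to make this concrete, a first-pass keyword gap list is a half-hour job. The sketch below assumes you have exported organic rankings for your own site and a competitor's from a tool like Semrush or Ahrefs; the file names and the 'Keyword' and 'Position' column names are illustrative and will vary by tool and export format.

```python
import csv

def load_keywords(path):
    """Load a keyword-rankings export as {keyword: position}.

    Assumes a CSV with 'Keyword' and 'Position' columns, roughly what
    Semrush/Ahrefs organic exports contain; adjust to your tool's format.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Keyword"].lower(): int(row["Position"])
                for row in csv.DictReader(f)}

ours = load_keywords("our_rankings.csv")
theirs = load_keywords("competitor_rankings.csv")

# Keywords the competitor ranks for in the top 20 where we are absent:
# a gap list to review by hand, not an automatic to-do list.
gaps = sorted(kw for kw, pos in theirs.items() if pos <= 20 and kw not in ours)

for kw in gaps[:25]:
    print(kw)
```

Everything the list surfaces still needs the judgement described above: is this a gap worth closing, or a crowded space worth consciously avoiding?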

Advertising activity and creative direction

Ad libraries have made paid social more transparent than it has ever been. Meta’s Ad Library shows you every active ad a competitor is running, how long it has been running, and in some cases the formats they are testing. An ad that has been running for three months is almost certainly performing. An ad that disappeared after two weeks probably was not.
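
Meta also exposes this data programmatically. The sketch below queries the Ad Library API for a competitor's active ads and ranks them by how long they have been running. The endpoint and field names follow Meta's Graph API documentation at the time of writing, but the API version, access requirements, and regional coverage change, so treat this as a starting point rather than a recipe.

```python
from datetime import date
import requests

ACCESS_TOKEN = "YOUR_TOKEN"  # requires a Meta developer account and app

resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "search_terms": "competitor brand name",  # or search_page_ids
        "ad_reached_countries": '["GB"]',
        "ad_active_status": "ACTIVE",
        "fields": "page_name,ad_delivery_start_time,ad_creative_bodies",
        "limit": 100,
    },
    timeout=30,
)
resp.raise_for_status()

ads = resp.json().get("data", [])
# Longevity is the signal: long-running ads are probably performing.
for ad in sorted(ads, key=lambda a: a.get("ad_delivery_start_time", "")):
    start = ad.get("ad_delivery_start_time")
    if not start:
        continue
    days_live = (date.today() - date.fromisoformat(start[:10])).days
    body = (ad.get("ad_creative_bodies") or [""])[0]
    print(f"{days_live:>4} days live: {body[:80]}")
```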

What you are looking for is not creative inspiration. You are looking for strategic signals. What problem are they leading with? What audience are they speaking to? Are they competing on price, on quality, on speed, on trust? How has their messaging shifted over the past six to twelve months? Messaging drift often signals a strategic pivot before the pivot is announced anywhere.

When I was running agency teams managing large paid media budgets, we would regularly use ad library data not to copy what competitors were doing, but to identify the claims they were not making. If every competitor in a category was leading with price, and nobody was talking about reliability or service quality, that was a signal worth testing. Not because the gap was automatically an opportunity, but because it was worth asking why nobody was standing there.

Hiring patterns and team structure

Job postings are underused as a competitive signal. A competitor hiring three performance marketing managers and two data analysts is telling you something about where they are investing. A competitor hiring a VP of Partnerships is signalling a channel shift. A competitor who has posted the same role four times in six months is signalling internal instability.

LinkedIn is the obvious place to look, but company career pages are often more complete. The job descriptions themselves are worth reading carefully. The skills they are asking for, the tools they mention, the language they use to describe the role: all of that reflects how the business is thinking about its own growth.

This is one of the most reliable leading indicators I have found. Headcount decisions are expensive and slow to reverse, which means they tend to reflect genuine strategic intent rather than short-term tactics.
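
Tracking this systematically does not require anything clever. One minimal approach, sketched below, is to capture the list of open roles on a competitor's careers page into a dated snapshot (by hand or with a scraper tailored to each site) and diff the snapshots over time. The directory layout and file format here are illustrative assumptions.

```python
import json
from pathlib import Path

# One JSON file of job titles per capture date, e.g. 2024-05-01.json
SNAPSHOT_DIR = Path("snapshots/competitor_x")

def load_snapshot(path):
    """Read one dated snapshot: a JSON list of job titles."""
    return set(json.loads(path.read_text(encoding="utf-8")))

snapshots = sorted(SNAPSHOT_DIR.glob("*.json"))
if len(snapshots) >= 2:
    old_roles = load_snapshot(snapshots[-2])
    new_roles = load_snapshot(snapshots[-1])
    print("Newly posted:", sorted(new_roles - old_roles))
    print("Closed or filled:", sorted(old_roles - new_roles))

# Titles that keep reappearing across snapshots are worth flagging:
# the same role posted repeatedly in a few months often signals churn.
```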

Customer reviews and sentiment

G2, Trustpilot, Capterra, Google reviews, and category-specific review platforms are an unfiltered window into what customers actually think about your competitors. Not what the competitor says about itself, but what real customers are saying when they have something to gain or lose from being honest.

Look for patterns rather than individual reviews. What do the one-star reviews consistently mention? What do the five-star reviews consistently praise? The gap between those two things is often where a competitor’s proposition is genuinely strong or genuinely weak. If customers consistently complain that a competitor’s onboarding is slow and confusing, and your onboarding is a strength, that is a message worth amplifying. If customers consistently praise a competitor’s customer service and yours is average, that is a vulnerability worth addressing.
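
A crude but effective way to surface those patterns is a word-frequency comparison between low and high ratings. The sketch below assumes you have collected review text and star ratings into a simple list (some platforms let you export this; otherwise a scraper can gather it); the sample data and stopword list are illustrative and deliberately minimal.

```python
from collections import Counter
import re

# Illustrative data: (star_rating, review_text) pairs.
reviews = [
    (1, "Onboarding was slow and confusing, support never replied"),
    (5, "Brilliant customer service, setup took minutes"),
    # ... hundreds more
]

STOPWORDS = {"the", "and", "was", "a", "to", "of", "in", "it", "we"}

def word_counts(texts):
    """Count content words across a collection of review texts."""
    words = re.findall(r"[a-z']+", " ".join(texts).lower())
    return Counter(w for w in words if w not in STOPWORDS and len(w) > 2)

low = word_counts(t for r, t in reviews if r <= 2)
high = word_counts(t for r, t in reviews if r >= 4)

print("One-star themes:", low.most_common(15))
print("Five-star themes:", high.most_common(15))
```

The output is not insight by itself, but it tells you where to read closely, which is most of the battle when a competitor has thousands of reviews.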

The Forrester perspective on marketing planning makes a point that holds here: customer insight should be driving strategy, not confirming it. Review mining is one of the cheapest and most honest forms of customer insight available, and most marketing teams barely look at it.

Pricing and packaging changes

Pricing page changes are worth monitoring regularly. A competitor shifting from per-seat pricing to usage-based pricing is making a commercial model decision that has implications for how they sell, who they sell to, and what their unit economics look like. A competitor introducing a free tier is usually a sign they are trying to expand their addressable market or accelerate top-of-funnel volume.

Tools like the Wayback Machine let you look at historical versions of a competitor’s pricing page. It is not glamorous research, but it is often more revealing than anything a tool will surface automatically. Pricing decisions are strategic decisions. They reflect how a business is thinking about growth, margin, and competitive positioning all at once.
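
The Internet Archive’s CDX API makes this easy to automate. The sketch below lists distinct archived captures of a competitor’s pricing page so you can review how it has changed over time; the domain is a placeholder, and the parameters follow the Archive’s public CDX documentation.

```python
import requests

resp = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={
        "url": "competitor.example.com/pricing",
        "output": "json",
        "fl": "timestamp,original",
        "filter": "statuscode:200",
        "collapse": "digest",  # skip captures identical to the previous one
        "from": "2022",
    },
    timeout=30,
)
resp.raise_for_status()

rows = resp.json()
# First row is the header; each remaining row is a distinct capture,
# so the list below is effectively a change history of the page.
for timestamp, original in rows[1:]:
    print(f"https://web.archive.org/web/{timestamp}/{original}")
```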

What Competitive Data Cannot Tell You

This is where I think a lot of competitive research goes wrong. People treat the data as a window into a competitor’s strategy when it is really just a window into their activity. Those are not the same thing.

You can see that a competitor is running a lot of display advertising. You cannot see whether it is working. You can see that they have published forty blog posts this quarter. You cannot see whether any of them are driving revenue. You can see that they have increased their headcount by 30%. You cannot see whether that growth is sustainable or whether it is being funded by a VC round that will run out in eighteen months.

I spent time judging the Effie Awards, which recognise marketing effectiveness. One thing that process teaches you quickly is that the campaigns that look most impressive from the outside are not always the ones that performed best commercially. Visibility and effectiveness are not the same metric. A competitor who is very visible might be spending inefficiently. A competitor who is quiet might be doing very precise, high-ROI work that simply does not show up in the places you are looking.

The discipline is to treat competitive data as a prompt for questions, not as a source of answers. When you see a competitor doing something unexpected, the right response is to ask why, not to copy it.

How to Build a Research Rhythm That Actually Gets Used

The biggest structural failure in competitive intelligence programmes is the absence of a cadence. Research happens in bursts, usually triggered by a competitive threat that has already materialised, and then stops. By the time the next review happens, the data is stale and the context has shifted.

What works better is a tiered monitoring rhythm. Weekly, you track fast-moving signals: new ads, significant content activity, pricing changes, major announcements. Monthly, you do a more structured review of search performance trends, hiring patterns, and sentiment shifts. Quarterly, you do a deeper synthesis that feeds into planning. Annually, you do a full competitive landscape review that challenges your assumptions about who your real competitors are and where the category is heading.
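
None of this needs heavyweight tooling. Even writing the cadence down as a plain data structure, as in the sketch below, gives you something a shared document or a reminder script can be built from, and it survives handovers. The categories simply mirror the rhythm described above.

```python
# The tiered monitoring rhythm as a checklist config: weekly for
# fast-moving signals, monthly for structured review, quarterly and
# annual for synthesis and landscape work.
MONITORING_CADENCE = {
    "weekly": ["new ads", "significant content activity",
               "pricing changes", "major announcements"],
    "monthly": ["search performance trends", "hiring patterns",
                "sentiment shifts"],
    "quarterly": ["deeper synthesis feeding into planning"],
    "annually": ["full competitive landscape review"],
}

for frequency, signals in MONITORING_CADENCE.items():
    print(f"{frequency}: {', '.join(signals)}")
```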

The other structural requirement is ownership. Competitive intelligence without a named owner becomes nobody’s job. It does not need to be a full-time role, but it needs to be someone’s explicit responsibility to maintain the cadence, synthesise the signals, and bring the output to the people who can act on it.

At one agency I ran, we built a simple competitive monitoring process into the planning cycle for every major client account. It was not sophisticated. It was a shared document, updated monthly, covering five competitors per client across six signal categories. What it produced was not perfect intelligence. But it meant that when a competitor made a significant move, we had context. We could see whether it was a new direction or a continuation of something we had already spotted. That context is what turns data into judgement.

The Difference Between Competitive Research and Competitive Obsession

There is a version of competitor research that is healthy and a version that is corrosive. The healthy version informs your strategy. The corrosive version replaces it.

I have worked with marketing teams that spent so much time watching what competitors were doing that they lost the ability to make independent decisions. Every campaign brief started with “our competitors are doing X, so we should do Y.” Every messaging discussion circled back to what somebody else had said first. The result was a permanent state of reactive positioning, always one step behind, never owning anything distinctive.

The purpose of competitive research is to make better decisions about your own strategy, not to outsource your strategy to your competitors. If the output of your research is always “do what they are doing but slightly better,” you are not doing competitive intelligence. You are doing competitive imitation, and it is a slow route to irrelevance.

The most useful competitive insight I have ever seen produced was not about what a competitor was doing. It was about what no competitor was doing. A gap in the market. A customer need that everyone was ignoring. A message that nobody had claimed. That kind of insight only comes when you are looking at the competitive landscape with the question “what is missing here?” rather than “what should we copy?”

Social listening is one area where this kind of gap-finding is particularly productive. Resources like Later’s social media glossary cover the mechanics of social monitoring, but the strategic application is simpler: watch what customers are saying about the category, not just about individual competitors. Where is the frustration? Where is the unmet expectation? That is where the opportunity tends to live.

Connecting Research to Decisions

Every piece of competitive research should end with a recommendation or a decision. Not a summary of findings. Not a list of observations. A recommendation or a decision.

If your competitive review produces a twenty-slide deck and no clear action, the research has not done its job. The job is not to inform. The job is to improve decisions. Those are related but not identical.

The framing I use is simple. For each significant competitive signal, ask three questions. What does this tell us? What does it mean for our strategy? What should we do differently as a result? If you cannot answer all three, the signal is not ready to act on yet. Keep watching, but do not pretend you have insight when you only have information.

Effective competitor research is, in the end, a discipline of translation: turning market observation into commercial judgement. The tools help. The frameworks help. But neither replaces the willingness to sit with ambiguous data and make a call. That is the part that cannot be automated, and it is the part that separates teams that use competitive intelligence well from teams that just collect it.

For more on how competitive intelligence fits into a broader research and planning process, the market research hub covers everything from customer insight methodologies to category-level analysis. Competitor research works best when it sits inside a wider understanding of your market, not as a standalone exercise.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the most important thing to look for when researching competitors?
The most useful thing to look for is not what competitors are doing, but what they are consistently not doing. Gaps in messaging, underserved customer needs, and uncontested market positions are where competitive research produces genuine strategic value. Activity monitoring tells you what exists. Gap analysis tells you what is possible.
How often should you review your competitors?
A tiered cadence works best. Fast-moving signals like ad activity and pricing changes are worth checking weekly. Search performance trends and hiring patterns suit a monthly review. A deeper strategic synthesis should happen quarterly and feed into planning. Annual landscape reviews challenge your assumptions about who your real competitors are. The exact frequency matters less than having a consistent rhythm with a named owner.
Can you tell if a competitor’s marketing is actually working from the outside?
Rarely, and this is one of the most common mistakes in competitive research. You can see what a competitor is doing, but not whether it is effective. An ad running for three months is a reasonable signal of performance. A large volume of content activity tells you very little about commercial impact. Treat competitive data as a prompt for questions rather than a source of answers about effectiveness.
Which free tools are useful for competitor research?
Meta’s Ad Library shows active paid social ads across Facebook and Instagram at no cost. Google’s Keyword Planner gives a rough sense of search demand. The Wayback Machine lets you track historical changes to competitor websites and pricing pages. LinkedIn is useful for monitoring hiring patterns. Customer review platforms like G2, Trustpilot, and Google reviews provide unfiltered sentiment data. These free sources, used consistently, often produce more actionable insight than expensive tools used sporadically.
How do you avoid becoming too focused on competitors at the expense of your own strategy?
Set a clear purpose for every competitive research exercise before you start. Define the decision you are trying to inform. If you cannot name a specific decision, the research is likely to produce observation rather than strategy. Use competitive data to pressure-test your own thinking, not to replace it. The output of good competitive research should be a sharper view of your own positioning, not a replica of someone else’s playbook.