Competitor Rankings: What the Data Tells You and What It Doesn’t
Competitor rankings are a measure of relative market position, typically assessed through search visibility, share of voice, pricing position, or customer perception scores. They tell you where you stand today. What they rarely tell you is why you’re there, or what it would actually take to change it.
That gap between the number and the insight is where most competitive analysis breaks down. Marketers pull a ranking report, see they’re sitting third or fifth or eighth, and immediately start asking how to move up. A better question is whether the ranking reflects a real competitive problem or a measurement artefact. Sometimes it’s both. Sometimes it’s neither.
Key Takeaways
- Competitor rankings are a snapshot, not a diagnosis. They show position, not the underlying commercial dynamics that created it.
- Search ranking and market ranking are not the same thing. A competitor can dominate organic results while losing on revenue, retention, and margin.
- The most dangerous competitor in any ranking report is often the one you’re not measuring against.
- Ranking movement without a clear commercial hypothesis behind it is activity, not strategy.
- The right question is not “how do we rank higher?” but “what would ranking higher actually change for the business?”
In This Article
- Why Competitor Rankings Feel More Definitive Than They Are
- The Different Types of Competitor Rankings and What Each One Measures
- How to Build a Competitor Ranking Framework That Holds Up Commercially
- The Competitors Your Rankings Report Is Probably Missing
- When Improving Your Competitor Ranking Is the Right Priority
- Making Competitor Rankings Useful in Practice
Why Competitor Rankings Feel More Definitive Than They Are
There is something psychologically satisfying about a ranked list. Position 1 is better than position 4. A share of voice of 38% is better than 22%. These numbers have the appearance of objectivity, and in a discipline where so much is contested, that feels reassuring.
The problem is that the ranking is always a function of the metric you chose to measure, the competitive set you chose to include, and the time window you happened to look at. Change any of those variables and your position changes. That doesn’t make rankings useless. It makes them contextual, and context is exactly what most ranking reports strip away.
Early in my career, I watched a client obsess over a competitor who was consistently outranking them in paid search. They restructured campaigns, increased bids, and spent months trying to close the gap. What they hadn’t done was check whether that competitor was actually converting the traffic they were capturing. When we eventually pulled the data, it turned out the competitor had a landing page conversion rate that was a fraction of our client’s. They were winning the ranking battle and losing the commercial one. The number looked like a threat. The reality was closer to an expensive lesson for someone else.
If you’re building a more rigorous approach to how you track and interpret competitive position, the broader Market Research and Competitive Intel hub covers the methodologies and frameworks that make this kind of analysis more useful in practice.
The Different Types of Competitor Rankings and What Each One Measures
Competitor rankings are not a single thing. The term gets used to describe at least four distinct types of competitive measurement, and conflating them is one of the more common analytical mistakes in marketing.
Search rankings measure organic or paid visibility in search engine results for specific keyword sets. They are a proxy for discoverability, not for market leadership. A brand can rank first for every keyword in its category and still be losing revenue share to a competitor who acquires most of its customers through referral, partnership, or direct outreach.
Share of voice measures the proportion of category-level advertising or media presence a brand owns relative to competitors. It has a reasonable correlation with market share over time, but the relationship is not mechanical and the lag can be long. Share of voice data is also heavily dependent on what channels you’re measuring. A brand that dominates TV but is invisible in digital will look very different depending on where you point the instrument.
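As a rough illustration of the definition above, share of voice is simply each brand's measured presence on a channel divided by the category total. The brands and spend figures below are hypothetical, and "spend" stands in for whatever presence metric you are measuring (impressions, mentions, ad spend).

```python
# Hypothetical share-of-voice calculation: each brand's measured media
# presence divided by the category total. Figures are illustrative,
# not real market data.

spend = {
    "Brand A": 420_000,
    "Brand B": 310_000,
    "Brand C": 180_000,
    "Our Brand": 290_000,
}

category_total = sum(spend.values())

share_of_voice = {
    brand: amount / category_total for brand, amount in spend.items()
}

# Print as a ranked list, highest share first.
for brand, sov in sorted(share_of_voice.items(), key=lambda kv: -kv[1]):
    print(f"{brand}: {sov:.1%}")
```

Note that the ranking this produces depends entirely on which channels feed the `spend` figures, which is the point made above: point the instrument at TV and digital separately and the same brands can rank very differently.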
Pricing position ranks brands by where they sit on the price spectrum within a category. This matters most in markets where price is a primary purchase driver, but even there, perceived value complicates the picture. A brand priced 20% above the category average might still be winning on volume if the customer perception of quality supports the premium.
Customer perception rankings, often derived from survey data or review aggregation, measure how customers rate brands on dimensions like quality, trust, service, or value. These are arguably the most commercially meaningful rankings because they reflect what customers actually think, rather than what an algorithm or a media buyer has decided. They are also the hardest to move quickly, which is why they tend to get less attention in quarterly reviews.
Each of these ranking types answers a different question. Using one as a stand-in for the others produces analysis that looks thorough but isn’t.
How to Build a Competitor Ranking Framework That Holds Up Commercially
The most useful competitor ranking frameworks I’ve worked with share a few structural features. They define the competitive set with intention rather than convenience. They track multiple ranking dimensions rather than one. And they connect ranking movement to a commercial hypothesis rather than treating position improvement as an end in itself.
Defining the competitive set is harder than it sounds. The instinct is to list the brands that come up most often in sales conversations or that appear in the same category on a comparison site. That’s a reasonable starting point, but it tends to produce a set that reflects the past rather than the present. Markets shift. New entrants appear. Adjacent categories start solving the same customer problem with a different product. If your competitive set hasn’t changed in two years, it’s probably wrong.
When I was running agency operations at iProspect, we grew the team from around 20 people to over 100 across a period of significant market change. One of the things that made that possible was treating the competitive landscape as a live question rather than a settled fact. The agencies we were competing against in year three were not the same ones we’d been measuring against in year one. The ones we’d been watching had either grown, contracted, or been acquired. New challengers had emerged from consulting firms and in-house teams. If we’d kept scoring ourselves against the original list, we’d have been optimising for a market that no longer existed.
Tracking multiple dimensions means accepting some analytical complexity. You will have competitors who outrank you on search but underperform on customer satisfaction. You will have competitors who are priced above you but are growing faster. These apparent contradictions are not noise to be smoothed away. They are the signal. They tell you which dimension of competitive position is actually driving commercial outcomes in your category right now.
Connecting ranking movement to a commercial hypothesis is the discipline that most teams skip. If you improve your search ranking from position 4 to position 2 for a cluster of high-intent keywords, what should happen to qualified traffic? What should happen to conversion volume? What should happen to revenue? If you can’t answer those questions before you start the work, you won’t be able to evaluate whether the work succeeded when it’s done. You’ll just have a better number with no way to know whether it mattered.
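The questions above can be turned into a back-of-envelope model before any work starts. Everything in the sketch below is an assumption to be replaced with your own data: the search volume, the click-through rates at each position, the conversion rate, and the order value are all illustrative numbers, not benchmarks.

```python
# Back-of-envelope model for a ranking-improvement hypothesis.
# All inputs are illustrative assumptions; swap in your own keyword,
# CTR, and conversion data before using this to justify real spend.

monthly_searches = 20_000               # combined volume for the keyword cluster
ctr_by_position = {2: 0.12, 4: 0.05}    # assumed organic CTR at each position
conversion_rate = 0.03                  # assumed landing-page conversion rate
avg_order_value = 150.0                 # assumed revenue per conversion

def monthly_revenue(position: int) -> float:
    """Expected monthly revenue if we hold the given ranking position."""
    clicks = monthly_searches * ctr_by_position[position]
    conversions = clicks * conversion_rate
    return conversions * avg_order_value

uplift = monthly_revenue(2) - monthly_revenue(4)
print(f"Hypothesised monthly revenue uplift: ${uplift:,.0f}")
```

The value of writing the hypothesis down like this is not precision. It is that you now have a number to test against once the ranking moves, which is exactly the evaluation step the paragraph above says most teams skip.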
The Competitors Your Rankings Report Is Probably Missing
Standard competitor ranking reports tend to measure the brands you already know about. That’s a structural problem because the most commercially significant competitive threats often come from outside the conventional category definition.
When I was at lastminute.com, the competitive frame for travel was other online travel agencies. That was where the ranking attention went. What the ranking reports were slower to capture was the growing influence of metasearch and the way it was changing the economics of customer acquisition across the whole category. The threat wasn’t a competitor in the traditional sense. It was a structural shift in how the market worked. Ranking reports that only measured brand-versus-brand position weren’t built to see that.
The same pattern appears in almost every mature category. The competitor that eventually takes significant market share is often one that existing players weren’t measuring against because it didn’t look like a competitor yet. It looked like a different business model, or a different customer segment, or a different channel. By the time it showed up in the standard ranking reports, the strategic window had already narrowed.
A more useful approach is to periodically ask what problem your product solves and then map every way a customer could solve that problem, including ways that don’t involve buying from anyone in your current competitive set. That exercise tends to surface competitors that the standard tools miss, and it often reveals ranking gaps that matter more commercially than the ones you’re already tracking.
Tools like Hotjar’s survey functionality can be useful here. Asking customers directly how they found you, what else they considered, and what nearly made them choose differently produces competitive intelligence that no ranking report generates. It’s qualitative, it’s messy, and it’s often more accurate than the quantitative alternative.
When Improving Your Competitor Ranking Is the Right Priority
Not every ranking gap is worth closing. This sounds obvious, but in practice it is a difficult position to hold, particularly when leadership is watching the numbers and the competitor in second place is visibly gaining ground.
The case for prioritising ranking improvement is strongest when three conditions are met. First, the ranking dimension in question has a demonstrated connection to commercial outcomes in your specific category. Second, the gap between your position and the competitor’s position is large enough to be affecting customer acquisition or retention in a measurable way. Third, you have a credible path to closing the gap that doesn’t require more resource than the commercial return justifies.
If those three conditions aren’t met, improving the ranking is likely to be activity rather than strategy. It will consume time and budget and produce a better number without producing a better business.
I’ve judged the Effie Awards, which recognise marketing effectiveness rather than creative execution. The entries that stand out are not the ones that moved a ranking metric. They’re the ones that connected a clear commercial problem to a specific marketing response and then demonstrated, with evidence, that the response worked. Competitor rankings feature in some of those entries, but they feature as context, not as the goal. The goal is always a business outcome. The ranking is one indicator of whether you’re moving toward it.
There’s also a resource allocation question embedded here. Closing a ranking gap in a highly competitive space is expensive. The question is whether that resource is better deployed improving your ranking or strengthening something else that drives commercial performance: product quality, customer experience, pricing architecture, or distribution reach. Rankings are one lever. They’re not the only one, and they’re rarely the cheapest one to pull.
Making Competitor Rankings Useful in Practice
The practical challenge with competitor rankings is turning them from a reporting artefact into something that informs decisions. Most teams produce ranking reports regularly. Fewer teams use them well.
A few things tend to make the difference. First, agree on which ranking dimensions matter for your specific commercial situation before you start measuring. Different businesses in different categories at different growth stages will have different answers. A business trying to enter a new market has different ranking priorities than a business defending an established position. Deciding what to measure after you’ve seen the data is a form of confirmation bias dressed up as analysis.
Second, build competitor ranking review into a decision cycle rather than a reporting cycle. The question at the end of a ranking review shouldn’t be “where are we?” It should be “what are we going to do differently as a result of this?” If the answer is consistently nothing, the review is costing time without producing value.
Third, treat ranking data as one input alongside others rather than as the primary measure of competitive health. Customer feedback, sales team intelligence, win/loss analysis, and direct observation of competitor behaviour all produce competitive insight that ranking reports don’t capture. Behavioural tracking tools can show you how visitors interact with your site relative to what you know about competitor experiences, which adds a layer of insight that pure ranking data misses.
The teams I’ve seen use competitive intelligence most effectively are the ones that treat it as a continuous practice rather than a quarterly exercise. They’re watching the market between formal review cycles, updating their competitive set when something changes, and asking commercial questions rather than positional ones. That discipline doesn’t require more tools. It requires a different habit of mind.
For a broader view of how competitive ranking fits into the wider practice of market research, the Market Research and Competitive Intel hub covers the full range of methodologies, from primary research through to share of voice analysis and competitive positioning frameworks.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
