Web Page Rankings: What the Numbers Are Telling You
Checking web page rankings means finding out where specific pages on your site appear in search engine results for target keywords. Most marketers do it. Far fewer know what to do with what they find.
Rankings are a signal, not a verdict. A page sitting in position 8 might be doing more commercial work than a page in position 2, depending on click-through rates, search intent alignment, and what happens after the click. The number on its own tells you almost nothing. The context around it tells you everything.
Key Takeaways
- Rankings are a directional signal. Without click-through rate, search volume, and conversion context, a position number is just a number.
- Tools like Google Search Console, Semrush, and Ahrefs each give you a different slice of reality. None gives you the full picture on its own.
- Ranking for the wrong keywords is a common and expensive problem. Traffic that doesn’t convert isn’t an asset.
- Pages stalling between positions 5 and 15 are often the highest-leverage opportunity in a site’s organic search performance.
- The goal is not to rank. The goal is to generate demand that turns into revenue. Rankings are one step in that chain, not the destination.
In This Article
- Why Most Marketers Check Rankings the Wrong Way
- How to Check Web Page Rankings: The Tools That Matter
- Google Search Console
- Semrush and Ahrefs
- What Ranking Data Is Actually Telling You
- The Keyword Fit Problem
- Ranking Volatility: When to Worry and When to Ignore It
- Connecting Rankings to Business Outcomes
- A Practical Ranking Review Process
- The Competitive Dimension
- What Good Ranking Practice Actually Looks Like
Why Most Marketers Check Rankings the Wrong Way
When I was running an agency, I had a client who called every Monday morning to ask where they ranked for their primary keyword. Not what traffic looked like. Not what revenue was doing. Just the ranking. Position 4 one week, position 6 the next, and they were in a state. Position 3 and they were ecstatic. The keyword had 1,200 monthly searches and almost no commercial intent. We were burning real time managing their emotional response to a number that was barely connected to their business outcomes.
This is more common than most agencies will admit. Rankings are visible, easy to screenshot, and satisfying to report. They create the appearance of progress. But ranking position is a proxy metric, and like all proxy metrics, it only matters if it’s genuinely correlated with the thing you actually care about.
The better question isn’t “where do we rank?” It’s “are the pages that rank driving meaningful traffic, and is that traffic doing anything useful once it arrives?”
How to Check Web Page Rankings: The Tools That Matter
There are three tools I’d put in front of any marketing team doing serious ranking work. Each has a different strength, and using them in combination gives you a more honest read than relying on any single source.
Google Search Console
This is the first place to look, and it’s free. Search Console shows you the queries your pages are actually ranking for, the average position, the impressions, and the click-through rate. The average position figure is a mean across all the positions Google served your page in during the selected period, which means it can look better than reality if you rank well for low-volume queries and poorly for high-volume ones. You need to look at the data by query, not just the aggregate.
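To make that concrete, here's a toy calculation. The queries and numbers below are invented, but they show how an unweighted average of per-query positions can look healthier than where your impressions actually sit:

```python
# Hypothetical per-query data for one page, as you'd see it in a
# Search Console export: (query, impressions, avg position for that query)
queries = [
    ("long tail query a", 40, 2.1),   # ranks well, few impressions
    ("long tail query b", 60, 3.4),   # ranks well, few impressions
    ("head term", 5000, 18.0),        # ranks poorly, most impressions
]

# An unweighted mean of the per-query positions looks healthy...
simple_avg = sum(pos for _, _, pos in queries) / len(queries)

# ...but weighting by impressions shows where searchers actually see you.
total_impressions = sum(imp for _, imp, _ in queries)
weighted_avg = sum(imp * pos for _, imp, pos in queries) / total_impressions

print(f"unweighted mean position:    {simple_avg:.1f}")
print(f"impression-weighted position: {weighted_avg:.1f}")
```

The gap between the two numbers is the reason to read the report query by query rather than trusting a single headline figure.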
What Search Console does that no third-party tool can replicate is show you real query data from Google’s own index. It also shows you impressions, which is critical. A page with 5,000 impressions and a 2% click-through rate has a very different problem than a page with 200 impressions and a 15% click-through rate. The first needs a better title tag or meta description. The second probably needs more links and authority before it moves.
The performance report filtered by page is where most of the useful work happens. Pull a specific URL, look at the queries it ranks for, sort by impressions descending, and you’ll quickly see whether the page is ranking for what you intended, or whether it’s drifting toward tangential queries that don’t serve your commercial objectives.
Semrush and Ahrefs
Both of these tools crawl search results independently and build their own databases of ranking data. They’re useful for competitive intelligence, keyword tracking, and spotting opportunities Search Console won’t surface because you’re not yet ranking for them.
Semrush’s Position Tracking tool lets you monitor specific keywords over time, track competitors side by side, and see visibility scores across a keyword set. Ahrefs does similar work with its Rank Tracker. Neither is perfectly accurate, and both will show you slightly different numbers than Search Console for the same page. That’s not a flaw; it’s the nature of how they work. They sample data differently and update at different intervals.
I’ve seen marketers get into arguments about which tool is “right.” That’s the wrong argument. These tools are a perspective on reality, not reality itself. Use them to spot trends and patterns, not to obsess over individual position fluctuations. A page moving from position 7 to position 9 in Semrush’s tracker is not a crisis. A page that has lost 40% of its impressions in Search Console over 90 days is worth investigating.
If you’re looking at growth hacking tools and broader organic growth frameworks, Semrush’s own breakdown of growth tools is worth a read for context on how ranking data fits into a wider growth stack.
What Ranking Data Is Actually Telling You
Here’s where most ranking analysis goes wrong. Marketers pull a report, see where they rank, and then either celebrate or panic. What they rarely do is ask why the page ranks where it does, and what that implies about what to do next.
Rankings are an output of several inputs: the quality and relevance of the content, the authority of the page and domain, the technical health of the page, and how well the page matches search intent. When a page isn’t ranking where you want it to, one or more of these inputs is underperforming. The ranking position is the symptom. The diagnosis requires looking at the inputs.
There’s a useful diagnostic framework here. Sort your tracked pages into three buckets. Pages ranking in positions 1 to 3: protect and monitor. Pages ranking in positions 4 to 15: highest leverage, most likely to move with targeted effort. Pages ranking below 15: assess whether the keyword is worth pursuing at all before investing further.
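If you track positions in a spreadsheet or script, the sort is trivial. A minimal sketch, with hypothetical pages and positions:

```python
# Hypothetical tracked pages with their current average positions.
pages = {
    "/pricing": 2.3,
    "/guides/keyword-research": 7.8,
    "/features/reporting": 12.5,
    "/blog/old-announcement": 34.0,
}

def bucket(position: float) -> str:
    """Sort a page into one of the three diagnostic buckets."""
    if position <= 3:
        return "protect and monitor"
    if position <= 15:
        return "highest leverage"
    return "assess keyword value first"

for url, pos in sorted(pages.items(), key=lambda kv: kv[1]):
    print(f"{url:30s} pos {pos:5.1f} -> {bucket(pos)}")
```

The thresholds are the ones from the framework above; tune them to your own data if your visibility cliff sits somewhere different.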
The middle bucket is where most of the commercial opportunity sits. A page moving from position 8 to position 3 for a keyword with meaningful volume can double or triple organic traffic to that page. That’s a different return on effort than trying to push a page from position 22 to position 15.
This connects to something I observed repeatedly when I was growing the agency from around 20 people to over 100. Early on, we chased rankings across too many keywords for too many clients, treating all positions as equally worth fighting for. The teams that got real traction were the ones who learned to prioritise ruthlessly, focusing effort on the pages closest to the visibility threshold where traffic actually materialises. Spreading effort thin across 200 keywords produces worse outcomes than concentrating it on the 20 that matter most.
The Keyword Fit Problem
Ranking for the wrong keywords is one of the most common and least discussed problems in organic search. A page can rank well and still be a commercial failure if it’s attracting the wrong audience.
I spent a number of years earlier in my career overvaluing lower-funnel signals. Traffic that looked qualified on paper, keywords that seemed relevant, rankings that looked solid in reports. What I came to understand over time is that a lot of what gets credited to organic search performance was going to happen anyway. Someone who already knew the brand, already had intent, would have found their way regardless. Real growth comes from reaching people who weren’t already looking for you. That requires ranking for keywords that put you in front of new audiences, not just capturing existing demand.
This is why keyword intent analysis matters as much as position. A page ranking in position 2 for an informational query from someone who will never buy is less valuable than a page ranking in position 6 for a transactional query from someone who is actively evaluating options. The ranking check needs to include an intent check.
Understanding how ranking strategy connects to broader go-to-market thinking is something I cover in more depth in the Go-To-Market and Growth Strategy hub. Organic search doesn’t exist in isolation from the rest of your commercial strategy, and treating it as a standalone channel is one of the reasons ranking improvements often fail to show up in revenue.
Ranking Volatility: When to Worry and When to Ignore It
Search rankings fluctuate. This is normal. Google runs continuous experiments, adjusts its algorithms regularly, and personalises results based on location, device, and search history. A page that ranks in position 4 on a desktop browser in one city may rank in position 7 on mobile in another. The tools that track rankings average this out, which means day-to-day movements of one or two positions are almost always noise.
What’s worth paying attention to is sustained directional movement over weeks, not daily fluctuations. A page that has drifted from position 5 to position 12 over six weeks is telling you something. A page that bounced between 4 and 6 over the same period is not.
Significant drops across multiple pages simultaneously usually point to one of three things: a broad algorithm update, a technical issue affecting crawlability or indexing, or a competitor who has substantially improved their content in that space. Each requires a different response. Algorithm updates often self-correct if your content is genuinely strong. Technical issues need to be fixed immediately. Competitive improvements require you to raise the quality of your own content.
One thing I’d caution against is the reflexive panic response to ranking drops that I’ve seen in too many client meetings. A drop in rankings is data. It’s not a verdict. The appropriate response is to investigate before acting, not to immediately commission a content rewrite or a link-building campaign. Misdiagnosing the cause of a ranking drop and applying the wrong fix wastes time and can make things worse.
Connecting Rankings to Business Outcomes
The question that should sit above all ranking analysis is: what does this mean for the business? Not what does it mean for the channel, not what does it mean for the SEO score, but what does it mean for revenue, pipeline, or whatever commercial metric the business actually cares about.
This requires connecting ranking data to traffic data, traffic data to conversion data, and conversion data to revenue data. Most marketing teams have the data to do this. Fewer actually do it, because it requires joining up systems that often sit in different platforms managed by different people.
When I judged the Effie Awards, one of the things that consistently separated the entries that won from the ones that didn’t was the quality of the measurement chain. The winning cases could trace a clear line from activity to outcome. Not perfectly, not with false precision, but with honest approximation. The entries that struggled were the ones where the evidence chain broke down somewhere between the channel metric and the business result. Rankings are a channel metric. They need to connect to something downstream before they mean anything commercially.
The Vidyard piece on why go-to-market feels harder touches on something relevant here: the increasing fragmentation of the buyer experience means that attributing outcomes to specific channels, including organic search, is genuinely difficult. That’s not an excuse to stop trying. It’s a reason to build better measurement frameworks rather than relying on single-channel metrics like rankings as proxies for success.
BCG’s work on go-to-market strategy makes a related point about the need for commercial alignment across functions. Organic search strategy is not exempt from this. Rankings need to be in service of commercial objectives, not a parallel activity running on its own logic.
A Practical Ranking Review Process
If you want a process that produces useful decisions rather than just interesting data, here’s how I’d structure a ranking review.
Start with Search Console. Pull the last 90 days of performance data. Filter by page, sort by impressions. Identify the top 20 pages by impressions and look at their average position and click-through rate. Flag any page where impressions are high but click-through rate is below 3%, because that’s usually a title tag or meta description problem, not a ranking problem. Flag any page where average position has moved more than 5 places in either direction over the period.
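For teams that script their reviews, those flagging rules are a few lines of code. This is a sketch against hypothetical export data; the column names and thresholds here are assumptions, so adapt them to whatever your export actually produces:

```python
# Hypothetical rows from a Search Console export, one per page, with the
# current 90-day average position and the position from the prior period.
rows = [
    {"page": "/pricing", "impressions": 8000, "clicks": 120,
     "avg_position_now": 4.2, "avg_position_prior": 3.9},
    {"page": "/guides/setup", "impressions": 6500, "clicks": 90,
     "avg_position_now": 11.0, "avg_position_prior": 5.5},
]

flags = []
for r in rows:
    ctr = r["clicks"] / r["impressions"]
    # High impressions but CTR under 3%: usually a title/meta problem.
    if r["impressions"] > 1000 and ctr < 0.03:
        flags.append((r["page"], "low CTR: likely title/meta problem"))
    # Average position moved more than 5 places in either direction.
    if abs(r["avg_position_now"] - r["avg_position_prior"]) > 5:
        flags.append((r["page"], "position moved more than 5 places"))

for page, reason in flags:
    print(f"{page}: {reason}")
```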
Then go to your rank tracker, whether that’s Semrush, Ahrefs, or another tool. Look at the same pages and cross-reference the position data. Don’t expect the numbers to match exactly. Look for alignment in the direction of movement. If both tools show a page declining, that’s more reliable than one tool showing a drop and the other showing stability.
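Checked programmatically, that directional comparison looks something like this. The deltas are hypothetical; negative means the page improved over the period:

```python
# Hypothetical 90-day position changes for the same pages in two tools.
tracker_delta = {"/pricing": +4, "/guides/setup": -2}
gsc_delta = {"/pricing": +6, "/guides/setup": +1}

verdicts = {}
for url in tracker_delta:
    a, b = tracker_delta[url], gsc_delta[url]
    if a > 0 and b > 0:
        # Both sources agree the page is declining.
        verdicts[url] = "both tools show decline: investigate"
    elif a * b < 0:
        # The tools disagree on direction: treat it as noise for now.
        verdicts[url] = "tools disagree: wait for more data"
    else:
        verdicts[url] = "stable or improving"

for url, verdict in verdicts.items():
    print(f"{url}: {verdict}")
```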
Next, sort your keyword set by commercial intent. Which keywords are people searching when they’re close to a decision? Which are they searching when they’re just learning? Make sure your highest-intent keywords are getting the most attention in your ranking review. A page ranking position 12 for a high-intent keyword is more worth working on than a page ranking position 4 for a low-intent one.
Finally, connect the ranking data to your conversion data. Which pages that rank well are also converting? Which are ranking well but converting poorly? The second group is telling you there’s a disconnect between what the search query promises and what the page delivers. That’s a content alignment problem, not a ranking problem.
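If both datasets can be exported keyed by URL, the join is straightforward. A minimal sketch with hypothetical numbers, flagging pages that rank well but convert poorly:

```python
# Hypothetical per-page data from two different systems, keyed by URL.
rankings = {"/pricing": 3.1, "/guides/setup": 4.0, "/features": 6.2}
conversion_rates = {"/pricing": 0.045, "/guides/setup": 0.002, "/features": 0.03}

misaligned = []
for url, pos in rankings.items():
    cr = conversion_rates.get(url)
    if cr is None:
        continue  # no conversion data joined for this page
    # Ranks on page one but converts below 0.5%: the query is probably
    # promising something the page doesn't deliver.
    if pos <= 10 and cr < 0.005:
        misaligned.append(url)
        print(f"{url}: ranks {pos:.1f} but converts at {cr:.1%} "
              "-> check content alignment")
```

The 0.5% threshold is an arbitrary placeholder; set it relative to your own site-wide conversion rate.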
This kind of structured approach to ranking analysis is part of a broader discipline around growth strategy. If you want to think about how organic search fits into your wider commercial model, the Go-To-Market and Growth Strategy hub covers the frameworks that connect channel-level activity to business-level outcomes.
The Competitive Dimension
Rankings don’t exist in isolation. They’re relative positions in a competitive landscape that changes constantly. A page that holds position 3 for two years is not a static achievement. It’s the result of continuous competition with other pages that are also trying to rank for the same query.
Monitoring competitors’ rankings is a legitimate and useful activity, but it needs to be done with the right frame. The question is not “are they ranking higher than us?” The question is “are they ranking higher than us for keywords that matter to our business, and if so, why?” That second question leads to actionable analysis. The first leads to competitive anxiety that rarely produces good strategic decisions.
When a competitor outranks you for an important keyword, the useful investigation is into what their page does that yours doesn’t. Is it more comprehensive? Does it match search intent more precisely? Does it have more authoritative links pointing to it? Does it load faster or provide a better experience on mobile? These are diagnosable questions with actionable answers.
What I’ve found over the years is that competitive ranking gaps are rarely mysterious. They usually come down to content quality, authority, or technical performance. The marketers who close those gaps fastest are the ones who diagnose accurately and fix the right thing, rather than throwing generic SEO activity at the problem and hoping something sticks.
For context on how growth-focused teams are thinking about competitive positioning more broadly, Semrush’s examples of growth hacking in practice show how some teams approach competitive differentiation across channels, including organic search.
What Good Ranking Practice Actually Looks Like
Good ranking practice is quiet, consistent, and commercially grounded. It doesn’t involve obsessing over daily position changes. It doesn’t involve chasing rankings for keywords that don’t connect to business objectives. And it doesn’t involve treating position 1 as the only acceptable outcome for every keyword you target.
It involves knowing which pages matter most to your commercial objectives, monitoring them with appropriate frequency (weekly or fortnightly for most businesses, not daily), and connecting what you observe in the rankings to what’s happening in your traffic and conversion data.
It also involves being honest about what rankings can and can’t tell you. They’re a useful signal in a broader measurement framework. They’re not a substitute for one. A business that ranks well but isn’t growing has a problem, and it isn’t an SEO problem. And a business that ranks poorly for its most important keywords has a specific, diagnosable, fixable problem that’s worth addressing with precision.
The Forrester model of intelligent growth is relevant here. Growth that compounds over time comes from making better decisions with better information, not from optimising individual metrics in isolation. Rankings are one input into a system. Treat them accordingly.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
