Ranking Tool: How to Prioritise Markets Before You Spend

A ranking tool for go-to-market strategy is a structured framework that scores and prioritises potential markets, segments, or opportunities against a defined set of commercial criteria. It turns subjective debate into a defensible, comparable decision. Done properly, it tells you where to focus first, where to wait, and where to walk away.

Most teams skip this step. They pick markets based on familiarity, loudest voice in the room, or whatever the last customer request was. Then they wonder why growth stalls six months in.

Key Takeaways

  • A ranking tool forces explicit trade-offs before budget is committed, not after it is spent.
  • The criteria you weight matter more than the tool itself. Wrong inputs produce confidently wrong outputs.
  • Market attractiveness and competitive position are the two axes that matter most. Everything else is context.
  • Ranking tools work best when they surface disagreement, not suppress it. The conversation is the value, not just the final score.
  • Revisit your rankings quarterly. A market that scored low in January can look completely different by Q3.

I have run this exercise with leadership teams across financial services, retail, FMCG, and technology. The output is almost never surprising. What surprises people is seeing their assumptions written down and scored, because that is when the disagreements surface. And surfacing disagreement before you spend is exactly the point.

Why Most Go-To-Market Decisions Are Made Backwards

The typical sequence goes like this: someone identifies an opportunity, makes a case for it, and the team debates whether to pursue it. Budget gets allocated. Six months later, the results are underwhelming, and the post-mortem reveals that nobody properly assessed the competitive intensity, the cost to acquire customers, or whether the segment was actually large enough to justify the investment.

I watched this pattern play out repeatedly in agency life. A client would arrive with a new geography or vertical in mind, and the instinct was to start building the plan. Decks got made. Media got planned. The ranking conversation, if it happened at all, came after the direction had already been set. By that point, it was not really a ranking exercise. It was a rationalisation exercise.

This is not a failure of intelligence. It is a failure of process. Most organisations do not have a structured way to compare opportunities against each other before committing. They have opinions, spreadsheets, and whoever shouts loudest. A ranking tool replaces that with something repeatable.

BCG’s work on commercial transformation and go-to-market strategy makes the same point: the companies that grow consistently are the ones that have disciplined frameworks for deciding where to compete, not just how to compete. Sequencing matters. Prioritisation matters. Enthusiasm without structure is just expensive.

What a Ranking Tool Actually Is

At its core, a ranking tool is a weighted scoring matrix. You define the criteria that matter for your business, assign a weight to each one, score every opportunity against those criteria, and let the maths surface the priority order. The tool itself can live in a spreadsheet, a slide, or a purpose-built platform. The format is less important than the rigour of the inputs.

The two dimensions that anchor most market ranking frameworks are market attractiveness and competitive position. Market attractiveness covers things like total addressable market, growth rate, margin potential, regulatory complexity, and customer acquisition cost. Competitive position covers your relative strengths in that market: brand awareness, distribution capability, product fit, and existing relationships.

These two axes map directly onto the classic GE-McKinsey matrix, which has been used in portfolio strategy for decades. The logic holds: you want to invest heavily where the market is attractive and your position is strong, move cautiously where one of those is weak, and avoid or exit where both are weak.

Where most teams go wrong is adding too many criteria. I have seen ranking tools with 20 or 30 variables. By the time you score everything, the weights have diluted each other to the point where the output is basically random. Keep the set to between six and ten criteria. If a factor genuinely matters, it will show up in one of those. If it does not fit, it is probably not a decision variable; it is context.

If you are thinking about how a ranking tool fits into your broader approach to market selection and growth planning, the Go-To-Market and Growth Strategy hub covers the wider framework in more depth.

How to Build the Criteria Set That Actually Matters

The criteria you choose are the most important decision in this process. Get them right and the tool does useful work. Get them wrong and you have a scoring system that produces confident-looking nonsense.

Start with the business model. A SaaS business with a high customer lifetime value and a long sales cycle will weight different factors than a direct-to-consumer brand with a 30-day payback window. The criteria need to reflect how your business actually makes money, not some generic template.

For most businesses, a workable criteria set covers five areas. First, market size and growth: is this segment large enough to matter, and is it expanding or contracting? Second, competitive intensity: how many credible players are already there, and how entrenched are they? Third, strategic fit: does this market play to your existing strengths in product, distribution, or brand? Fourth, cost to win: what does it realistically cost to acquire a customer here, and what does that do to unit economics? Fifth, time to revenue: how long before you see a return, and can the business absorb that timeline?

The weighting is where the strategic conversation happens. If speed to revenue matters more than market size right now, your weights should reflect that. If you are in a growth phase and can absorb a longer payback, weight strategic fit and market size more heavily. The weights are not permanent. They should change as your business situation changes.

Semrush’s writing on market penetration strategy is useful here for thinking about how competitive intensity and market share interact when you are deciding whether to push harder into existing markets versus entering new ones. That trade-off belongs in your criteria set.

The Scoring Process: Where Rigour Meets Reality

Once you have your criteria and weights, scoring each opportunity should be a structured conversation, not a solo exercise. The value of doing this with a cross-functional group is not efficiency. It is the disagreements it surfaces.

I remember a session with a retail client where the commercial director and the marketing director scored the same market completely differently on competitive intensity. The commercial director was thinking about direct competitors. The marketing director was thinking about share of voice and paid search costs. Both were right. Neither had been explicit about their frame of reference until we put numbers on the table.

That conversation saved them from a market entry that looked attractive on paper but had a cost-to-acquire problem that the commercial model could not absorb. The ranking tool did not make that decision. It created the conditions for the right conversation to happen.

Use a simple 1-to-5 or 1-to-10 scale. Avoid half-points in early sessions because they create false precision and slow the process down. Once everyone has scored independently, compare scores before discussing. The gaps are where the insight lives.
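The compare-before-discussing step can be sketched in a few lines. The scorers, criteria, and numbers below are hypothetical, purely to show the mechanics of surfacing the widest gaps first:

```python
# Sketch: find the criteria where independent scorers disagree most.
# All names and scores here are illustrative, not from any real session.
scores = {
    "commercial_director": {"competitive_intensity": 2, "strategic_fit": 4, "cost_to_win": 3},
    "marketing_director":  {"competitive_intensity": 5, "strategic_fit": 4, "cost_to_win": 2},
}

criteria = scores["commercial_director"].keys()

# Gap per criterion: highest score minus lowest score across all scorers.
gaps = {
    c: max(s[c] for s in scores.values()) - min(s[c] for s in scores.values())
    for c in criteria
}

# Discuss the widest gaps first; that is where the hidden assumptions live.
for criterion, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(criterion, gap)
```

Nothing about this needs software, of course; the same comparison works on a whiteboard. The point is that the gap column, not the average, is what the facilitator should read first.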

For each criterion, document the reasoning behind the score, not just the number. This matters because the tool will be revisited, and six months from now you need to understand why a market scored a 3 on competitive intensity, not just that it did. That audit trail is what makes the tool useful over time rather than just once.

What the Output Should Tell You

A ranking tool produces a priority order. But a priority order is not a strategy. The output tells you where to look first. It does not tell you how to win.

The markets that score highest on your matrix are the ones that warrant deeper investigation: customer research, competitive analysis, channel testing, unit economics modelling. The ranking narrows the field. The work that follows fills in the picture.

Markets that score in the middle are often the most interesting. They have something going for them but a clear constraint. Understanding that constraint tells you whether it is solvable. A market with strong attractiveness but weak competitive position might be worth entering if you can build a differentiated position quickly. A market with strong competitive position but limited growth potential might be worth defending but not expanding into.

Markets that score low should not automatically be discarded. Sometimes a low score reflects a timing issue rather than a structural one. A market that is too small today might be the right size in two years. Flag it, set a review date, and move on. Do not let low-scoring markets consume energy, but do not delete them from the list either.

Forrester’s intelligent growth model makes a useful distinction between markets where you are harvesting existing demand and markets where you are creating new demand. That distinction should inform how you interpret your ranking scores. A market that scores well on attractiveness might still require significant demand creation investment before it converts, and that changes the resource calculation entirely.

The Demand Creation Problem Most Ranking Tools Miss

Earlier in my career I put too much weight on lower-funnel performance signals when assessing market opportunity. If a market was generating clicks and conversions, it looked attractive. What I did not account for was that a lot of that performance was capturing demand that already existed, not creating new demand. The market looked healthy because we were good at harvesting intent. It was not actually growing.

The implication for ranking tools is this: market attractiveness should not be assessed purely on current conversion rates or existing search volume. Those metrics tell you about demand that is already formed. They do not tell you about the ceiling, or about how much of the market is still unreached.

Think about it like a clothes shop. The customer who has already tried something on is far more likely to buy than the one who has just walked through the door. But if you only measure the people at the till, you will systematically underestimate how much of the market has never engaged with you at all. A ranking tool that only looks at existing intent is making the same mistake.

Build in a criterion for untapped potential, not just current demand. Estimate what proportion of the total addressable market has never engaged with your category, not just your brand. That number changes how you weight a market’s attractiveness significantly.
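One way to fold untapped potential into the matrix is to translate an estimated "unreached share of the TAM" into the same 1-to-5 scale as the other criteria. The function name and thresholds below are assumptions for illustration, not a standard:

```python
def untapped_potential_score(unreached_share: float) -> int:
    """Map the estimated share of the TAM that has never engaged with the
    category (0.0 to 1.0) onto a 1-5 score. Thresholds are illustrative
    and should be calibrated to your own category data."""
    if not 0.0 <= unreached_share <= 1.0:
        raise ValueError("unreached_share must be between 0.0 and 1.0")
    if unreached_share >= 0.8:
        return 5
    if unreached_share >= 0.6:
        return 4
    if unreached_share >= 0.4:
        return 3
    if unreached_share >= 0.2:
        return 2
    return 1
```

The estimate feeding this will always be rough. That is fine: the purpose is to force the team to put a number on unreached demand at all, rather than letting current conversion data stand in for the whole market.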

This is particularly relevant for businesses entering new geographies or verticals. Hotjar’s work on growth loops and feedback cycles is a useful lens here: in markets where the growth loop has not yet been established, the investment required to build awareness and consideration is substantially higher than in markets where the category is already understood. Your ranking tool needs to account for that.

Common Mistakes That Undermine the Tool

The first mistake is letting the tool become a post-hoc justification. This happens when someone has already decided which market they want to enter and reverse-engineers the weights to produce the answer they want. It is more common than people admit. The solution is to agree on the criteria and weights before you score any specific market. Once the framework is set, it applies to everything equally.

The second mistake is treating the output as permanent. A ranking tool is a snapshot. Markets change. Competitors enter or exit. Regulatory environments shift. Consumer behaviour evolves. I have seen businesses make excellent market entry decisions based on a ranking exercise and then fail to revisit the rankings as conditions changed. Build a quarterly review into the process from the start.

The third mistake is scoring on aspiration rather than evidence. It is tempting to score a market highly on strategic fit because it aligns with where you want to go, rather than where you actually have capability today. Be honest about current state. The tool is only as useful as its inputs are accurate.

The fourth mistake is using the ranking tool as a substitute for customer insight. Scores on a matrix are not the same as talking to people in a market. Before you commit significant resource to a high-ranking market, get qualitative validation. The ranking narrows the field. Customer research confirms or challenges the hypothesis.

BCG’s perspective on scaling agile commercial approaches is relevant here: the organisations that scale well are the ones that build feedback loops into their decision-making processes. A ranking tool without a feedback mechanism is a one-time exercise. A ranking tool with a structured review process becomes a genuine strategic asset.

When to Use a Ranking Tool and When Not To

A ranking tool is most valuable when you have multiple genuine options and limited resource. If you are choosing between three potential market entries with a fixed budget, the tool gives you a structured way to make that call. If you have one obvious opportunity and the question is just how to execute it, the tool adds less value.

It is also valuable when the team is misaligned. I have used ranking exercises specifically to surface disagreement rather than resolve it. When a leadership team cannot agree on which markets to prioritise, putting scores on a shared framework often reveals that the disagreement is not about the markets themselves but about underlying assumptions: about competitive strength, about the cost to acquire customers, about how long the business can sustain investment before it needs a return. Making those assumptions explicit is often more valuable than the final ranking.

Where ranking tools are less useful is in highly dynamic environments where conditions change faster than the tool can be updated. In those contexts, a lighter-touch prioritisation framework, updated frequently, is more practical than a detailed weighted matrix that takes two weeks to build and is out of date by the time it is finished.

For creator-led go-to-market approaches, where speed and cultural fit matter more than structural analysis, the ranking criteria shift significantly. Later’s thinking on going to market with creators highlights how audience alignment and creator authenticity become the primary variables, which a traditional market ranking tool is not well-suited to measure. Know the limits of the framework.

Building a Ranking Tool Your Team Will Actually Use

The best ranking tool is the one that gets used consistently, not the most sophisticated one. I have seen beautifully built scoring models sit untouched because they required too much data to populate or too much time to run. Simplicity is a feature, not a compromise.

Start with a one-page template. Define six criteria. Assign weights that sum to 100. Score each opportunity from one to five. Multiply score by weight. Sum the weighted scores. Rank the outputs. That is the whole model. You can build it in 20 minutes. The value is not in the complexity of the tool. It is in the quality of the conversation it generates.
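The steps above translate directly into a few lines of arithmetic. As a minimal sketch, with illustrative criteria, weights, and scores (none of these numbers come from a real engagement):

```python
# Minimal weighted scoring model. Weights are illustrative and sum to 100.
WEIGHTS = {
    "market_size_growth":   25,
    "competitive_intensity": 20,
    "strategic_fit":         20,
    "cost_to_win":           15,
    "time_to_revenue":       10,
    "untapped_potential":    10,
}

def weighted_score(scores: dict) -> int:
    """Multiply each 1-5 score by its criterion weight and sum.
    With weights summing to 100, the maximum possible score is 500."""
    return sum(scores[c] * w for c, w in WEIGHTS.items())

# Hypothetical scores for three candidate markets.
markets = {
    "Market A": {"market_size_growth": 4, "competitive_intensity": 2,
                 "strategic_fit": 5, "cost_to_win": 3,
                 "time_to_revenue": 4, "untapped_potential": 3},
    "Market B": {"market_size_growth": 3, "competitive_intensity": 4,
                 "strategic_fit": 3, "cost_to_win": 4,
                 "time_to_revenue": 2, "untapped_potential": 4},
    "Market C": {"market_size_growth": 5, "competitive_intensity": 3,
                 "strategic_fit": 2, "cost_to_win": 2,
                 "time_to_revenue": 2, "untapped_potential": 5},
}

# Rank highest weighted score first. Market A ranks first under these
# illustrative inputs (355 of a possible 500).
ranking = sorted(markets, key=lambda m: weighted_score(markets[m]), reverse=True)
for m in ranking:
    print(m, weighted_score(markets[m]))
```

Note that the spreadsheet version is exactly the same model: one row per criterion, one column per market, a weight column, and a SUMPRODUCT at the bottom. The code is only here to show how little machinery the tool actually requires.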

When I was growing a team and managing an increasingly complex client portfolio, the frameworks that survived were always the simple ones. The elaborate models got used once and then abandoned. The one-pagers got pulled out every quarter. Discipline comes from simplicity. If the tool is painful to use, people will find reasons not to use it.

Document the output clearly. For each market assessed, record the score, the key reasons behind the score, the assumptions made, and the next review date. Keep it in a shared location. Make it a living document, not a one-time deliverable.

Forrester’s healthcare go-to-market research on device and diagnostics market challenges is a good example of how sector-specific constraints (regulatory hurdles, procurement complexity, reimbursement pathways) need to be built into the criteria set. Generic frameworks miss industry-specific variables that can completely change a market’s attractiveness score. Adapt the template to your context.

The Conversation the Tool Creates Is the Point

I have run ranking exercises that produced outputs nobody was surprised by. The highest-scoring market was the one most people suspected was the right call. But the exercise was still worth doing, because it replaced “I think we should go here” with “here is why, scored against the criteria we agreed matter.” That shift, from opinion to structured reasoning, changes how decisions get made and how they get defended.

Early in my career I would have found that kind of process slow. I wanted to move fast, back the instinct, get into execution. What 20 years of watching businesses succeed and fail has taught me is that the ones that move fast without a clear prioritisation framework tend to spread themselves thin, enter markets they cannot win, and burn resource on opportunities that looked good but were not structurally sound.

The ranking tool is not about slowing down. It is about making sure the speed is pointed in the right direction. You can move quickly and still be disciplined about where you are going. Those two things are not in conflict.

If you want to think about how market ranking connects to the broader questions of growth strategy, resource allocation, and commercial planning, the Go-To-Market and Growth Strategy hub is a good place to continue that thinking.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a ranking tool in go-to-market strategy?
A ranking tool in go-to-market strategy is a weighted scoring framework that evaluates potential markets, segments, or opportunities against a defined set of commercial criteria. It converts subjective debate into a structured, comparable output so teams can prioritise where to invest before committing budget.
What criteria should a market ranking tool include?
The most useful criteria cover market size and growth rate, competitive intensity, strategic fit with your existing capabilities, cost to acquire customers, and time to revenue. Limit the criteria set to six to ten variables. More than that dilutes the weighting and reduces the tool’s ability to differentiate between options clearly.
How do you weight criteria in a market ranking framework?
Weights should reflect your current business priorities, not a generic template. If speed to revenue matters most right now, weight time to revenue heavily. If you are in a growth phase with patient capital, weight market size and strategic fit more. Agree on weights before scoring any specific market to prevent the tool from being reverse-engineered to support a predetermined conclusion.
How often should a market ranking tool be updated?
Quarterly is a practical starting point for most businesses. Markets change, competitors enter and exit, and your own capabilities evolve. A ranking that was accurate in Q1 may not reflect reality by Q3. Build a review date into the original output and treat the tool as a living document rather than a one-time deliverable.
What is the difference between a ranking tool and a market sizing exercise?
A market sizing exercise estimates the total addressable opportunity in a single market. A ranking tool compares multiple markets or opportunities against each other using a broader set of criteria that includes, but is not limited to, size. Ranking tools are decision-making frameworks. Market sizing is one input into that decision, not the decision itself.
