ICP Scoring for B2B Sales: Stop Chasing the Wrong Accounts
ICP scoring is a method for ranking prospective accounts against a defined ideal customer profile, assigning weighted scores to firmographic, technographic, and behavioural signals to separate high-fit prospects from low-fit noise. Done well, it tells your sales team where to spend their time. Done poorly, it creates a false sense of rigour while the pipeline fills with accounts that will never close.
Most B2B organisations have an ICP in some form. Fewer have a scoring methodology that actually changes sales behaviour. This article covers how to build one that does.
Key Takeaways
- ICP scoring only creates value when the criteria are built from closed-won data, not internal assumptions about who should buy.
- Firmographic fit is table stakes. Behavioural and technographic signals are where scoring models separate good accounts from great ones.
- Weighting matters more than the number of criteria. A model with 6 well-weighted signals outperforms one with 20 equally weighted fields.
- Scores should be dynamic, not static. An account that scored low six months ago may be a high-fit prospect today based on new signals.
- ICP scoring fails when sales and marketing build it in isolation. The model needs to reflect how deals actually close, not how marketing thinks they should.
In This Article
- What Is an ICP Scoring Methodology and Why Does It Matter?
- How Do You Define the Right ICP Criteria?
- How Should You Weight ICP Scoring Criteria?
- How Do You Build the Scoring Model in Practice?
- What Data Sources Feed an ICP Scoring Model?
- How Does ICP Scoring Connect to Sales and Marketing Execution?
- How Do You Keep ICP Scores Current?
- What Are the Most Common ICP Scoring Mistakes?
- How Does ICP Scoring Fit Into a Broader GTM Audit?
I have spent 20 years watching B2B organisations waste significant sales capacity on the wrong accounts. Not because the salespeople were bad, but because nobody had done the upstream work to define what good actually looked like. When I was running agencies and managing growth strategy across 30-plus industries, the single biggest lever I saw underused was account prioritisation. Not a new channel, not a new message, just a cleaner answer to the question: which accounts are worth pursuing?
What Is an ICP Scoring Methodology and Why Does It Matter?
An ICP scoring methodology is a structured framework that assigns numerical scores to prospect accounts based on how closely they match your ideal customer profile. Each criterion carries a weight, and the combined score produces a ranking that helps sales and marketing prioritise outreach, allocate budget, and make faster qualification decisions.
The reason it matters is straightforward. B2B sales cycles are long and expensive. Every hour a salesperson spends on a low-fit account is an hour not spent on a high-fit one. Scoring creates a systematic way to make that distinction, rather than leaving it to individual judgement or whoever happened to respond to an email.
The broader context for this sits within go-to-market strategy. If you are building or refining your GTM approach, the Go-To-Market and Growth Strategy hub covers the full landscape, from market entry decisions to channel mix and revenue operations. ICP scoring is one component of that system, and it only performs well when it connects to the rest of the GTM infrastructure.
How Do You Define the Right ICP Criteria?
The most common mistake in ICP design is starting with assumptions rather than data. Teams sit in a room and describe the customer they want, not the customer who actually buys, renews, and expands. Those two things are often different.
Start with your closed-won data from the last 18 to 24 months. Pull every account that closed, and then filter for the ones that performed well post-sale: low churn, high expansion revenue, short time-to-value, strong NPS. These are your true best customers, not just the ones who signed.
From that cohort, identify the common attributes. You are looking across four main dimensions:
- Firmographic: Company size by revenue and headcount, industry vertical, geography, business model (SaaS, services, manufacturing, etc.), growth stage.
- Technographic: The technology stack they run. What tools they use often predicts both fit and purchasing behaviour. A company running Salesforce, Marketo, and Gong has a very different profile from one running spreadsheets and a legacy CRM.
- Situational: Signals that indicate the account is in-market or approaching a buying trigger. Recent funding rounds, leadership changes, new product launches, hiring patterns, regulatory changes in their sector.
- Behavioural: How they have engaged with your brand. Website visits, content downloads, webinar attendance, email engagement, social interactions with your content.
Each of these dimensions contributes to the score, but they do not contribute equally. That is where weighting comes in.
How Should You Weight ICP Scoring Criteria?
Weighting is where most scoring models fall apart. Teams either weight everything equally, which means the model has no opinion, or they weight things based on gut feel, which means the model reflects bias rather than evidence.
The right approach is to weight criteria based on their correlation with closed-won outcomes. If your analysis shows that companies with 200 to 500 employees in financial services close at 3x the rate of companies outside that band, that criterion deserves a higher weight. If technographic fit (say, running a specific CRM) correlates strongly with deal velocity, weight it accordingly.
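One way to make this evidence-based step concrete is a small sketch: for each binary criterion, compare the win rate when the criterion is met against when it is not, and normalise the resulting lifts into weights. The field names and sample records below are hypothetical illustrations, not a prescribed schema; a real model would use a much larger closed-won dataset and richer features.

```python
# Illustrative sketch: derive criterion weights from how strongly each
# binary criterion correlates with closed-won outcomes.
# Field names and records are hypothetical examples.

historical = [
    {"firmo_fit": 1, "techno_fit": 1, "won": 1},
    {"firmo_fit": 1, "techno_fit": 0, "won": 1},
    {"firmo_fit": 0, "techno_fit": 1, "won": 0},
    {"firmo_fit": 1, "techno_fit": 1, "won": 1},
    {"firmo_fit": 0, "techno_fit": 0, "won": 0},
]

def win_rate(accounts, criterion, value):
    """Share of accounts with criterion == value that closed-won."""
    subset = [a for a in accounts if a[criterion] == value]
    return sum(a["won"] for a in subset) / len(subset) if subset else 0.0

criteria = ["firmo_fit", "techno_fit"]

# Lift: how much higher the win rate is when the criterion is met.
lifts = {c: win_rate(historical, c, 1) - win_rate(historical, c, 0)
         for c in criteria}

# Normalise positive lifts into weights that sum to 1.
total = sum(max(l, 0.0) for l in lifts.values())
weights = {c: max(lifts[c], 0.0) / total for c in criteria}
```

With this toy data, firmographic fit shows a much larger win-rate lift than technographic fit, so it earns a correspondingly larger weight. The point is not the specific numbers but the mechanism: weights come out of the data rather than a meeting-room debate.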
A practical weighting structure for most B2B scoring models looks something like this:
- Firmographic fit: 30 to 40 percent of total score. This is your baseline filter. If the firmographics are wrong, the rest rarely matters.
- Technographic fit: 15 to 25 percent. Increasingly important as the number of integrations and tech dependencies in B2B software grows.
- Situational signals: 20 to 30 percent. Buying triggers are high-value indicators. An account that just raised a Series B or hired a new CRO is a fundamentally different prospect from the same account six months earlier.
- Behavioural engagement: 15 to 25 percent. Intent signals from your own channels are strong predictors of near-term pipeline.
These ranges are starting points, not rules. Your model should reflect your actual sales data, not a generic framework. The point is that the weights should be deliberate and evidence-based.
There is a parallel here to something I have thought about a lot in performance marketing. Early in my career, I overvalued lower-funnel signals. I attributed too much credit to the last touchpoint and not enough to the upstream work that created the opportunity. ICP scoring can fall into the same trap, over-weighting behavioural engagement because it is measurable and recent, while under-weighting the firmographic and situational factors that actually predicted the deal. The model needs to reflect the full picture, not just what is easiest to see.
How Do You Build the Scoring Model in Practice?
Once you have your criteria and weights, building the model is a matter of turning qualitative judgements into quantifiable inputs. Here is a workable approach:
Step 1: Define your tiers. Most models use three tiers: Tier 1 (high fit, prioritise for outbound and ABM), Tier 2 (moderate fit, include in nurture and targeted campaigns), Tier 3 (low fit, deprioritise or exclude). Some organisations add a Tier 0 for accounts that are explicitly disqualified.
Step 2: Score each criterion on a consistent scale. A 0 to 10 scale works well. For binary criteria (they have the technology or they do not), use 0 or 10. For range-based criteria (company size), use a graduated scale: 0 for no fit, 5 for partial fit, 10 for strong fit.
Step 3: Apply weights and calculate composite scores. Multiply each criterion score by its weight and sum the results. An account that scores 7 on firmographic fit (weighted at 35%) contributes 2.45 to the total. Do this across all criteria and you get a composite score out of 10.
Step 4: Map scores to tiers. Decide your thresholds. Scores of 7.5 and above might be Tier 1. Scores between 5 and 7.4 are Tier 2. Below 5 is Tier 3. These thresholds should be calibrated against your historical data, not set arbitrarily.
Step 5: Validate against your closed-won cohort. Run your best historical customers through the model. If they are not scoring as Tier 1, your model is wrong. Adjust until the model correctly classifies at least 80% of your known best accounts.
Before you build the model, it is worth doing a structured audit of your existing account data. A checklist for analysing company website signals for sales and marketing strategy can surface useful firmographic and intent data that feeds directly into scoring criteria, particularly for accounts where you have limited first-party data.
What Data Sources Feed an ICP Scoring Model?
A scoring model is only as good as the data behind it. The inputs come from several sources, and the quality of each determines how reliable the scores are.
CRM data is your foundation. Firmographic fields, deal history, conversion rates by segment, average contract value by account type. If your CRM data is incomplete or inconsistent, fix that before you build a scoring model. Garbage in, garbage out is not a cliché here; it is a practical constraint.
Intent data providers (Bombora, G2, TechTarget and similar) give you third-party signals about which accounts are actively researching topics relevant to your category. These feed the situational and behavioural scoring dimensions.
Technographic data providers (BuiltWith, Clearbit, HG Insights) tell you what technology an account runs. For B2B tech companies especially, this is often one of the strongest predictors of fit. The corporate and business unit marketing framework for B2B tech companies covers how to use this kind of data to align messaging and prioritisation across product lines.
First-party engagement data from your marketing automation platform and website analytics. Page visits, content consumption patterns, email engagement rates. This is the most reliable intent signal you have because it reflects direct interaction with your brand.
Enrichment tools (Apollo, ZoomInfo, LinkedIn Sales Navigator) fill gaps in firmographic and contact data. These are table stakes for any serious outbound programme.
The challenge is integrating these sources into a single account record without creating data hygiene problems. Most organisations manage this through their CRM or a dedicated ABM platform. The Vidyard Future Revenue Report highlights how GTM teams consistently underestimate the pipeline value sitting in accounts they already have data on, precisely because that data is fragmented across tools rather than unified into a scoring framework.
How Does ICP Scoring Connect to Sales and Marketing Execution?
A scoring model that lives in a spreadsheet and never changes sales behaviour is a waste of time. The point of scoring is to change what happens next.
For Tier 1 accounts, the playbook is account-based marketing: personalised outreach, executive engagement, custom content, dedicated sales sequences. These accounts warrant disproportionate investment because the model says the probability of return is high.
For Tier 2 accounts, the playbook is programmatic nurture: targeted digital advertising, content marketing, intent-triggered outreach when signals spike. You are maintaining presence without burning sales capacity.
For Tier 3 accounts, the question is whether they belong in your pipeline at all. Some organisations use pay-per-appointment lead generation models to test lower-fit segments without committing full sales resources, which is a reasonable approach when you are genuinely uncertain whether a segment has potential.
The scoring model should also inform channel decisions. High-fit accounts in a specific vertical might be better reached through endemic advertising in the publications and platforms they actually read, rather than generic programmatic display. If your Tier 1 accounts are all in a particular industry, endemic placements in that industry’s media ecosystem will outperform broad-reach digital every time.
I saw this play out clearly when I was working across financial services clients. The accounts that converted fastest were the ones where we had matched channel strategy to ICP tier, not just to budget. B2B financial services marketing has its own compliance and audience constraints, but the underlying principle holds across verticals: the channel mix should follow the account tier, not the other way around.
How Do You Keep ICP Scores Current?
Static scoring models decay. The market changes, your product evolves, new segments emerge, old ones saturate. A model built on 2022 closed-won data may not reflect what good looks like in 2025.
Build a review cadence into the process. Quarterly is the minimum for most organisations. Monthly is appropriate if you are in a fast-moving market or if your sales cycle is short enough to generate sufficient new data.
At each review, ask three questions. First, are the accounts scoring highest actually closing at higher rates? If not, the model is wrong. Second, are there patterns in recent wins that are not captured by the current criteria? If so, those are new signals worth adding. Third, have any previously low-scoring segments started to perform? Markets shift, and your model needs to reflect that.
Dynamic scoring also means updating individual account scores as new signals come in. An account that just hired a VP of Revenue Operations, raised a Series C, or started consuming your pricing content should see its score increase in near real-time, not at the next quarterly review. This requires automation, typically through your CRM or ABM platform, but the investment is worth it.
The growing complexity of GTM execution is partly a data problem. More signals, more channels, more tools, but often less clarity about which accounts to prioritise. A well-maintained scoring model cuts through that complexity by giving teams a single, evidence-based answer to the prioritisation question.
What Are the Most Common ICP Scoring Mistakes?
I have seen enough of these models built and broken to have a clear view of where they go wrong.
Building the model without sales input. Marketing builds a beautiful scoring framework, sales ignores it because it does not reflect how deals actually close. The model needs to be built with sales, not handed to them.
Over-engineering the criteria. Twenty scoring dimensions sounds thorough. In practice, it creates noise and makes the model hard to explain or act on. Six to ten well-chosen, well-weighted criteria outperform twenty mediocre ones.
Treating the model as permanent. I mentioned this above, but it is worth repeating. A scoring model is a hypothesis about what good looks like. It needs to be tested, validated, and updated as evidence accumulates.
Confusing engagement with fit. An account that downloads every piece of content you produce but is the wrong size, wrong industry, and wrong budget is not a good prospect. It is a time sink. Behavioural signals matter, but they should not override fundamental fit criteria.
Skipping the validation step. If you build a model and never check whether it correctly classifies your known best customers, you have no idea whether it works. Validation is not optional.
When I was at iProspect, growing the team from 20 to over 100 people and managing significant ad spend across a wide client base, the organisations that had the cleanest growth trajectories were the ones that had done this upstream work. They knew exactly who they were selling to, why those accounts were good fits, and how to prioritise. The ones that struggled were often chasing volume, filling the pipeline with accounts that looked like prospects but were never going to close.
How Does ICP Scoring Fit Into a Broader GTM Audit?
ICP scoring does not exist in isolation. It is one component of a broader GTM system, and its effectiveness depends on the quality of the surrounding infrastructure.
If you are conducting a serious GTM audit, digital marketing due diligence is the right starting point. It surfaces the gaps in your current approach, including whether your account prioritisation is based on evidence or assumption, and gives you a structured basis for rebuilding. ICP scoring is often one of the first things to fix because it affects everything downstream: channel allocation, content strategy, sales sequencing, budget distribution.
The BCG perspective on B2B go-to-market strategy and long-tail pricing makes a related point about how different account segments require fundamentally different commercial models, not just different messaging. ICP scoring is the mechanism that makes those distinctions actionable at scale.
Forrester’s analysis of go-to-market struggles in complex B2B categories points to account prioritisation as a consistent failure point. The organisations that struggle most are typically those that have not made the distinction between who they can sell to and who they should sell to. ICP scoring is the mechanism that makes that distinction operational.
There is also the question of how scoring connects to growth strategy more broadly. If you are working through questions of market expansion, segment prioritisation, or channel investment, the articles in the Go-To-Market and Growth Strategy section cover the full range of decisions that sit around and above ICP scoring. Scoring is a tactical tool in service of a strategic question: where should we focus to grow?
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
