B2B Lead Scoring Criteria That Sales Teams Use
B2B lead scoring criteria are the attributes and behaviours you assign point values to so your sales team knows which leads to call first and which to leave in nurture. Done well, a scoring model cuts wasted outreach, shortens sales cycles, and gives marketing a language that revenue teams respect.
The challenge is that most B2B organisations build their first scoring model on assumptions rather than data, weight the wrong signals, and then wonder why sales ignores the scores entirely. This article covers the criteria that hold up in practice, how to weight them, and where models most often go wrong.
Key Takeaways
- Effective B2B lead scoring combines firmographic fit (who the lead is) with behavioural signals (what they have done), weighted separately and reviewed quarterly.
- Job title and seniority are the most consistently reliable demographic criteria, but they mean nothing without engagement signals to confirm intent.
- Negative scoring, subtracting points for disqualifying attributes or inactivity, is as important as positive scoring and is almost always skipped by first-time builders.
- Sales adoption is the real test of a scoring model. If reps are not using the scores, the model is wrong, not the reps.
- Lead scoring is not a one-time setup. The criteria that predicted conversion 18 months ago may not predict it today, especially in sectors where buying behaviour has shifted.
In This Article
- What Are the Two Fundamental Categories of Lead Scoring Criteria?
- Which Firmographic Criteria Carry the Most Weight in B2B Scoring?
- Which Demographic Criteria Should You Score at the Contact Level?
- What Behavioural Criteria Signal Genuine Purchase Intent?
- What Is Negative Scoring and Why Does It Matter?
- How Should You Weight Lead Scoring Criteria?
- How Does Lead Scoring Differ Across B2B Sectors?
- What Are the Most Common Lead Scoring Mistakes in B2B?
- How Do You Build a Lead Scoring Model From Scratch?
- What Does Good Lead Scoring Look Like in Practice?
Lead scoring sits inside a broader sales and marketing alignment problem that most B2B companies underinvest in. If you want the wider context on how scoring fits into the operational picture, the Sales Enablement and Alignment hub covers the full landscape, from content strategy through to pipeline measurement.
What Are the Two Fundamental Categories of Lead Scoring Criteria?
Every B2B lead scoring model, regardless of industry or deal size, is built on two categories: demographic and firmographic criteria (who the lead is) and behavioural criteria (what the lead has done). Conflating these two categories is one of the most common structural mistakes I see, and it produces scores that are either too static or too reactive.
Demographic and firmographic criteria answer the question of fit. Does this person work at a company that could realistically buy from you? Behavioural criteria answer the question of intent. Is this person actively looking, comparing, or evaluating right now? A lead can score highly on fit and show zero intent. A lead can show strong intent but be completely outside your addressable market. Neither score alone tells you much. Together, they tell you a great deal.
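To make that separation concrete, here is a minimal sketch in Python. Every attribute name and point value in it is hypothetical, chosen purely to show the structure: fit and intent are computed independently and only interpreted together.

```python
# Minimal sketch: fit and intent scored separately, interpreted together.
# All attribute names and point values are illustrative, not prescriptive.

def fit_score(lead: dict) -> int:
    """Firmographic and demographic fit: who the lead is."""
    score = 0
    if lead.get("employee_count", 0) >= 100:
        score += 20
    if lead.get("industry") in {"financial services", "technology"}:
        score += 15
    if lead.get("seniority") in {"VP", "C-suite"}:
        score += 15
    return score

def intent_score(lead: dict) -> int:
    """Behavioural intent: what the lead has done recently."""
    score = 0
    score += 30 * lead.get("demo_requests", 0)
    score += 10 * lead.get("pricing_page_visits", 0)
    score += 2 * lead.get("emails_opened", 0)
    return score

lead = {"employee_count": 250, "industry": "technology",
        "seniority": "VP", "pricing_page_visits": 2, "emails_opened": 4}
print(fit_score(lead), intent_score(lead))  # 50 28: strong fit, moderate intent
```

High fit with low intent routes to nurture; high intent with low fit routes to a disqualification review. Collapsing the two into a single undifferentiated number hides exactly that distinction.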
When I was running agency operations and we were reviewing our own pipeline health, the clearest signal of a poor scoring model was always the same: sales reps had mentally created their own informal scoring system because the official one did not reflect how deals actually closed. That is a failure of model design, not a failure of process.
Which Firmographic Criteria Carry the Most Weight in B2B Scoring?
Firmographic criteria define whether a prospect belongs to your addressable market. These are the attributes that do not change quickly and that your sales team can verify independently. The most reliable firmographic signals in B2B lead scoring are:
Company size by employee count or revenue
This is the single most common firmographic criterion and, for most B2B products, one of the most predictive. A company with 500 employees buying an enterprise software platform behaves very differently from a 12-person agency buying the same tool. Assign your highest scores to the size bands where your existing customers are concentrated, not where you wish they were.
Industry vertical
Some products sell well across many verticals. Most do not. If you have closed 70% of your deals in financial services, professional services, and technology, those three verticals should carry meaningfully higher scores than, say, hospitality or retail. This sounds obvious, but I have seen plenty of models that score all industries equally because the marketing team did not want to be seen ignoring any sector.
Sector-specific scoring is not just about where you have sold historically. It is also about where buying behaviour aligns with your sales motion. Manufacturing sales enablement, for example, involves longer evaluation cycles, more stakeholders, and different content triggers than a SaaS-native environment. The scoring criteria need to reflect that.
Geography
If your sales team only covers certain territories, a lead outside those territories should score lower regardless of how well they fit on every other dimension. This is a practical constraint that scoring models often ignore because it feels like leaving money on the table. It is not. It is protecting your sales team’s time.
Technology stack
For software and services businesses, the tools a prospect already uses can be highly predictive of fit. A company running Salesforce is a different prospect from one running a legacy CRM. If your product integrates with specific platforms, or if your service requires certain infrastructure to be in place, technology fit is worth scoring explicitly. Data providers and intent data platforms can surface this information at scale.
Which Demographic Criteria Should You Score at the Contact Level?
Firmographic data tells you about the company. Demographic data tells you about the individual. In B2B, both matter because you need the right company and the right person within it.
Job title and function
This is consistently the most reliable demographic criterion. A VP of Marketing engaging with your content is worth more than an intern doing the same, assuming you sell to senior buyers. Map your actual closed-won contacts by title and function, then weight accordingly. Be specific: “Marketing Director” and “Digital Marketing Manager” may look similar but sit at very different points in the buying decision for enterprise purchases.
Seniority level
Separate from title, seniority level (C-suite, VP, Director, Manager, Individual Contributor) gives you a quick proxy for decision-making authority. For higher-value deals, leads below a given seniority level are unlikely to be the economic buyer, even if they are active in the evaluation process. Score them for nurture, not for immediate sales outreach.
Department
Depending on your product, the relevant buying department can vary. A procurement platform might be evaluated by Finance, Operations, and IT simultaneously. A marketing analytics tool might be evaluated by Marketing, Data, and sometimes the CFO’s office. Score the departments that have historically been involved in your deals, and do not assume the department that first engages is the department that will sign.
What Behavioural Criteria Signal Genuine Purchase Intent?
Behavioural scoring is where most of the real predictive power lives, and also where most models go wrong. The mistake is treating all activity as equally positive. Downloading a top-of-funnel ebook is not the same signal as requesting a product demo. Visiting your pricing page twice in a week is not the same as opening a newsletter. Weight behaviour by its proximity to a purchase decision.
High-intent behavioural signals worth scoring heavily include:
- Requesting a demo or free trial
- Visiting the pricing page (especially multiple times)
- Contacting sales directly via form or phone
- Engaging with case studies or ROI calculators
- Attending a live webinar or product walkthrough
- Responding to a direct sales email
Mid-intent signals worth moderate scores include:
- Downloading a comparison guide or buyer’s guide
- Visiting solution or product pages multiple times
- Watching a product video past the halfway point
- Opening multiple emails in a sequence
- Returning to the site more than once within a short window
Low-intent signals worth small scores include:
- Downloading a top-of-funnel content asset
- Opening a single email
- Following on LinkedIn
- Reading a blog post
The distinction matters because if you weight all engagement equally, you end up routing cold contacts to sales as though they are ready to buy. Sales teams learn quickly that the scores are unreliable, and they stop using them. I have seen this pattern play out in multiple organisations. The scoring model becomes shelfware within six months of launch.
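One way to express the tiering is a simple event-to-weight mapping, sketched below. The event names and numbers are hypothetical placeholders; the point is that the structure forces you to decide, explicitly, how much more a pricing page visit is worth than an email open.

```python
# Illustrative tiered weights; event names and values are hypothetical
# placeholders showing the structure, not recommended numbers.
BEHAVIOUR_WEIGHTS = {
    # High intent
    "demo_request": 30,
    "contact_sales": 30,
    "pricing_page_visit": 15,
    "case_study_view": 12,
    "live_webinar_attended": 12,
    # Mid intent
    "buyers_guide_download": 8,
    "product_page_visit": 5,
    "video_watched_past_halfway": 5,
    # Low intent
    "ebook_download": 2,
    "email_open": 1,
    "blog_read": 1,
}

def behavioural_score(events: list[str]) -> int:
    """Sum the weights for all recorded events; unknown events score zero."""
    return sum(BEHAVIOUR_WEIGHTS.get(e, 0) for e in events)

print(behavioural_score(["pricing_page_visit", "pricing_page_visit", "email_open"]))  # 31
```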
Understanding how behavioural signals differ from contextual signals is useful background here. Behavioural data reflects what someone has done. Contextual data reflects the environment in which they encountered your content. Both can inform scoring, but they work differently and should not be conflated.
What Is Negative Scoring and Why Does It Matter?
Negative scoring subtracts points from a lead’s total score when disqualifying attributes are present or when engagement drops off. It is the part of lead scoring that most organisations skip entirely, and that omission creates a specific problem: scores only ever go up, which means leads accumulate points over time regardless of whether they are still relevant.
Common negative scoring criteria in B2B include:
- Job title indicating no buying authority (student, intern, junior analyst)
- Company size outside your serviceable market
- Competitor domain email address
- Geography outside your coverage area
- No email engagement in 90 days (score decay)
- Unsubscribed from email but not yet disqualified
- Industry vertical with near-zero historical conversion
Score decay deserves particular attention. A lead that engaged heavily six months ago and has shown no activity since is not the same as a lead that engaged last week. Without time-based decay built into your model, your highest-scored leads will increasingly be people who were once interested rather than people who are currently interested. That is a pipeline quality problem dressed up as a scoring problem.
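One simple way to implement time-based decay is a half-life function, sketched below. The half-life value is an illustrative assumption, not a standard; the article's 90-day inactivity criterion could equally be implemented as a flat deduction.

```python
from datetime import date

def decayed_score(raw_score: float, last_activity: date,
                  today: date, half_life_days: int = 45) -> float:
    """Halve the behavioural score for every `half_life_days` of inactivity.
    The 45-day half-life is an illustrative choice, not a benchmark."""
    idle_days = (today - last_activity).days
    if idle_days <= 0:
        return raw_score
    return raw_score * 0.5 ** (idle_days / half_life_days)

# A lead who scored 60 but went quiet three months ago now looks like it should:
print(round(decayed_score(60, date(2025, 1, 1), date(2025, 4, 1))))  # 15
```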
The sales enablement myths article on this site covers a related issue: the assumption that more leads in the pipeline is always better. It is not. A smaller pipeline of well-qualified leads almost always outperforms a large pipeline of poorly qualified ones, and negative scoring is one of the mechanisms that keeps pipeline quality honest.
How Should You Weight Lead Scoring Criteria?
There is no universal weighting table that works across all B2B businesses. The right weights depend on your sales motion, average deal size, sales cycle length, and the historical data you have on what actually predicts conversion. That said, there are some structural principles that hold up consistently.
First, high-intent behaviours should carry more weight than any single demographic criterion. A CMO who has visited your pricing page three times this week is a better lead than a CMO who downloaded a whitepaper six months ago. Intent beats fit when both cannot be satisfied simultaneously.
Second, your threshold score for sales handoff should be set based on your closed-won data, not on what feels right. Pull your last 50 closed deals, look at what scores those contacts had at the point of first meaningful sales engagement, and use that distribution to set your threshold. If you do not have that data yet, start tracking it now and set a provisional threshold you are willing to revise in 90 days.
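As a sketch of that second principle, the snippet below derives a provisional threshold from a hypothetical set of closed-won scores. The lower-quartile choice is a judgement call I am assuming for illustration; it simply means roughly three quarters of your historical winners would have cleared the bar.

```python
# Hypothetical sketch: derive a provisional MQL threshold from the scores
# your closed-won contacts had at first meaningful sales engagement.
from statistics import quantiles

closed_won_scores = [62, 55, 71, 48, 66, 80, 59, 52, 74, 68]  # illustrative data

# Lower quartile: ~75% of historical winners would have cleared this bar.
q1, _, _ = quantiles(closed_won_scores, n=4)
print(f"Provisional sales-handoff threshold: {q1:.0f}")
```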
Third, keep the model simple enough that a sales rep can understand why a lead scored the way it did. If your scoring model requires a data scientist to explain, it will not be trusted. Transparency drives adoption. I have seen organisations build sophisticated predictive scoring models that sales teams flatly ignored because the scores felt like a black box. A simpler model that sales understands and trusts will outperform a complex one they do not.
For context on how scoring fits into the broader pipeline architecture, the SaaS sales funnel breakdown is worth reading. The scoring thresholds you set should map directly to funnel stages, and those stages need to be defined before you can set sensible thresholds.
How Does Lead Scoring Differ Across B2B Sectors?
The criteria that matter most vary significantly by sector, and applying a generic model across very different sales environments produces unreliable scores. A few examples illustrate how much the context shapes the model.
Professional services and consulting
In professional services, the relationship dimension matters more than in product sales. Behavioural signals like attending an in-person event, requesting a conversation, or engaging with thought leadership content carry more weight than in a transactional SaaS environment. Firmographic fit is critical because scope and budget are tightly correlated with company size and sector.
Enterprise software
Enterprise software deals typically involve multiple stakeholders across IT, Finance, and the relevant business unit. Scoring at the account level (aggregating individual contact scores across a single company) is often more useful than individual contact scoring alone. A company where three separate contacts have engaged with your content in the same month is a very different signal from one contact engaging three times.
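A minimal sketch of that aggregation is below. Capping each contact's contribution and rewarding breadth of engagement is one common approach I am assuming here; the domains, names, and numbers are all illustrative.

```python
# Sketch of account-level aggregation, assuming contact-level scores exist.
from collections import defaultdict

contact_scores = [
    ("acme.com", "jane", 45),
    ("acme.com", "raj", 30),
    ("acme.com", "li", 25),
    ("globex.com", "sam", 95),
]

accounts = defaultdict(list)
for domain, _, score in contact_scores:
    accounts[domain].append(score)

for domain, scores in accounts.items():
    # Cap individual contributions, then add a bonus per extra engaged
    # contact, so three moderately engaged people outrank one hyperactive one.
    base = sum(min(s, 40) for s in scores)
    breadth_bonus = 10 * (len(scores) - 1)
    print(domain, base + breadth_bonus)  # acme.com 115, globex.com 40
```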
Higher education
Lead scoring in higher education has its own specific dynamics, particularly around enrolment cycles, programme type, and the distinction between prospective students and institutional buyers. The lead scoring criteria for higher education article covers this in detail, including how to handle the long consideration cycles typical of postgraduate and professional development programmes.
Manufacturing and industrial
In manufacturing, technical fit often matters more than behavioural engagement in the early stages. A prospect who downloads a technical specification sheet is showing more meaningful intent than one who reads a general industry blog post. Scoring models in this sector need to weight technical engagement signals appropriately and account for the fact that the buying process is often initiated by an engineer or technical manager rather than a commercial buyer.
What Are the Most Common Lead Scoring Mistakes in B2B?
Having seen scoring models built and broken across a range of organisations, I find the failure patterns fairly consistent.
The first is building the model in marketing without sales input. Marketing tends to weight content engagement too heavily because that is what marketing can see and influence. Sales tends to know that a specific combination of company size, job title, and recent product page visit is what actually predicts a productive first conversation. Build the model together or it will not be used.
The second is treating the initial model as final. Every scoring model is a hypothesis. It should be reviewed against actual conversion data at regular intervals, at minimum quarterly in the first year. The criteria that predicted conversion when you had 200 customers may not predict it when you have 2,000, because your customer profile changes as you scale.
The third is scoring on data you do not actually have. If your CRM is missing job title data for 60% of your contacts, job title cannot be a primary scoring criterion. Build the model around the data you can reliably collect, and build data collection processes to fill the gaps over time. Scoring on incomplete data produces scores that are systematically biased toward the contacts where you happen to have better data.
The fourth is ignoring the handoff process. A lead reaching the sales threshold is only useful if there is a clear, agreed process for what happens next. How quickly does sales follow up? What is the first outreach message? What context does the sales rep receive about why the lead scored as it did? The sales enablement collateral that accompanies a scored lead matters as much as the score itself.
Early in my agency career, I watched a business invest significant effort in a lead scoring implementation that sales reps ignored within three months. The model was technically sound. The problem was that nobody had asked sales what they actually needed to prioritise their day. The scores answered a question that sales had not asked. That experience shaped how I think about any system that is supposed to change behaviour: the people whose behaviour needs to change have to be involved in designing it.
How Do You Build a Lead Scoring Model From Scratch?
If you are starting without historical conversion data, the process is straightforward even if the calibration takes time.
Start with your ideal customer profile. Define the firmographic and demographic attributes of the companies and contacts most likely to buy. If you have existing customers, analyse them. If you do not, use your best commercial judgement and commit to revising it once you have data.
Assign point values to each attribute based on its relative importance to fit and intent. A common starting structure is to use a 100-point scale, with firmographic fit accounting for roughly 40 points and behavioural engagement accounting for 60 points. The exact split depends on your business.
Set a threshold for marketing qualified lead (MQL) status. This is the score at which a lead is considered ready for sales review. Set it conservatively at first. It is better to pass fewer, better-qualified leads to sales and adjust upward than to flood sales with marginal leads and damage their trust in the system.
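Pulling those two steps together, a minimal sketch of the assembled model might look like this. The 40/60 caps mirror the starting structure above, and the threshold value is a hypothetical provisional number of the kind you would revise against closed-won data.

```python
def total_score(fit: int, intent: int) -> int:
    """Combine on a 100-point scale: fit capped at 40, intent capped at 60.
    The 40/60 split is a starting assumption, not a universal rule."""
    return min(fit, 40) + min(intent, 60)

MQL_THRESHOLD = 65  # provisional; revise once closed-won data accumulates

lead_total = total_score(fit=35, intent=38)
print(lead_total, lead_total >= MQL_THRESHOLD)  # 73 True: ready for sales review
```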
Build in negative scoring from day one. Do not add it later as an afterthought.
Run the model for 60 to 90 days, then review the MQLs that converted to opportunities and those that did not. Adjust weights based on what you find. Repeat.
The BCG work on competitive advantage is a useful reminder that sustainable differentiation comes from operational discipline, not from individual tactics. A well-maintained lead scoring model is an operational discipline. It compounds over time as the data improves and the model gets closer to predicting reality rather than approximating it.
One thing I would add from experience: document your assumptions when you build the model. Write down why you weighted job title the way you did, why you chose a particular threshold, why certain industries score lower. When you come back to review it six months later, you need to know what you were thinking, not just what you decided. Models that are not documented get rebuilt from scratch every time someone new touches them.
What Does Good Lead Scoring Look Like in Practice?
A well-functioning lead scoring model produces a specific operational outcome: sales reps spend more of their time on conversations that convert and less time on contacts that were never going to buy. That sounds simple. Achieving it consistently is not.
The signal that a model is working is not the sophistication of the scoring logic. It is whether sales reps look at the score before they make a call. If they do, the model has earned their trust. If they do not, something in the model is wrong, and the answer is to find out what rather than to mandate that they use it.
The broader benefits of sales enablement only materialise when the tools and processes are actually used. A scoring model that exists in the CRM but not in the rep’s daily workflow is a cost, not an asset.
There is also a useful parallel in how good content strategy works. Telling the truth clearly is the foundation of content that earns trust over time. The same principle applies to lead scoring: a model that honestly reflects what predicts conversion, rather than what marketing wants to believe predicts conversion, will earn trust from sales over time. A model built on wishful thinking will not.
For anyone building or rebuilding a scoring model, the practical starting point is always the same: pull your closed-won data, look at the contacts and companies that converted, and build backward from reality. Everything else is calibration.
If you want to go deeper on how lead scoring connects to the full sales and marketing alignment picture, the Sales Enablement and Alignment hub covers the strategic and operational context that scoring models sit inside. Scoring in isolation rarely moves the needle. Scoring as part of a coherent enablement approach consistently does.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
