Lead Scoring Models: Why Most Get It Wrong
A lead scoring model is a system that assigns numerical values to prospects based on their attributes and behaviour, so your sales team knows who to call first. Done well, it shortens sales cycles, reduces wasted effort, and aligns marketing and sales around the same definition of a qualified lead. Done poorly, it creates a false sense of precision that sends your best reps chasing the wrong people.
Most lead scoring models fall into the second category. Not because the concept is flawed, but because the inputs are wrong, the thresholds are arbitrary, and nobody goes back to check whether the scores actually predict revenue.
Key Takeaways
- Most lead scoring models fail because they score for engagement rather than fit, rewarding activity that looks promising but rarely converts.
- Demographic and firmographic fit should anchor your model before any behavioural signals are layered on top.
- Score thresholds should be set by looking backwards at closed-won deals, not forwards from assumptions about what good looks like.
- A lead scoring model is only as useful as the feedback loop between sales and marketing that validates it over time.
- Predictive scoring can sharpen accuracy, but only once you have enough clean historical data to make the model meaningful.
In This Article
- What Is a Lead Scoring Model and Why Does It Matter?
- What Are the Two Core Components of Any Scoring Model?
- How Do You Choose Which Attributes to Score?
- How Should You Set Score Thresholds?
- What Is the Difference Between Rule-Based and Predictive Scoring?
- How Do You Build the Feedback Loop That Keeps the Model Accurate?
- What Are the Most Common Mistakes in Lead Scoring?
- How Does Lead Scoring Connect to the Broader GTM Strategy?
What Is a Lead Scoring Model and Why Does It Matter?
Lead scoring is a prioritisation mechanism. It takes the volume of leads your marketing generates and applies a structured filter so sales can work the highest-value prospects first. In theory, it is one of the most commercially sensible things a B2B marketing team can build. In practice, it is one of the most commonly misused.
I have sat across the table from sales directors who had completely lost faith in the leads coming from marketing. Not because the volume was low, but because the quality was unpredictable. They had no way to distinguish between a CFO who had read three pieces of content and was actively evaluating solutions, and a student who had downloaded a whitepaper for a university project. Both showed up in the CRM with the same lead status. That is a scoring problem.
When lead scoring works, it does two things simultaneously. It focuses sales effort on prospects most likely to convert, and it gives marketing a feedback loop to understand which acquisition channels and content types are actually generating pipeline, not just traffic. Those two outcomes are worth more than almost any campaign optimisation you could run.
If you are thinking about how lead scoring fits into a broader commercial strategy, the Go-To-Market and Growth Strategy hub covers the wider framework that makes individual tactics like this one actually stick.
What Are the Two Core Components of Any Scoring Model?
Every lead scoring model, regardless of how sophisticated it becomes, is built on two foundations: fit and intent.
Fit is about whether the prospect matches your ideal customer profile. This covers firmographic data like company size, industry, revenue, geography, and technology stack, as well as demographic data like job title, seniority, and department. A prospect with perfect fit but no intent is a cold target. A prospect with high intent but poor fit is a time sink. You need both.
Intent is about what the prospect has done. Page visits, content downloads, email opens, webinar attendance, pricing page views, demo requests. These behavioural signals suggest where someone is in their buying process and how actively they are evaluating solutions like yours.
The mistake most teams make is over-weighting intent and under-weighting fit. They build models that reward email opens and page views without first asking whether the person opening those emails is even a plausible customer. I have seen this pattern repeatedly across agencies and client-side teams. The marketing team celebrates a spike in marketing-qualified leads, the sales team calls them, and conversion rates are poor. The model was measuring activity, not potential.
Fit should always be your anchor. Behavioural signals are the multiplier. If someone does not fit your ICP, no amount of content engagement should push them over your MQL threshold.
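To make the fit-as-gate idea concrete, here is a minimal sketch of that logic in Python. Every attribute name, weight, and threshold below is an illustrative assumption, not a recommended model; your own weights should come from your closed-won data.

```python
# Sketch of fit-gated lead scoring. All attribute names, weights, and
# thresholds are illustrative assumptions, not a prescribed model.

FIT_WEIGHTS = {
    "icp_industry": 20,      # prospect's industry matches the ICP
    "icp_company_size": 15,  # employee count in the target band
    "senior_title": 15,      # director level or above
}

INTENT_WEIGHTS = {
    "demo_request": 30,
    "pricing_page_view": 15,
    "email_open": 2,
}

MIN_FIT = 25  # below this, behavioural signals cannot create an MQL

def score_lead(lead: dict) -> int:
    fit = sum(w for attr, w in FIT_WEIGHTS.items() if lead.get(attr))
    intent = sum(w for attr, w in INTENT_WEIGHTS.items() if lead.get(attr))
    # Fit is the gate: a lead below the ICP floor scores on fit alone,
    # so no volume of engagement can push it over the MQL threshold.
    if fit < MIN_FIT:
        return fit
    return fit + intent
```

The point of the structure, rather than the specific numbers, is that intent only counts once fit has cleared the bar.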
How Do You Choose Which Attributes to Score?
Start with your closed-won data. Pull the last 50 to 100 deals you closed and look for patterns. What industries were they in? What was the typical company size? What job titles signed the contracts? What content did they engage with before they became opportunities? This is the foundation of an evidence-based scoring model, and it is the step most teams skip.
When I was running an agency and we started taking our own lead generation seriously, we did exactly this exercise. We looked back at every retained client we had won in the previous two years and built a profile from the data. The patterns were clearer than I expected. A specific combination of company size, sector, and the seniority of the first contact predicted conversion far more reliably than any behavioural signal we had been tracking. We had been scoring the wrong things for months.
Once you have your fit attributes mapped, layer in behavioural signals by category. High-intent signals like requesting a demo, visiting a pricing page, or engaging with a case study should carry significantly more weight than passive signals like opening a newsletter or visiting your homepage. The weighting should reflect commercial reality, not assumptions about what looks engaged.
You also need negative scoring. Competitors researching your pricing, students downloading resources, job seekers visiting your careers page. If you can identify signals that indicate a poor fit, deducting points keeps your MQL pool clean. This is often ignored entirely, which is why so many scoring models inflate lead volumes without improving quality.
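Negative scoring slots in as a final pass over the score. The disqualifying signals and penalty values below are hypothetical examples of the kind of deductions described above:

```python
# Illustrative negative-scoring pass; signal names and penalties are assumptions.
NEGATIVE_SIGNALS = {
    "competitor_domain": -50,   # email domain belongs to a known competitor
    "student_email": -40,       # academic email address
    "careers_page_visit": -20,  # likely a job seeker
}

def apply_negative_scoring(score: int, lead: dict) -> int:
    penalty = sum(p for sig, p in NEGATIVE_SIGNALS.items() if lead.get(sig))
    # Floor at zero so deductions cannot produce meaningless negative scores.
    return max(score + penalty, 0)
```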
How Should You Set Score Thresholds?
This is where most models become arbitrary. Teams pick a round number, say 50 or 100, as the MQL threshold, without any empirical basis for that figure. The threshold should be set by looking at the score distribution of your historical closed-won deals and finding the point at which conversion rates become commercially meaningful.
If you apply your scoring model retrospectively to the last 12 months of leads and find that prospects who scored above 65 converted to opportunity at three times the rate of those who scored below 65, then 65 is a defensible threshold. If you cannot make that kind of empirically grounded argument for your threshold, you are guessing.
It is also worth building a tiered system rather than a binary MQL or not-MQL classification. A three-tier model, hot, warm, and nurture, gives sales more context and allows for different follow-up cadences depending on where a lead sits. A hot lead gets a same-day call. A warm lead goes into a structured sequence. A nurture lead stays in marketing automation until their score changes.
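The retrospective threshold check and the tiering described above can both be sketched in a few lines. The historical data and tier boundaries here are made up for illustration; in practice the inputs are your own scored leads from the last 12 months:

```python
def conversion_rate(leads, threshold):
    """Conversion-to-opportunity rate for leads at or above the threshold.

    `leads` is a list of (score, converted) pairs from historical data.
    """
    above = [converted for score, converted in leads if score >= threshold]
    return sum(above) / len(above) if above else 0.0

def tier(score, hot=80, warm=50):
    # Tier boundaries are illustrative; set them from your closed-won data.
    if score >= hot:
        return "hot"      # same-day call
    if score >= warm:
        return "warm"     # structured sequence
    return "nurture"      # stays in marketing automation

# Hypothetical historical leads: (score, converted_to_opportunity)
history = [(90, True), (70, True), (68, False), (40, False), (30, False)]
```

Running `conversion_rate` across a range of candidate thresholds shows you where conversion meaningfully jumps, which is the empirically grounded argument the section above asks for.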
Forrester has written about the challenge of building intelligent growth models that align commercial inputs with go-to-market execution. The core tension it identifies, between activity metrics and outcome metrics, sits right at the heart of the threshold problem. You can have a technically sophisticated scoring model and still be measuring the wrong thing if your threshold does not connect back to revenue.
What Is the Difference Between Rule-Based and Predictive Scoring?
Rule-based scoring is manual. You define the attributes, assign the points, set the thresholds, and the model runs on the logic you have built. It is transparent, explainable, and relatively easy to audit. When a sales rep asks why a particular lead scored highly, you can show them exactly which signals contributed to the score. That transparency matters for sales and marketing alignment.
Predictive scoring uses machine learning to identify patterns in historical data and assign scores based on statistical likelihood of conversion. It can surface non-obvious signals that a rule-based model would miss, and it can update automatically as new data comes in. The trade-off is opacity. Predictive models are harder to explain, harder to audit, and only as good as the data they are trained on.
My view is that most B2B teams should start with rule-based scoring and only move to predictive models once they have enough clean historical data to make the predictions meaningful. If you have fewer than 500 closed deals in your CRM, a predictive model is not going to give you reliable output. You are better off building a well-constructed rule-based model and maintaining it properly.
The GTM intelligence gap is real. Vidyard’s research on revenue potential for GTM teams points to significant untapped pipeline sitting in existing data that teams are not surfacing effectively. That is exactly the problem predictive scoring is designed to solve, but only when the underlying data is clean enough to learn from.
How Do You Build the Feedback Loop That Keeps the Model Accurate?
A lead scoring model without a feedback loop is a hypothesis that never gets tested. You build it, you launch it, and then you assume it is working because nobody has told you otherwise. That is not a system. That is wishful thinking.
The feedback loop requires two things: regular data review and a structured conversation between sales and marketing. On the data side, you should be reviewing your model’s predictive accuracy at least quarterly. Look at the conversion rates of leads at each score tier. Are hot leads converting to opportunity at the rate you expected? Are warm leads moving through the pipeline at a meaningful pace? If the numbers are not matching your assumptions, the model needs adjusting.
On the human side, sales reps are your best source of qualitative signal. They know which leads felt genuinely ready to buy and which ones were tyre-kickers who had scored well because they had downloaded a lot of content. Building a simple mechanism for sales to flag lead quality, even just a thumbs up or thumbs down in the CRM, gives you the data to refine your model over time.
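A thumbs up or thumbs down flag is trivial to aggregate once it exists in the CRM. As a rough sketch, assuming the flag is exported as (tier, thumbs_up) pairs, per-tier approval rates fall out directly:

```python
from collections import defaultdict

def feedback_by_tier(flags):
    """Aggregate sales thumbs-up/down flags into a per-tier approval rate.

    `flags` is a list of (tier, thumbs_up) pairs pulled from the CRM; the
    flag field itself is an assumption about how your CRM is set up.
    """
    counts = defaultdict(lambda: [0, 0])  # tier -> [ups, total]
    for t, up in flags:
        counts[t][1] += 1
        if up:
            counts[t][0] += 1
    return {t: ups / total for t, (ups, total) in counts.items()}
```

If your hot tier's approval rate drifts down quarter on quarter, that is the qualitative signal telling you the model needs adjusting before the conversion numbers catch up.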
I spent a good part of my agency career fixing the relationship between sales and marketing teams that had stopped talking to each other. In almost every case, the breakdown had started with a lead quality dispute that was never properly resolved. Marketing thought sales was being too selective. Sales thought marketing was sending over unqualified contacts and calling them leads. Both were partly right. A functioning lead scoring model, with a feedback loop, is one of the most practical tools for resolving that tension, because it forces both sides to agree on what a good lead looks like before the argument starts.
BCG’s work on aligning marketing and commercial functions makes the case that structural misalignment between teams is often more damaging than any tactical shortcoming, and lead scoring is one of the few operational tools that directly addresses that structural problem.
What Are the Most Common Mistakes in Lead Scoring?
The first is building the model without sales input. Marketing teams frequently design scoring models in isolation, based on their assumptions about what good looks like, and then wonder why sales does not trust the output. If your sales team did not help define the model, they have no reason to believe in it.
The second is ignoring score decay. A prospect who visited your pricing page six months ago and has not engaged since is not the same prospect as someone who visited it last week. Without score decay, your model accumulates historical signals that no longer reflect current intent, and your MQL pool becomes polluted with leads that looked good once but have gone cold.
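One common way to implement decay is a half-life function, so a signal's points halve after a fixed number of days. The 30-day default below is an illustrative assumption; in practice you would tune the half-life per signal, since a pricing page visit probably warrants a shorter half-life than a whitepaper download:

```python
def decayed_points(points: float, days_since: float, half_life_days: float = 30) -> float:
    """Exponentially decay a behavioural signal's points by its age.

    The 30-day half-life is an illustrative assumption, not a standard.
    """
    return points * 0.5 ** (days_since / half_life_days)
```

With this in place, the six-month-old pricing page visit contributes almost nothing, while last week's visit still counts nearly in full.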
The third is treating the model as finished. Lead scoring is not a one-time build. Your ICP evolves, your product changes, your market shifts. A model built on two-year-old data is measuring a business that no longer exists. Quarterly reviews are a minimum. If you are in a fast-moving market, monthly recalibration may be necessary.
The fourth is conflating lead scoring with account scoring. In B2B, you are rarely selling to an individual. You are selling to a buying committee within an organisation. A single contact with a high individual score is less valuable than three contacts at the same account all showing moderate engagement. Account-based scoring, which aggregates signals across all contacts at a target account, is a more commercially accurate model for complex B2B sales, and it is worth building in parallel once your basic lead scoring is functioning.
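The aggregation step is the simple part of account-based scoring. One illustrative policy, sketched below, is to sum contact scores with a per-contact cap so a single hot contact cannot dominate; averaging or counting engaged contacts are equally valid choices, and the cap value is an assumption:

```python
from collections import defaultdict

def account_scores(contacts):
    """Aggregate individual contact scores to the account level.

    `contacts` is a list of (account_id, score) pairs. Capping each
    contact's contribution means breadth of engagement across the buying
    committee outweighs one highly active individual.
    """
    totals = defaultdict(int)
    for account, score in contacts:
        totals[account] += min(score, 50)  # cap any single contact's influence
    return dict(totals)
```

Under this policy, three moderately engaged contacts at one account outscore a single high-scoring contact elsewhere, which matches the commercial logic above.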
Forrester’s analysis of go-to-market struggles in complex B2B categories highlights how the buying process in high-consideration sectors involves multiple stakeholders with different information needs, which is precisely why individual lead scoring alone is insufficient for enterprise sales.
How Does Lead Scoring Connect to the Broader GTM Strategy?
Lead scoring is not a standalone tactic. It is an operational layer that sits inside a broader go-to-market strategy and only delivers value when the surrounding infrastructure is working. If your ICP is poorly defined, your scoring model will be built on a shaky foundation. If your CRM data is dirty, the signals feeding your model will be unreliable. If your sales and marketing teams are not aligned on pipeline definitions, the model will be ignored regardless of how well it is built.
This is why I am sceptical of teams that treat lead scoring as a quick win. It can be, if the foundations are in place. But if you are building a scoring model to compensate for a poorly defined ICP or a broken handoff process between marketing and sales, you are papering over a structural problem with a tactical solution.
Get the strategy right first. Define your ICP with rigour. Align on pipeline stages and conversion definitions. Clean your CRM data. Then build the scoring model as the operational mechanism that makes those strategic decisions actionable at scale.
There is more on how to build the strategic layer that makes individual tactics like lead scoring actually work on the Go-To-Market and Growth Strategy hub, which covers everything from ICP definition to channel strategy and pipeline architecture.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
