Lead Scoring: Stop Guessing Which Leads Are Worth Your Time

Lead scoring is the practice of assigning numerical values to prospects based on their attributes and behaviours, so your sales team knows who to call first and who to leave in the nurture sequence a little longer. Done well, it reduces wasted sales time, improves conversion rates, and creates a shared language between marketing and sales. Done badly, it produces a number that nobody trusts and a process that quietly gets abandoned within six months.

Most lead scoring problems are not technical. They are organisational. The model breaks down because marketing and sales never agreed on what a good lead looks like in the first place, or because the scoring criteria were set up once and never revisited. This article covers what actually works, where most teams go wrong, and how to build a model that holds up under commercial pressure.

Key Takeaways

  • Lead scoring fails most often because of misalignment between sales and marketing, not because of poor technology choices.
  • Behavioural signals (what a lead does) are almost always more predictive than demographic signals (who they are), but you need both.
  • Your scoring model should be validated against closed-won data, not built from assumptions about what good looks like.
  • Negative scoring is as important as positive scoring. A lead who downloads one whitepaper and never returns is not the same as one who visits your pricing page three times.
  • Lead scoring is not a set-and-forget exercise. Markets shift, buyer behaviour changes, and a model built eighteen months ago may now be pointing your sales team in the wrong direction.

Before getting into the mechanics, it is worth being honest about what lead scoring can and cannot do. It is a prioritisation tool, not a prediction engine. It tells your sales team where to focus attention, not which deals will close. If you go into it expecting algorithmic certainty, you will be disappointed. If you go into it expecting a structured, commercially grounded framework for making better decisions with limited time, it will earn its place in your stack.

Why Most Lead Scoring Models Fail Before They Start

I have seen this pattern more times than I can count. A marketing team builds a lead scoring model, presents it to sales with some confidence, and within a quarter the sales team has stopped looking at the scores entirely. The model is technically still running. The CRM is still updating. But nobody is using it.

The reason is almost always the same: the model was built without sales input, so it scores for things marketing cares about rather than things that predict revenue. A lead who downloaded three content pieces and attended a webinar gets a high score. The sales team calls them and finds out they are a student, a competitor, or a researcher with no budget and no authority. That happens enough times and the score becomes noise.

The fix is not a better algorithm. It is a better conversation. Before you assign a single point value, you need to sit down with your sales team and ask two questions: what do your best customers have in common before they became customers, and what did the leads who wasted your time have in common? The answers to those questions are your scoring criteria. Everything else is implementation.

This is one of the areas covered in the broader sales enablement work we do at The Marketing Juice. Lead scoring sits at the intersection of marketing operations and sales productivity, and it only works when both sides have shaped it.

The Two Dimensions Every Scoring Model Needs

Effective lead scoring models work across two dimensions: fit and engagement. Fit tells you whether a lead matches your ideal customer profile. Engagement tells you whether they are showing genuine purchase intent. You need both, because high fit with low engagement means they are a good prospect but not ready yet, and high engagement with low fit means they are interested but probably not your buyer.

Fit scoring draws on firmographic and demographic data: company size, industry, job title, geography, technology stack, annual revenue. These are the attributes that tell you whether this person could plausibly buy from you. If you sell enterprise software with a minimum contract value of £50,000, a sole trader in a market you do not serve is not a fit regardless of how many pages of your website they have read.

Engagement scoring draws on behavioural data: pages visited, emails opened and clicked, content downloaded, webinar attendance, demo requests, pricing page visits, return visits within a short window. These signals tell you whether someone is actively evaluating a solution like yours. A lead who visits your pricing page three times in a week is telling you something a lead who downloaded a top-of-funnel guide six months ago is not.

The weighting between these two dimensions will vary by business model. In a SaaS sales funnel, engagement signals tend to carry more weight because the buying process is often self-directed and digital. In sectors like manufacturing, fit criteria often dominate because the sales cycle is longer, more relationship-driven, and less visible in your analytics. Neither approach is universally correct. The right weighting comes from your own closed-won data.
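As a rough illustration of how the two dimensions can be kept separate and then combined, here is a minimal sketch. Every criterion, point value, and weight below is a hypothetical placeholder, not a recommendation; the right numbers come from your own data.

```python
# Illustrative two-dimensional scoring: fit and engagement are kept as
# separate sub-scores, then combined with an explicit, adjustable weighting.
# All criteria and point values here are hypothetical placeholders.

FIT_CRITERIA = {
    "target_industry": 20,        # industry matches your ideal customer profile
    "company_size_in_range": 15,
    "senior_job_title": 15,
}

ENGAGEMENT_CRITERIA = {
    "pricing_page_visit": 20,
    "demo_request": 30,
    "webinar_attended": 10,
}

def score_lead(lead_signals, fit_weight=0.4, engagement_weight=0.6):
    """Return (fit, engagement, combined) scores for a set of observed signals."""
    fit = sum(pts for signal, pts in FIT_CRITERIA.items() if signal in lead_signals)
    engagement = sum(pts for signal, pts in ENGAGEMENT_CRITERIA.items() if signal in lead_signals)
    combined = fit_weight * fit + engagement_weight * engagement
    return fit, engagement, combined

# A SaaS model might weight engagement more heavily, as above; a
# manufacturing model might invert the weights in favour of fit.
print(score_lead({"target_industry", "pricing_page_visit", "demo_request"}))
```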

How to Set Your Scoring Criteria Without Guessing

The most common mistake in building a scoring model is working from assumptions. Someone in marketing decides that a whitepaper download is worth 10 points and a pricing page visit is worth 20 points, and those numbers get entered into the CRM with no empirical basis. The model looks rigorous because it has numbers in it. It is not.

The right approach is to work backwards from your closed-won data. Pull a sample of your best customers, ideally the ones with the highest lifetime value and shortest sales cycles, and look at what they did before they converted. Which pages did they visit? What content did they engage with? What was their job title? What was the size of their company? The patterns you find in that data are your positive scoring signals.

Then do the same exercise for your worst leads: the ones who consumed a lot of marketing resources and never converted, or converted but churned quickly. What did they have in common? Those patterns are your negative scoring signals, and they matter just as much. A lead who is clearly a student, a competitor doing research, or someone from a geography you do not serve should have points deducted, not just fail to accumulate them.
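A simple way to pressure-test a candidate signal against that historical data is to compare the conversion rate of leads that showed the signal with the rate of those that did not. The sketch below assumes a flat export of historical leads; the field names are hypothetical.

```python
# Sanity-check a candidate signal against historical outcomes by comparing
# the conversion rate of leads that showed it with the rate of those that
# did not. Field names here are hypothetical.

def signal_lift(leads, signal):
    """leads: list of dicts like {"signals": set(...), "converted": bool}."""
    with_signal = [l for l in leads if signal in l["signals"]]
    without = [l for l in leads if signal not in l["signals"]]
    rate = lambda group: sum(l["converted"] for l in group) / len(group) if group else 0.0
    return rate(with_signal), rate(without)

history = [
    {"signals": {"pricing_page_visit"}, "converted": True},
    {"signals": {"whitepaper_download"}, "converted": False},
    {"signals": {"pricing_page_visit", "whitepaper_download"}, "converted": True},
    {"signals": set(), "converted": False},
]

won_rate, lost_rate = signal_lift(history, "pricing_page_visit")
print(f"with signal: {won_rate:.0%}, without: {lost_rate:.0%}")
```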

When I was running agencies, we had a version of this problem in new business. We would get enquiries that looked promising on the surface: a decent-sized company, an interesting brief, a senior contact. But certain signals consistently predicted that the pitch would go nowhere. Unrealistic timelines. Procurement-led processes with no marketing champion. Briefs that had been to four other agencies already. We learned to score those signals negatively and protect our pitch resource accordingly. The same logic applies to lead scoring.

There is also a useful body of thinking on how different sectors approach qualification differently. The lead scoring criteria used in higher education, for example, look quite different from those in B2B technology, because the buyer experience, the decision-making unit, and the conversion event are all different. Sector context shapes which signals matter.

Negative Scoring: The Part Most Teams Skip

Negative scoring deserves its own section because it is consistently underused. Most teams focus on accumulating positive signals and set a threshold score above which a lead gets passed to sales. But without negative scoring, your model has no way to distinguish between a genuine prospect and someone who has racked up points through curiosity, research, or competitive intelligence.

Negative scoring criteria typically fall into two categories. The first is disqualifying attributes: a job title that suggests they are a student or a competitor, a company size that falls outside your addressable market, a geography you do not serve, a free email domain that suggests an individual rather than a business. These should carry significant negative point values, enough to push a lead below your threshold regardless of their engagement score.
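As a sketch of what that looks like in practice, the penalties below are sized so that any single disqualifier overwhelms a plausible engagement score. The attributes and values are illustrative assumptions, not benchmarks.

```python
# Hypothetical disqualifying attributes. The negative values are deliberately
# large enough to push a lead below any realistic MQL threshold, regardless
# of accumulated engagement points.

DISQUALIFIERS = {
    "free_email_domain": -50,     # suggests an individual, not a business
    "student_job_title": -100,
    "competitor_domain": -100,
    "unserved_geography": -100,
}

def apply_disqualifiers(score, lead_attributes):
    penalty = sum(pts for attr, pts in DISQUALIFIERS.items() if attr in lead_attributes)
    return score + penalty

print(apply_disqualifiers(85, {"student_job_title"}))  # a "hot" 85 collapses to -15
```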

The second category is engagement decay. A lead who was highly active three months ago and has since gone cold is not the same as a lead who is active now. Most scoring models do not account for this. They treat a pricing page visit from six months ago the same as one from last week. Time-decay weighting, where older engagement signals gradually lose their point value, produces a much more accurate picture of current intent.
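One common way to implement time decay, assuming you store a date against each signal, is an exponential half-life: a signal's points halve every N days. The 30-day half-life below is an arbitrary illustration, not a benchmark.

```python
from datetime import date

# Exponential time decay: a signal's points halve every HALF_LIFE_DAYS.
# The 30-day half-life is an arbitrary illustration.
HALF_LIFE_DAYS = 30

def decayed_points(base_points, signal_date, today=None):
    today = today or date.today()
    age_days = (today - signal_date).days
    return base_points * 0.5 ** (age_days / HALF_LIFE_DAYS)

# The same 20-point pricing page visit, at two different ages:
print(decayed_points(20, date(2024, 1, 1), today=date(2024, 1, 8)))   # ~17.0, last week
print(decayed_points(20, date(2023, 7, 1), today=date(2024, 1, 1)))   # ~0.3, six months old
```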

There are also persistent myths in sales enablement around what high engagement actually means. A lead who opens every email is not necessarily a strong prospect. They might just be someone who opens everything. Click-through behaviour, time spent on specific pages, and return visits within a short window are more predictive than open rates alone.

Defining the MQL Threshold: Where Marketing Ends and Sales Begins

The marketing qualified lead threshold is the point at which a lead’s score triggers a handoff to sales. Getting this number right is critical, and it is almost always a negotiation rather than a calculation.

Set the threshold too low and you flood sales with leads that are not ready, which erodes trust in the scoring model and creates friction between teams. Set it too high and you hold back leads that could be converting, which slows pipeline and creates a different kind of friction. The right threshold is the one that maximises the proportion of MQLs that convert to sales qualified leads, and it will shift over time as your model matures.

One approach that works well is to start with a provisional threshold based on your best estimate, then track MQL-to-SQL conversion rates for ninety days and adjust. If fewer than 20% of your MQLs are being accepted by sales, your threshold is too low. If sales are telling you they need more volume, it may be too high. The number itself matters less than the process of measuring and adjusting it.
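In code, that review loop can be as simple as the sketch below, which mirrors the 20% rule of thumb above. The numbers are illustrative.

```python
# Minimal version of the threshold review: track what fraction of MQLs
# sales accepted over the trailing period and flag whether the threshold
# looks too low. The 20% floor mirrors the rule of thumb above.

def review_threshold(mqls_sent, mqls_accepted, acceptance_floor=0.20):
    rate = mqls_accepted / mqls_sent if mqls_sent else 0.0
    if rate < acceptance_floor:
        return rate, "Acceptance below floor: threshold likely too low, raise it."
    return rate, "Acceptance healthy: check with sales whether volume is sufficient."

print(review_threshold(mqls_sent=240, mqls_accepted=36))  # 15% accepted
```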

The commercial benefits of sales enablement only materialise when marketing and sales are genuinely aligned on what constitutes a qualified lead. A scoring model that marketing built in isolation and sales tolerates is not enablement. It is theatre.

Sector-Specific Considerations That Change the Model

Lead scoring is not a universal template. The signals that predict purchase intent in one sector can be meaningless in another, and the weighting between fit and engagement shifts depending on how your buyers actually make decisions.

In B2B technology, particularly SaaS, the buying process tends to be more self-directed. Buyers research independently, compare options online, and often have a strong sense of what they want before they speak to a salesperson. Engagement signals carry significant weight because they are a reasonable proxy for where someone is in their decision process. A prospect who has visited your integration documentation and your pricing page in the same week is telling you something specific.

In sectors with longer, more relationship-driven sales cycles, the picture is different. Sales enablement in manufacturing, for example, often involves multiple stakeholders, extended evaluation periods, and buying decisions that are heavily influenced by existing relationships and supplier track records. Digital engagement signals are less reliable here because a significant portion of the buying process happens offline. Fit criteria, including company size, sector, existing supplier relationships, and procurement cycle timing, tend to be more predictive.

The practical implication is that you should not import a scoring model from a different sector or a different business type and expect it to work. Build from your own data, validate against your own closed-won deals, and be sceptical of benchmarks that were not generated in your context.

The Collateral Connection: What Scoring Tells You About Content Gaps

One underused application of lead scoring data is content strategy. When you can see which pieces of content your highest-scoring leads engage with before they convert, you have a direct signal about what is actually moving buyers through your funnel. When you can see which content your lowest-converting leads engage with, you have a signal about what is attracting the wrong audience.

This kind of analysis often reveals that the content your marketing team is most proud of, the thought leadership pieces, the trend reports, the broad educational content, is attracting a lot of engagement from people who never buy. Meanwhile, the more specific, commercially focused content, the case studies, the comparison pages, the implementation guides, is engaging a smaller audience that converts at a much higher rate.

I spent time as a judge for the Effie Awards, which are specifically about marketing effectiveness rather than creative achievement. One of the things that struck me consistently was how often the work that performed best commercially was not the work that generated the most attention or the most awards talk. Effectiveness and noise are not the same thing. The same principle applies to content: high engagement does not equal high commercial value.

Reviewing your sales enablement collateral through the lens of lead scoring data is one of the more practical ways to connect your content investment to commercial outcomes. Which assets are your best leads actually using? Which are being consumed by people who never convert? Those two questions will reshape your content priorities faster than any editorial planning session.

Maintaining and Improving Your Model Over Time

A lead scoring model is not infrastructure you build once and leave running. Markets shift, buyer behaviour changes, your product evolves, your ideal customer profile narrows or broadens. A model that was accurate eighteen months ago may now be systematically misdirecting your sales team.

The minimum viable maintenance schedule is a quarterly review of MQL-to-SQL conversion rates, broken down by the specific criteria that triggered the score. If certain criteria are consistently producing leads that sales rejects, those criteria need to be reweighted or removed. If sales is consistently finding strong prospects that the model scored low, there is a signal you are missing.
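A minimal version of that per-criterion breakdown, assuming you can export each MQL with the criterion that triggered its handoff, might look like this. Field names are hypothetical; in practice the data comes from your CRM export.

```python
from collections import defaultdict

# Break MQL-to-SQL conversion down by the criterion that triggered the
# handoff, so underperforming criteria surface directly.

def conversion_by_criterion(mqls):
    """mqls: iterable of dicts like {"trigger": "demo_request", "became_sql": bool}."""
    sent, converted = defaultdict(int), defaultdict(int)
    for mql in mqls:
        sent[mql["trigger"]] += 1
        converted[mql["trigger"]] += mql["became_sql"]
    return {trigger: converted[trigger] / sent[trigger] for trigger in sent}

sample = [
    {"trigger": "demo_request", "became_sql": True},
    {"trigger": "webinar_attended", "became_sql": False},
    {"trigger": "webinar_attended", "became_sql": False},
    {"trigger": "demo_request", "became_sql": True},
]
print(conversion_by_criterion(sample))  # the webinar criterion is producing rejects
```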

When I was turning around a loss-making agency, one of the disciplines I introduced was a monthly review of new business pipeline quality, not just volume. We had been measuring how many leads we were generating without asking whether those leads were worth generating. Once we started tracking the ratio of pitches to wins, and the average contract value of what we were winning versus what we were losing, the picture changed significantly. We stopped chasing volume and started being selective. The pipeline shrank, but the win rate and the average deal value both improved. Lead scoring is the same discipline applied to a different context.

Predictive lead scoring, where machine learning models identify patterns in your historical data and weight signals automatically, is increasingly accessible through platforms like HubSpot, Salesforce, and Marketo. These tools can surface correlations that manual analysis would miss. But they require sufficient data volume to be reliable, and they still need human judgment about which signals are meaningful versus coincidental. The model is a perspective on your data, not a replacement for commercial thinking. Tools like behavioural analytics platforms can supplement your scoring data by showing how leads actually interact with your site, not just which pages they visit.
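For teams who want to experiment outside those platforms, here is a deliberately tiny sketch of the underlying idea using an off-the-shelf logistic regression. It assumes you can export historical leads as a feature matrix; real use needs far more data, proper train/test splits, and scepticism about the learned weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: each row is one historical lead, each column a
# hypothetical binary signal, and y records whether the lead closed-won.
X = np.array([
    # [pricing_visit, demo_request, free_email_domain]
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
])
y = np.array([1, 1, 0, 1, 0, 0])  # 1 = closed-won

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1, 0, 0]])[:, 1])  # P(convert) for a new lead
print(model.coef_)  # learned weights: a data-driven take on point values
```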

Understanding how leads engage with your site at a deeper level, including where they drop off and what draws them back, is also where session-level behavioural data becomes useful. Aggregate page view counts tell you what was visited. Behavioural data tells you what was actually read, scrolled through, and engaged with. That distinction matters when you are trying to weight engagement signals accurately.

The broader landscape of sales enablement strategy, including how lead scoring fits into a wider programme of sales and marketing alignment, is something we cover in depth across The Marketing Juice sales enablement hub. Lead scoring is one component of a larger system, and it works best when the surrounding infrastructure, the content, the handoff process, the feedback loops, is also functioning well.

Common Mistakes Worth Naming Directly

A few patterns come up repeatedly and are worth naming plainly.

Treating all engagement equally. A contact who opens your emails but never clicks is not showing the same level of intent as one who visits your pricing page. Not all engagement signals carry the same weight, and your model should reflect that.

Ignoring the buying committee. In B2B, purchase decisions are rarely made by one person. If multiple contacts from the same company are all accumulating engagement scores, that is a stronger signal than one contact with a high individual score. Account-level scoring, where you aggregate signals across all contacts at a given company, is more appropriate for complex B2B sales than contact-level scoring alone; a short sketch follows this list of mistakes.

Building the model without sales buy-in. This is the most common failure mode, and it is entirely avoidable. Sales need to have shaped the criteria, agreed on the threshold, and committed to providing feedback on lead quality. Without that, the model will drift from commercial reality and eventually be ignored.

Confusing a high score with a hot lead. A score is a prioritisation signal, not a guarantee. A lead with a score of 85 still needs to be qualified in conversation. The score tells your sales team who to call first. It does not tell them what to say or what they will find when they get there.

Not closing the feedback loop. The model only improves if sales are feeding back on lead quality. That means a structured process, not just informal comments, for sales to indicate whether an MQL was accepted or rejected, and why. That data is what allows you to refine the model over time rather than letting it drift.
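To make the account-level point concrete, here is a minimal sketch of rolling contact scores up to a company. The data shapes are hypothetical, and a simple sum is only one of several reasonable aggregation choices; capping each contact's contribution is a sensible variant.

```python
from collections import defaultdict

# Account-level aggregation: roll contact scores up to the company they
# belong to, so multi-contact interest at one account surfaces.

def account_scores(contacts):
    """contacts: iterable of dicts like {"company": "Acme Ltd", "score": 40}."""
    totals = defaultdict(int)
    for contact in contacts:
        totals[contact["company"]] += contact["score"]
    return dict(totals)

contacts = [
    {"company": "Acme Ltd", "score": 35},
    {"company": "Acme Ltd", "score": 30},
    {"company": "Acme Ltd", "score": 25},
    {"company": "Solo Corp", "score": 85},
]
# Three moderately engaged contacts at Acme outweigh one hot contact elsewhere.
print(account_scores(contacts))  # {'Acme Ltd': 90, 'Solo Corp': 85}
```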

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is lead scoring and how does it work?
Lead scoring is the process of assigning numerical values to prospects based on their attributes and behaviours. Points are added for signals that suggest a good fit or strong purchase intent, such as visiting a pricing page, matching your ideal customer profile, or requesting a demo. Points may be deducted for signals that suggest a poor fit, such as a job title outside your target audience or inactivity over time. When a lead’s score reaches a defined threshold, they are typically passed to sales as a marketing qualified lead.
What is the difference between explicit and implicit lead scoring?
Explicit scoring is based on information a lead has directly provided, such as their job title, company size, or industry, typically through a form. Implicit scoring is based on observed behaviour, such as pages visited, emails clicked, or content downloaded. Most effective models use both: explicit data tells you whether a lead fits your ideal customer profile, while implicit data tells you whether they are actively showing purchase intent.
How do you decide what score threshold to use for MQL handoff?
There is no universal threshold. The right number is the one that maximises your MQL-to-SQL conversion rate, which means you need to set a provisional threshold, track how many MQLs sales accept versus reject over a defined period, and adjust accordingly. If sales are rejecting more than 70-80% of MQLs, your threshold is too low. If they are accepting nearly everything but asking for more volume, it may be too high. Treat the threshold as a variable to optimise, not a fixed number to set once.
How often should you review and update your lead scoring model?
A quarterly review is a reasonable minimum. You should look at MQL-to-SQL conversion rates broken down by the criteria driving scores, check whether the attributes of your closed-won customers still match your scoring criteria, and update point values when patterns shift. If you launch a new product, enter a new market, or change your ideal customer profile significantly, that should trigger an immediate review rather than waiting for the next scheduled one.
Can lead scoring work for small businesses with limited data?
Yes, but the approach needs to be simpler. With limited historical data, you cannot validate a complex multi-criteria model statistically. A more practical approach is to focus on a small number of high-confidence signals: job title match, company size, and one or two high-intent behaviours like a pricing page visit or a demo request. Keep the model simple, involve your sales team in defining what good looks like, and plan to refine it as you accumulate more conversion data.
