Marketing Qualified Leads: Why Your MQL Definition Is Costing You Pipeline
A marketing qualified lead is a prospect whose behavioural and demographic signals suggest enough genuine purchase intent that marketing considers them worth passing to sales. The definition sounds simple. In practice, it is one of the most contested, most misused, and most commercially consequential decisions a go-to-market team makes.
Get the MQL threshold right and you create a productive handoff that sales trusts. Get it wrong and you either flood the pipeline with noise or starve it of volume. Both outcomes damage revenue. Both are more common than most marketing leaders would admit.
Key Takeaways
- An MQL definition that sales doesn’t trust is functionally worthless, regardless of how sophisticated your scoring model is.
- Most MQL problems are not data problems. They are alignment problems between marketing and sales on what “ready” actually means commercially.
- Lead scoring based on engagement alone overweights top-of-funnel activity and systematically undervalues fit signals like company size, role, and buying stage.
- The MQL threshold should be calibrated against pipeline conversion rates, not against the volume of leads marketing wants to claim as qualified.
- Revisiting your MQL definition every quarter is not admin overhead. It is one of the highest-ROI conversations a revenue team can have.
In This Article
- What Does Marketing Qualified Lead Actually Mean?
- Why the MQL Threshold Is a Commercial Decision, Not a Marketing One
- The Two Components of a Useful MQL Definition
- How Lead Scoring Models Break Down in Practice
- The Alignment Problem Nobody Wants to Have
- MQLs and the Demand Generation Trap
- What a Well-Calibrated MQL Process Actually Looks Like
- The Effie Lens: Effectiveness Over Activity
What Does Marketing Qualified Lead Actually Mean?
The textbook definition is a lead that marketing has assessed as more likely to become a customer than other leads, based on a set of agreed criteria. But that definition hides the real questions: agreed by whom, based on what, and calibrated against which commercial outcome?
In most organisations, the MQL definition was set at some point during a CRM implementation, baked into a scoring model, and then never seriously revisited. The scoring rules reflect whoever was in the room when the system was configured, not necessarily what the business has learned about its buyers since. That is a problem.
I have sat in enough pipeline reviews to know that the word “MQL” means different things to different people in the same room. To marketing, it often means “someone who engaged with our content.” To sales, it often means “someone who is ready to have a commercial conversation.” Those are not the same thing, and the gap between them is where pipeline trust breaks down.
If you are thinking about MQLs as part of a broader go-to-market architecture, the Go-To-Market and Growth Strategy hub covers how lead qualification fits into the wider commercial system, from audience targeting through to revenue attribution.
Why the MQL Threshold Is a Commercial Decision, Not a Marketing One
This is the part most marketing teams get wrong. The MQL threshold feels like a marketing decision because marketing owns the scoring model and the automation. But it is fundamentally a commercial decision because it directly determines what sales spends its time on.
Earlier in my career I spent a lot of time optimising for lower-funnel performance metrics. Volume of leads, cost per lead, MQL conversion rates. The numbers looked good on slides. But when I started interrogating what actually happened downstream, I found that a significant portion of what we were calling “qualified” was people who were going to find us anyway. We were capturing existing intent and calling it pipeline generation. The MQL definition was calibrated to make marketing look productive, not to make sales more effective.
That experience changed how I think about lead qualification entirely. The question is not “how many MQLs did we generate?” The question is “what percentage of our MQLs became pipeline, and what percentage of that pipeline closed?” If you cannot answer both of those questions with reasonable confidence, your MQL definition is not doing its job.
There is a useful analogy here. Think about a clothes shop. Someone who picks something up off the rail and tries it on is far more likely to buy than someone who walks past the window. Your MQL definition should be trying to identify the people who have, in some meaningful sense, picked something up. Not just the people who walked past the window and glanced in.
The Two Components of a Useful MQL Definition
A well-constructed MQL definition has two distinct components: fit and engagement. Most organisations over-index on one and underweight the other.
Fit signals are the demographic and firmographic characteristics that indicate whether a prospect could plausibly become a customer. Company size, industry, job title, geography, technology stack, budget authority. These signals tell you whether this person is in your addressable market at all. A CMO at a 500-person B2B SaaS company might be a perfect fit. A student downloading a whitepaper for a university assignment is not. Without fit signals, you end up with a scoring model that rewards curiosity rather than commercial potential.
Engagement signals are the behavioural indicators that suggest a prospect is actively evaluating a solution. Pages visited, content downloaded, webinars attended, pricing page views, repeat visits within a short window. These signals tell you where someone is in their buying process. High engagement from a poor-fit prospect is interesting noise. High engagement from a strong-fit prospect is a signal worth acting on.
The most common failure mode I see is organisations that have sophisticated engagement tracking but almost no fit scoring. They know that someone downloaded three pieces of content and visited the pricing page twice. They have no idea whether that person works for a company that could ever buy from them. The result is a pipeline full of curious people with no budget or authority, and a sales team that stops trusting the MQL designation entirely.
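To make the two-component idea concrete, here is a minimal sketch of a scoring function that gates MQL status on fit and engagement separately. The specific criteria, point values, and thresholds are illustrative assumptions, not a recommended configuration; the point is the structure, which prevents a curious but poor-fit prospect from reaching MQL status on engagement volume alone.

```python
# Illustrative fit criteria and point values (assumptions, not a benchmark).
FIT_CRITERIA = {
    "company_size_in_range": 25,   # e.g. within your target employee band
    "target_industry": 20,
    "buyer_job_title": 25,
    "target_geography": 10,
}

# Illustrative engagement weights (assumptions, not a benchmark).
ENGAGEMENT_WEIGHTS = {
    "pricing_page_view": 15,
    "webinar_attended": 10,
    "content_download": 5,
    "repeat_visit_within_7_days": 10,
}

def lead_score(fit_signals, engagement_events):
    """Score fit and engagement separately, then require BOTH.

    fit_signals: set of fit criteria the prospect meets.
    engagement_events: list of behaviour names (repeats allowed).
    """
    fit = sum(FIT_CRITERIA.get(s, 0) for s in fit_signals)
    engagement = sum(ENGAGEMENT_WEIGHTS.get(e, 0) for e in engagement_events)
    # Gating on fit is the key design choice: high engagement from a
    # poor-fit prospect never crosses the MQL bar on its own.
    is_mql = fit >= 50 and engagement >= 25
    return fit, engagement, is_mql
```

In this sketch, a student who downloads ten whitepapers scores zero on fit and never qualifies, while a well-fitting buyer needs only a modest amount of engagement to cross the line.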
How Lead Scoring Models Break Down in Practice
Lead scoring is the mechanism most teams use to operationalise their MQL definition. Assign points to behaviours and attributes, set a threshold, and automatically flag anyone who crosses it as an MQL. In theory, it is elegant. In practice, it accumulates problems over time.
The first problem is point inflation. As you add new content assets, new channels, and new touchpoints, the number of ways a prospect can accumulate points increases. Without regular recalibration, the scoring model gradually lowers the effective bar for MQL status, because there are simply more ways to score points. Volume goes up, quality goes down, and sales stops following up.
The second problem is that most scoring models treat all engagement as positive. But a prospect who visits your pricing page once and never returns is not the same as a prospect who visits it three times in a week. A prospect who unsubscribes from your email list after downloading a report is sending a signal too. Negative scoring, or decay scoring that reduces a lead’s score over time if there is no continued engagement, is underused and undervalued.
The third problem is that scoring models are often built around the content and channels that marketing has, not the content and channels that buyers actually use. I have seen organisations where the highest-scoring behaviour was attending a live webinar, not because webinar attendees were the best prospects, but because webinars were easy to track and the team had invested heavily in them. The scoring model reflected the marketing team’s activity, not the buyer’s experience.
Forrester has written extensively about the gap between how organisations think about pipeline maturity and how it actually functions at scale. Their research on agile scaling is worth reading if you are trying to build a qualification framework that can grow with your organisation without collapsing under its own weight.
The Alignment Problem Nobody Wants to Have
The most durable MQL problems are not technical. They are organisational. Marketing and sales have different incentives, different time horizons, and different definitions of success. Marketing is typically measured on lead volume and MQL conversion. Sales is measured on pipeline and closed revenue. Those incentives do not naturally align, and the MQL definition sits exactly at the fault line between them.
When I was running an agency and we were growing the team from around 20 people to significantly larger, one of the things that became clear quickly was that the internal definition of a “good client lead” was completely different depending on who you asked. The new business team wanted volume. The delivery team wanted clients who were a genuine fit for what we could actually execute. The finance team wanted clients with realistic budgets. Nobody had sat in a room and agreed on what a qualified opportunity actually looked like. We were all optimising for different things and calling them the same thing.
The fix was not a better CRM. It was a structured conversation between the people who generate leads, the people who close them, and the people who deliver for them. That conversation produced a shared definition of what a qualified opportunity looked like, and it made every downstream process more efficient because everyone was working from the same starting point.
The same principle applies to MQL definitions in any organisation. The scoring model is the output of the alignment conversation, not a substitute for it. If you have not had that conversation recently, or ever, the model is probably reflecting assumptions that no longer hold.
Understanding how MQL definitions fit into the broader growth architecture is something the Go-To-Market and Growth Strategy section covers in more depth, particularly around how qualification criteria interact with channel strategy and audience segmentation.
MQLs and the Demand Generation Trap
There is a version of MQL management that is essentially a sophisticated form of demand capture dressed up as demand generation. You build a content engine, you score engagement, you pass the highest scorers to sales. But the people who engage most with your content are often the people who were already looking for a solution like yours. You are intercepting existing demand, not creating new demand.
This is not a reason to abandon MQL frameworks. It is a reason to be honest about what they are measuring. If your MQL pipeline is primarily composed of people who found you through branded search, direct traffic, or referral, you are not generating new demand. You are qualifying existing intent. That has value. But it is not a growth strategy on its own.
Genuine growth requires reaching audiences who do not yet know they need you, or who know they have a problem but have not yet considered your category as a solution. Those people will not show up in your MQL pipeline until you have done the upstream work to put your brand in front of them. Semrush’s analysis of market penetration strategies is a useful reference point for thinking about the relationship between audience reach and qualified pipeline volume over time.
The MQL metric, used in isolation, systematically undervalues brand and upper-funnel activity because those investments do not show up as scored leads in your CRM. They show up as a higher conversion rate from MQL to pipeline six months later, which is much harder to attribute and much easier to ignore when you are under pressure to show short-term results.
What a Well-Calibrated MQL Process Actually Looks Like
A well-calibrated MQL process has four characteristics that are worth being specific about.
First, it has a shared definition that sales has signed off on. Not a definition that marketing wrote and presented to sales. A definition that was built jointly, with sales input on what “ready to talk” actually means in practice. This sounds obvious. It is rarely done properly.
Second, it tracks downstream conversion, not just MQL volume. The right question is not how many MQLs you generated. It is what percentage became SQLs, what percentage of SQLs became pipeline, and what percentage of pipeline closed. If you can answer those questions by segment, by channel, and by content type, you have a qualification framework that can actually improve over time.
Third, it is reviewed regularly. Quarterly is a reasonable cadence. The review should include both marketing and sales, and it should look at the data honestly. If MQL-to-SQL conversion is declining, that is a signal that the threshold is set too low. If sales is complaining that they are not getting enough leads, that is a signal that the threshold may be set too high, or that the top-of-funnel volume is insufficient.
Fourth, it accounts for different buyer types. A single MQL threshold applied uniformly across all segments, all company sizes, and all buying stages will always be a compromise. Larger organisations with longer sales cycles and multiple stakeholders behave differently from SMBs. A prospect who has attended a product demo is at a different stage than a prospect who downloaded a thought leadership report. The best MQL frameworks have tiered or segmented thresholds that reflect these differences rather than flattening them.
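Tiered thresholds are straightforward to express. In this sketch, each segment gets its own fit and engagement bar; the segment names and values are illustrative assumptions that would need calibrating against your own conversion data.

```python
# Assumption: illustrative per-segment thresholds.
THRESHOLDS = {
    "enterprise": {"fit": 60, "engagement": 20},  # fit dominates; long cycles
    "mid_market": {"fit": 50, "engagement": 30},
    "smb":        {"fit": 35, "engagement": 40},  # engagement is the stronger signal
}

DEFAULT_THRESHOLD = {"fit": 50, "engagement": 30}

def is_mql(segment, fit_score, engagement_score):
    """Apply the segment's own bar, falling back to a default."""
    t = THRESHOLDS.get(segment, DEFAULT_THRESHOLD)
    return fit_score >= t["fit"] and engagement_score >= t["engagement"]
```

The design choice worth noting is the direction of the skew: enterprise prospects engage less visibly but fit matters enormously, while SMB buyers self-serve more of the journey, so their engagement bar is higher and their fit bar lower.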
Growth loops, as Hotjar’s work on product-led growth illustrates, depend on qualifying the right users at the right moment. The same principle applies to MQL frameworks in any demand generation context: timing and fit matter as much as engagement volume.
The Effie Lens: Effectiveness Over Activity
Judging the Effie Awards gave me a particular perspective on marketing effectiveness that I find myself returning to often. The entries that win are not the ones with the most impressive activity metrics. They are the ones that can demonstrate a clear line between marketing investment and commercial outcome. Volume of impressions, number of leads, MQL count, none of these are effectiveness metrics on their own. They are activity metrics. Effectiveness is what happens downstream.
The MQL is an activity metric dressed up as an effectiveness metric. It feels like progress because it is a step closer to revenue than a raw lead. But it is only meaningful if the definition is calibrated against actual commercial outcomes, and if the downstream conversion rates are tracked and used to improve the model over time.
The organisations that use MQLs well treat them as a hypothesis: “we believe this person is ready for a sales conversation, based on these signals.” They then test that hypothesis by tracking what happens next, and they update the model when the hypothesis is wrong. The organisations that use MQLs badly treat them as an output: “we generated 400 MQLs this month, therefore marketing is working.” Those are very different orientations, and they produce very different commercial results.
If you want to understand how MQL frameworks connect to broader growth strategy decisions, including how they interact with channel mix, audience segmentation, and revenue attribution, the Go-To-Market and Growth Strategy hub covers the full picture.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
