Core Web Vitals: The SEO Signal Most Marketers Misread

Core Web Vitals are a set of metrics that Google uses as ranking factors, measuring how fast a page loads, how quickly it responds to interaction, and how stable the layout is as content appears. They sit within a broader set of page experience signals, but they carry specific, measurable thresholds that determine whether Google considers your page to be delivering a good user experience or not.

The reason most marketing teams get this wrong is not technical ignorance. It is that they treat Core Web Vitals as a compliance exercise rather than a commercial one. Pass the thresholds, tick the box, move on. That framing costs you more than you realise.

Key Takeaways

  • Core Web Vitals measure Largest Contentful Paint, Interaction to Next Paint, and Cumulative Layout Shift. Each has a specific threshold that separates “good” from “needs improvement” from “poor”.
  • Passing the thresholds is not the goal. The goal is page experience that converts. Vitals are a proxy for that, not a destination in themselves.
  • Most CWV problems originate in decisions made at the design and development stage, not in the SEO audit. Fixing them after launch is always more expensive than building them in correctly.
  • Field data and lab data tell different stories. Google ranks based on field data from the Chrome User Experience Report. Your Lighthouse score is a useful diagnostic, not the final word.
  • The biggest commercial impact from improving Core Web Vitals usually comes from mobile performance on mid-range devices, not desktop scores on a fast connection.

If you want to understand where Core Web Vitals fit within a broader SEO approach, the complete SEO strategy hub covers the full picture, from technical foundations through to competitive positioning and content architecture.

What Are the Three Core Web Vitals and What Do They Actually Measure?

Google has three metrics that make up Core Web Vitals. They have changed over time and will likely continue to evolve, so it is worth understanding what each one is actually trying to capture rather than just memorising the current thresholds.

Largest Contentful Paint measures loading performance. Specifically, it tracks how long it takes for the largest visible element on the page to render within the viewport. That element is usually a hero image, a large block of text, or a video thumbnail. The “good” threshold is 2.5 seconds or under. Between 2.5 and 4 seconds is “needs improvement”. Above 4 seconds is “poor”. What LCP is really asking is: how quickly does the page feel loaded to the person looking at it? Not how quickly does it technically finish loading in the background, but how quickly does something meaningful appear.

Interaction to Next Paint replaced First Input Delay in March 2024. Where FID only measured the delay before the browser could begin processing the first user interaction, INP measures the full response time of all interactions throughout the page visit, then reports the worst one. The “good” threshold is 200 milliseconds or under. This is a more demanding metric than FID because it captures what happens when a user clicks a button mid-scroll or interacts with a dynamic element, not just the first tap on page load. Pages that feel sluggish when you try to use them will struggle here even if they load quickly.

Cumulative Layout Shift measures visual stability. It captures how much the page layout moves unexpectedly as it loads, weighted by the size of the elements shifting and the distance they move. A score of 0.1 or under is “good”. The classic example is a page where you go to click a button and an ad loads above it, pushing the button down the screen. You click the ad instead. That is a CLS problem, and it is one of the more frustrating user experiences on the web. It is also one of the easier ones to fix if you catch it during development.
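The thresholds above fit in a small lookup table. Here is a minimal sketch in Python; the "good" boundaries match the figures quoted above, and the "poor" boundaries for INP (500 ms) and CLS (0.25) are Google's documented values, which this section does not restate:

```python
# Classify a Core Web Vitals reading against Google's published thresholds.
# "Good"/"poor" boundaries: LCP 2.5s/4s, INP 200ms/500ms, CLS 0.1/0.25.

THRESHOLDS = {
    # metric: (good_at_or_under, poor_above)
    "lcp_s": (2.5, 4.0),   # Largest Contentful Paint, seconds
    "inp_ms": (200, 500),  # Interaction to Next Paint, milliseconds
    "cls": (0.1, 0.25),    # Cumulative Layout Shift, unitless
}

def classify(metric: str, value: float) -> str:
    """Return 'good', 'needs improvement', or 'poor' for one reading."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(classify("lcp_s", 1.8))   # good
print(classify("inp_ms", 350))  # needs improvement
print(classify("cls", 0.3))     # poor
```

The same bands apply whether the value comes from a lab run or from field data; what changes is which value you feed in.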

How Do Core Web Vitals Factor Into Google’s Ranking Algorithm?

Google has been explicit that Core Web Vitals are a ranking signal, but less explicit about how much weight they carry relative to other signals. The honest answer, based on what Google has said publicly and what practitioners have observed, is that they are a tiebreaker more than a primary driver.

If two pages are broadly equivalent on content quality, relevance, and authority, the one with better page experience signals will tend to rank higher. But a page with outstanding content and poor Core Web Vitals will still outrank a technically perfect page with thin, irrelevant content. This is the part that gets misrepresented. You will see tools and agencies selling Core Web Vitals improvements as if they are a direct route to ranking gains. Sometimes they are. Often, they are not, because the limiting factor was never page experience.

I have seen this pattern repeatedly when reviewing SEO programmes for clients. A team spends three months and significant budget getting Lighthouse scores from 60 to 95, and rankings barely move. When we dig into the actual limiting factors, it is almost always content depth, link authority, or intent mismatch. The technical work was not wasted, but it was sequenced wrong. You fix the things that are actually holding you back first.

What Google uses for ranking is field data from the Chrome User Experience Report, known as CrUX. This is real-world data from Chrome users visiting your pages, aggregated over a 28-day rolling window. It is not your Lighthouse score. It is not your PageSpeed Insights lab score. Those are useful diagnostics, but Google is ranking you based on what real users on real devices actually experience. That distinction matters because a page can score well in lab conditions and perform poorly in the field, particularly on mobile devices with slower connections.

Why Field Data and Lab Data Tell Different Stories

Lab data is what you get when a tool like Lighthouse simulates a page load under controlled conditions. It is consistent, reproducible, and useful for identifying specific technical problems. But it does not reflect the variance in how real users experience your pages.

Field data from CrUX captures the distribution of experiences across actual users. It includes people on slow 3G connections, people on mid-range Android phones from three years ago, people in geographies with higher latency, and people with browser extensions that affect page rendering. That distribution is what Google is measuring. A page that loads in 1.8 seconds on a fast desktop connection might have an LCP of 4.5 seconds for a significant portion of its real-world visitors.

This is why the “we scored 95 on PageSpeed Insights” conversation can be misleading. The score is useful context. It is not the same as what Google is using to evaluate your page. Semrush has a useful breakdown of how to measure Core Web Vitals across both lab and field data sources, which is worth reading if you are trying to reconcile the gap between your tool scores and your actual CrUX data.

When I was running performance programmes at scale, one of the discipline problems I saw consistently was teams optimising for the metric they could see rather than the metric that mattered. Lab scores are visible and immediate. CrUX data has a 28-day lag and requires access to Search Console or the CrUX API to interrogate properly. So teams optimised for lab scores, reported them upward, and missed the commercial picture entirely. The metric you can see is not always the metric that counts.
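If you want to interrogate the field data directly rather than waiting on Search Console, the CrUX API will return it for any origin with sufficient traffic. A hedged sketch: the endpoint and response shape follow the public CrUX API as I understand it, the `api_key` argument is your own Google Cloud API key, and the sample response fragment is illustrative, not real measurements. Google assesses each metric at the 75th percentile of the field distribution, which is what `p75` extracts.

```python
import json
from urllib import request

# Query the Chrome UX Report API for one origin's field data.
ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def query_crux(origin: str, api_key: str, form_factor: str = "PHONE") -> dict:
    body = json.dumps({"origin": origin, "formFactor": form_factor}).encode()
    req = request.Request(f"{ENDPOINT}?key={api_key}", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def p75(record: dict, metric: str) -> float:
    """75th-percentile value Google uses to assess a metric in the field."""
    return float(record["record"]["metrics"][metric]["percentiles"]["p75"])

# Illustrative response fragment in the CrUX shape (not real data):
sample = {"record": {"metrics": {
    "largest_contentful_paint": {"percentiles": {"p75": 3100}},  # ms
    "interaction_to_next_paint": {"percentiles": {"p75": 240}},  # ms
    "cumulative_layout_shift": {"percentiles": {"p75": "0.14"}},
}}}
print(p75(sample, "largest_contentful_paint") / 1000)  # 3.1 seconds
```

A p75 LCP of 3.1 seconds would sit in "needs improvement" even if your Lighthouse run reports 1.8 seconds, which is exactly the lab-versus-field gap described above.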

Where Most Core Web Vitals Problems Actually Come From

The majority of Core Web Vitals problems are not mysterious. They have predictable causes that trace back to decisions made during site design and development, often without any SEO input at all.

LCP problems are most commonly caused by unoptimised images, render-blocking resources, slow server response times, or the LCP element being loaded too late in the resource waterfall. A hero image that is not preloaded, not compressed, and served without a CDN will produce a poor LCP score almost every time. The fix is usually straightforward once you have identified which element is the LCP candidate and why it is loading slowly.
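One of those causes is quick to check mechanically: whether the page hints the browser to fetch its hero image early. A narrow sketch using Python's standard `html.parser`; it only detects a `<link rel="preload" as="image">` hint in the markup and does not identify which element is the actual LCP candidate.

```python
from html.parser import HTMLParser

# Scan markup for image preload hints, one common LCP fix.
class PreloadCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.preloaded_images = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "preload" and a.get("as") == "image":
            self.preloaded_images.append(a.get("href"))

head = '<link rel="preload" as="image" href="/hero.avif">'
check = PreloadCheck()
check.feed(head)
print(check.preloaded_images)  # ['/hero.avif']
```

An empty result is not proof of a problem, but on a page whose LCP element is a hero image, it is a strong hint that the image enters the waterfall later than it needs to.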

INP problems tend to come from heavy JavaScript execution on the main thread. When the browser is busy parsing and executing JavaScript, it cannot respond to user interactions. Long tasks, excessive third-party scripts, and poorly structured event handlers are the usual culprits. This is where the marketing stack can become a liability. Every tag you fire through a tag manager, every chat widget, every personalisation script, every analytics pixel adds to the main thread workload. I have audited pages with 40-plus third-party scripts firing on load. The INP scores were predictably bad, and the irony was that half those scripts were for tools the marketing team had stopped using but never removed.

CLS problems are almost always caused by elements without explicit dimensions. Images without width and height attributes, ads that load into unresized containers, fonts that cause layout shifts when they swap in, and dynamically injected content that pushes existing content down the page. These are fixable at the code level, but they require developers to think about layout stability during build, not as an afterthought.
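The first of those causes, images without explicit dimensions, can be flagged with a few lines of code. A minimal sketch, again using the standard `html.parser`: it only inspects inline `width`/`height` attributes, so a real audit would also need to account for dimensions set via CSS or `aspect-ratio`.

```python
from html.parser import HTMLParser

# Flag <img> tags lacking explicit width/height attributes, a common CLS cause.
class ImgDimensionAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if "width" not in a or "height" not in a:
                self.missing.append(a.get("src", "<no src>"))

html = """
<img src="/hero.jpg" width="1200" height="600">
<img src="/logo.png">
"""
audit = ImgDimensionAudit()
audit.feed(html)
print(audit.missing)  # ['/logo.png']
```

Running something like this against your page templates during development catches the problem before it ever reaches field data.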

HubSpot’s overview of Core Web Vitals covers the fundamentals clearly if you need a reference point for briefing developers or explaining the concepts to stakeholders who are new to the topic.

The Mobile Performance Gap That Most Teams Ignore

Desktop Core Web Vitals scores are almost always better than mobile scores. This is expected and logical. Desktop devices have faster processors, better connections, and more memory. But the commercial implication is often ignored: if the majority of your organic traffic arrives on mobile, your desktop scores are largely irrelevant to your ranking performance.

Google uses mobile-first indexing, which means it is evaluating your mobile experience as the primary signal for ranking. Your mobile CrUX data is what matters. And mobile CrUX data is typically measured across a wide range of devices, including mid-range and budget smartphones that are far less capable than the devices your development team uses to test.

The practical implication is that you need to test your pages on representative devices, not on the latest iPhone or a high-end Android flagship. Tools like Chrome DevTools allow you to throttle CPU and network speed to simulate mid-range device performance. What you find when you do this is often sobering. Pages that feel fast on a developer’s MacBook can feel genuinely painful on a three-year-old mid-range phone on a standard mobile connection.

I have sat in enough post-launch reviews to know that “we tested it and it was fine” almost always means “we tested it on our own devices on the office WiFi”. That is not a test. That is a controlled demonstration of best-case conditions. The real test is what your median user experiences, on their median device, on their median connection. That gap between best-case and median is where most page experience problems live.

How to Diagnose Core Web Vitals Problems Systematically

There is a logical sequence to diagnosing Core Web Vitals issues that avoids the common mistake of jumping to solutions before understanding the problem.

Start with Google Search Console. The Core Web Vitals report in Search Console shows you your field data, segmented by mobile and desktop, and groups URLs into “good”, “needs improvement”, and “poor” categories. It also identifies which metric is failing. This gives you a prioritised list of pages to investigate, based on real user data. If you have a large site, focus on your highest-traffic organic landing pages first. That is where the ranking impact and the commercial impact will be greatest.

Then use PageSpeed Insights to diagnose the specific causes on individual URLs. PageSpeed Insights pulls both field data from CrUX (if available for that URL) and lab data from a Lighthouse run. The lab data gives you the diagnostic detail: which specific resources are causing the problem, what the waterfall looks like, and what Google recommends fixing. The field data tells you whether the problem is actually affecting real users at that URL.

For INP specifically, the Chrome DevTools Performance panel is your best diagnostic tool. Record a page interaction, look at the flame chart, and identify long tasks on the main thread. The source of those long tasks, whether first-party JavaScript or third-party scripts, tells you where to focus the fix.

For sites built on managed platforms or website builders, the range of fixes available to you is narrower because you have less control over the underlying code. Semrush’s analysis of website builders and SEO covers how different platforms handle performance, which is useful context if you are evaluating platform options or trying to understand the constraints you are working within.

The Third-Party Script Problem That Marketing Teams Create

Marketing teams are, more often than not, the source of the worst Core Web Vitals problems on their own sites. Not through malice or incompetence, but through the accumulated weight of tools added over time without anyone auditing the total cost.

Every marketing tool that requires a JavaScript snippet adds to page weight and main thread execution time. Analytics platforms, heatmap tools, A/B testing scripts, live chat widgets, retargeting pixels, personalisation engines, form tools, cookie consent managers. Each one feels justified in isolation. Collectively, they can make a page genuinely slow for real users.

The problem compounds over time. Tools get added when someone champions them. They rarely get removed when the use case disappears, the contract lapses, or the team changes. I have seen enterprise sites with active pixels for platforms the company had not used in two years, still firing on every page load, still contributing to poor INP scores, still degrading the experience for every organic visitor.

The discipline required here is a regular audit of everything firing on your pages. Export your tag manager container. List every tag. For each one, identify who owns it, what it does, and whether it is still in use. You will find redundancy. Remove it. The performance gains from removing unused scripts are often larger than the gains from complex technical optimisations, and they are far cheaper to implement.
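The export-and-list step can be partly automated. A sketch that loads a Google Tag Manager container export and lists every tag by type so owners can be assigned; the JSON shape (`containerVersion` → `tag` → `name`/`type`) follows GTM's container export format as I understand it, so verify the field names against your own export before relying on it.

```python
from collections import Counter

# List every tag in a GTM container export so each can be assigned an owner.
def list_tags(export: dict) -> list[tuple[str, str]]:
    tags = export.get("containerVersion", {}).get("tag", [])
    return [(t.get("name", "?"), t.get("type", "?")) for t in tags]

# Illustrative export fragment (real exports come from GTM's Admin > Export):
export = {"containerVersion": {"tag": [
    {"name": "GA4 pageview", "type": "gaawe"},
    {"name": "Old chat widget", "type": "html"},
    {"name": "Retired pixel", "type": "html"},
]}}

for name, tag_type in list_tags(export):
    print(f"{tag_type:6} {name}")

# Custom HTML tags are where abandoned scripts tend to hide.
print(Counter(t for _, t in list_tags(export)))
```

The output is not the audit itself; it is the worksheet the audit runs from. Every row needs an owner, a purpose, and a decision.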

This connects to a broader point about how marketing teams relate to technical performance. The instinct is to add, rarely to remove. But page performance is a finite resource. Every script you add is a cost paid by every visitor on every page load. That cost is not abstract. It shows up in your field data, in your CrUX scores, and in Google’s assessment of whether your page deserves to rank.

Core Web Vitals and Conversion Rate: The Commercial Case

The ranking argument for Core Web Vitals is real but qualified. The conversion argument is more straightforward and more consistently valuable.

Pages that load slowly, feel unresponsive to interaction, or shift layout unexpectedly convert worse than pages that do not. This is not a controversial claim. It reflects basic human psychology: if a digital experience feels broken or slow, people leave. They do not wait. They do not give you the benefit of the doubt. They go back to the search results and click the next result.

The commercial case for fixing Core Web Vitals is therefore not purely about ranking. It is about what happens after the click. A page that ranks well but converts poorly is generating traffic without generating revenue. Improving the page experience improves both the likelihood of ranking and the likelihood of converting the traffic you already have.

This is the framing I use when making the case for Core Web Vitals investment to commercial stakeholders. Not “Google wants us to do this” but “our current page experience is costing us conversions from traffic we are already paying for, whether through SEO investment or paid media.” That framing lands differently. It connects technical work to business outcomes, which is where the conversation needs to be.

HubSpot’s piece on web design and SEO makes a similar point about the relationship between design decisions and organic performance, which is worth reading alongside your Core Web Vitals work.

How to Build Core Web Vitals Into Your Development Process

The most expensive way to address Core Web Vitals is retroactively, after a site has been built and launched. The cheapest way is to build performance requirements into the development process from the start.

This means setting performance budgets before development begins. A performance budget is a set of constraints on metrics like page weight, JavaScript bundle size, number of third-party requests, and target Core Web Vitals scores. Developers work within those constraints during build, rather than optimising against them after launch.

It also means including performance testing in your QA process. Before any page or template goes live, run it through PageSpeed Insights and check the scores against your budgets. If a new feature or design change degrades performance below your thresholds, it does not ship until the performance issue is resolved. This sounds rigid, but it is far less painful than discovering performance problems in your CrUX data three months after launch.
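That QA gate can be a short script that compares measured values against the budget and fails the build on any violation. A minimal sketch; the metric names and budget values here are examples for illustration, not a standard, and the measured values would come from your own Lighthouse or PSI run.

```python
# A minimal performance-budget gate: report any measured value over budget.
# Budget values are illustrative; set your own per template.
BUDGET = {
    "lcp_s": 2.5,
    "inp_ms": 200,
    "cls": 0.1,
    "js_kb": 300,
    "third_party_requests": 15,
}

def check_budget(measured: dict) -> list[str]:
    """Return human-readable budget violations (empty list = pass)."""
    return [
        f"{metric}: {measured[metric]} over budget {limit}"
        for metric, limit in BUDGET.items()
        if metric in measured and measured[metric] > limit
    ]

violations = check_budget({"lcp_s": 3.2, "inp_ms": 180, "cls": 0.1, "js_kb": 420})
for v in violations:
    print(v)
# In CI, exit non-zero when violations is non-empty so the build fails.
```

The point of encoding the budget is that "it does not ship" stops being a judgment call in a meeting and becomes a failed check in the pipeline.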

For teams working with agencies or development partners, performance requirements need to be in the brief and in the contract. “The site should be fast” is not a requirement. “All key landing pages must achieve an LCP of 2.5 seconds or under and a CLS of 0.1 or under in field data within 90 days of launch” is a requirement. Vague briefs produce vague outcomes. Specific requirements produce accountability.

I learned this the hard way on a site relaunch early in my agency career. We had done everything right on content and architecture, but the development brief had no performance specifications. The agency built a beautiful site that loaded in 6 seconds on mobile. Fixing it post-launch cost more than building it correctly would have. The lesson stayed with me: technical requirements belong in the brief, not in the post-launch review.

If you are working through the broader mechanics of how technical decisions affect organic performance, the complete SEO strategy hub covers how technical SEO, content, and authority-building work together as a system rather than as separate workstreams.

What Good Core Web Vitals Work Actually Looks Like in Practice

Good Core Web Vitals work is not a one-time audit. It is an ongoing discipline, because pages change. New features get added. Marketing campaigns add tracking scripts. Design updates change image sizes and layout structures. What was “good” in January can be “needs improvement” by June if no one is monitoring.

The teams that manage Core Web Vitals well treat them like any other operational metric. They have a regular reporting cadence, usually monthly, where someone reviews the Search Console Core Web Vitals report and flags any pages that have moved from “good” to “needs improvement” or worse. They have a process for investigating regressions and tracing them to the change that caused them. And they have clear ownership, someone who is responsible for performance, rather than it being everyone’s responsibility and therefore no one’s.

The monitoring tools available make this easier than it used to be. Search Console provides the field data. PageSpeed Insights provides the diagnostic detail. Moz’s reflection on SEO testing is useful context here, because the same discipline that applies to SEO experiments applies to performance work: you need a baseline, you need to isolate variables, and you need to measure outcomes against that baseline rather than assuming correlation is causation.

The teams that struggle are the ones treating Core Web Vitals as a project with a start and end date. They do an audit, fix the issues, declare victory, and move on. Six months later, performance has degraded again because no one was watching. The work is never finished. That is not a counsel of despair. It is just an accurate description of how technical performance works on a living website.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

Do Core Web Vitals directly affect Google rankings?
Yes, but they function as a tiebreaker rather than a primary ranking driver. Google has confirmed that Core Web Vitals are a ranking signal within its page experience system. In practice, strong content relevance and authority will outweigh poor page experience scores, but where two pages are broadly equivalent, the one with better Core Web Vitals will tend to rank higher. The ranking impact is most visible in competitive niches where multiple pages have similar content quality and link profiles.
What is the difference between Lighthouse scores and Core Web Vitals field data?
Lighthouse scores are generated in a lab environment under controlled conditions. They are useful for diagnosing specific technical problems but do not reflect the variance in real user experiences. Core Web Vitals field data comes from the Chrome User Experience Report, which aggregates real performance data from Chrome users visiting your pages over a 28-day rolling window. Google uses field data for ranking purposes, not Lighthouse scores. A page can score well in Lighthouse and still have poor field data if real users are on slower devices or connections.
What replaced First Input Delay in Core Web Vitals?
Interaction to Next Paint replaced First Input Delay as an official Core Web Vital in March 2024. INP is a more comprehensive measure of page interactivity than FID. Where FID only captured the delay before the browser could begin processing the first user interaction, INP measures the response latency of all interactions throughout the page visit and reports the worst one. The “good” threshold for INP is 200 milliseconds or under.
How do third-party scripts affect Core Web Vitals?
Third-party scripts, including analytics tools, chat widgets, advertising pixels, and personalisation platforms, add to the main thread workload of a page. When the browser is executing JavaScript, it cannot respond to user interactions, which directly degrades Interaction to Next Paint scores. They can also delay the loading of the LCP element if they are render-blocking. Auditing and removing unused third-party scripts is one of the highest-impact, lowest-cost ways to improve Core Web Vitals, particularly INP.
Where should I start if my Core Web Vitals scores are poor?
Start with the Core Web Vitals report in Google Search Console. It shows your field data segmented by mobile and desktop, identifies which metric is failing, and groups URLs by severity. Focus first on your highest-traffic organic landing pages, as that is where ranking and conversion impact will be greatest. Then use PageSpeed Insights to diagnose the specific causes on individual URLs. Fix LCP issues by addressing image optimisation, server response time, and resource loading order. Fix INP by auditing and reducing JavaScript on the main thread. Fix CLS by ensuring all elements have explicit dimensions before content loads.
