Core Web Vitals: What Moves Rankings

Core Web Vitals are a set of performance metrics Google uses to measure real-world user experience on web pages, specifically how fast content loads, how quickly a page responds to interaction, and how stable the layout is during load. They became a confirmed ranking signal in 2021 and have remained a meaningful, if frequently misunderstood, part of technical SEO ever since. Getting them right does not guarantee top rankings, but getting them badly wrong creates a measurable drag on organic performance that compounds over time.

Key Takeaways

  • Core Web Vitals measure three specific things: load speed (LCP), interactivity (INP), and layout stability (CLS). Each has a defined threshold for passing or failing.
  • Poor Core Web Vitals scores rarely cause sudden ranking drops. They cause slow, compounding underperformance that is easy to misattribute to other factors.
  • Most CWV problems trace back to decisions made in design, development, or hosting, not in the CMS or SEO settings. Fixing them requires cross-functional cooperation.
  • Measuring Core Web Vitals accurately requires field data from real users, not just lab data from tools. Google uses Chrome User Experience Report data, not Lighthouse scores, for ranking.
  • Chasing a perfect score is a distraction. The commercial objective is passing the thresholds, then focusing effort on content and authority where ranking leverage is far higher.

I have sat in enough technical SEO reviews to know how this topic gets handled in most agencies: someone runs a Lighthouse audit, screenshots a score in the 30s, and a mild panic ensues. The client wants to know why their site is “broken.” The dev team wants to know why marketing is suddenly interested in server response times. And everyone ends up arguing about metrics that may or may not reflect what Google actually sees. This article cuts through that noise.

What Are the Three Core Web Vitals Metrics?

Google measures three signals under the Core Web Vitals umbrella. Each one targets a distinct dimension of user experience, and each has a clear pass threshold.

Largest Contentful Paint (LCP) measures loading performance. Specifically, it tracks how long it takes for the largest visible content element on the page to render. That element is usually a hero image, a large block of text, or a video thumbnail. Google’s threshold is 2.5 seconds or faster for a “good” score. Between 2.5 and 4 seconds is “needs improvement.” Anything above 4 seconds is poor.

Interaction to Next Paint (INP) replaced First Input Delay as a Core Web Vital in March 2024. INP measures the time from a user interaction (a click, a tap, a keystroke) to the next frame being painted on screen. It is a more comprehensive measure of responsiveness than its predecessor because it captures all interactions during a session, not just the first one. The threshold for a good INP score is 200 milliseconds or under.

Cumulative Layout Shift (CLS) measures visual stability. It captures how much the page layout shifts unexpectedly during load. If a button moves just before a user clicks it, or an image pushes text down the page after the initial render, that registers as a layout shift. The good threshold is a CLS score of 0.1 or lower. This one is often the easiest to fix and the most immediately noticeable to users.
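If you want to watch these metrics on your own pages rather than relying solely on Google's tooling, the open-source web-vitals JavaScript library reports all three from real sessions. A minimal sketch, assuming the library is installed from npm and that /analytics is a placeholder for whatever endpoint you use to collect the values:

```js
// Report LCP, INP, and CLS from real user sessions with the web-vitals library.
import { onCLS, onINP, onLCP } from 'web-vitals';

// Send each final metric value to a collection endpoint. The '/analytics'
// path is a placeholder; substitute your own RUM endpoint.
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'LCP' | 'INP' | 'CLS'
    value: metric.value, // milliseconds for LCP and INP, unitless for CLS
    id: metric.id,       // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```

Your own field data will rarely match CrUX exactly, because CrUX samples only opted-in Chrome users over a rolling 28-day window, but the library applies the same metric definitions.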

These three metrics sit within a broader set of Page Experience signals that Google evaluates, which also includes mobile-friendliness, HTTPS, and the absence of intrusive interstitials. Core Web Vitals are the most technically demanding part of that picture, and the most frequently mismanaged.

If you want to see how Core Web Vitals fit into the broader technical and content architecture of SEO, the Complete SEO Strategy hub covers the full picture, from site structure and technical foundations through to content, links, and measurement.

How Do Core Web Vitals Actually Affect Rankings?

Google has been consistent on this: Core Web Vitals are a tiebreaker, not a primary ranking signal. If two pages are roughly equivalent on relevance, content quality, and authority, the one with better page experience signals may rank higher. That framing is important because it stops you from overinvesting in performance optimisation at the expense of content and links, where the ranking leverage is substantially greater.

That said, the tiebreaker framing can cause people to underinvest. In competitive categories where dozens of pages are genuinely close in quality, page experience signals matter more than they would in a category where one site has a dominant authority advantage. I have seen this play out in financial services and e-commerce verticals where the content quality across the top ten results is remarkably similar. In those situations, technical performance becomes a differentiator.

The more commercially significant impact of poor Core Web Vitals is not directly on rankings. It is on conversion rates and paid media efficiency. When I was running agency operations across large retail and financial services accounts, we consistently found that slow load times and unstable layouts degraded conversion rates in ways that dwarfed any direct ranking effect. If you are spending on paid search and your landing pages fail Core Web Vitals thresholds, you are paying to send traffic to a poor experience. The cost is real and measurable.

The Semrush guide to measuring Core Web Vitals covers the available tools and how to interpret the data, which is a useful reference when you are setting up a measurement baseline.

Field Data vs Lab Data: Why the Distinction Matters

This is where most Core Web Vitals conversations go wrong, and it is worth being direct about it. Google does not use Lighthouse scores for ranking. It uses field data from the Chrome User Experience Report (CrUX), which aggregates real performance data from real users on real devices and connections over a rolling 28-day window.

Lab data tools like Lighthouse, PageSpeed Insights (in lab mode), and WebPageTest simulate page loads under controlled conditions. They are useful for diagnosing problems and testing fixes. But they do not represent what Google measures. A page can score 95 on Lighthouse and still fail Core Web Vitals thresholds in the field if the real user population is predominantly on slower mobile connections.

The right way to check your actual Core Web Vitals status is through Google Search Console, under the Core Web Vitals report. That report uses CrUX data. It will show you which URLs are passing or failing, and it groups URLs by similar page templates, which makes it easier to identify systemic issues rather than chasing individual page scores.

PageSpeed Insights also shows CrUX data for individual URLs when sufficient field data exists. If your site does not have enough traffic to generate CrUX data, you will only see lab data, which means you are working with approximations. For low-traffic sites, this is a genuine limitation, and it is worth being honest about that uncertainty rather than treating lab scores as ground truth.
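If you want to pull the same field data programmatically, for example to build a monitoring dashboard, the CrUX API exposes the dataset directly. A minimal sketch, assuming a free API key from the Google Cloud console (the key and URL below are placeholders):

```js
// Query the Chrome UX Report API for a URL's field data.
const API_KEY = 'YOUR_CRUX_API_KEY'; // placeholder

async function getFieldData(url) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor: 'PHONE' }),
    }
  );
  // A 404 means CrUX has no field data for this URL.
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
  const { record } = await res.json();
  // Each metric includes a 75th-percentile value and the
  // good / needs-improvement / poor distribution.
  return record.metrics;
}

getFieldData('https://example.com/').then(console.log);
```

A 404 from this endpoint is the API telling you there is not enough field data for that URL, which is the same limitation described above.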

The Most Common Causes of Core Web Vitals Failures

Most Core Web Vitals problems are not mysterious. They trace back to a relatively small set of causes, and the majority of them originate in decisions made during design and development, not in SEO settings.

For LCP, the most common culprits are unoptimised images, render-blocking resources (JavaScript and CSS that prevent the browser from rendering the page), slow server response times, and lazy-loading applied incorrectly to above-the-fold content. If your hero image is a 2MB PNG being loaded through a third-party CDN with no preload hint, your LCP will suffer regardless of how well everything else is configured.

For INP, the dominant cause is heavy JavaScript execution on the main thread. Single-page applications built on frameworks like React or Vue can be particularly susceptible if they are not carefully optimised, because interaction handling competes with other main-thread tasks. Third-party scripts, including tag managers, chat widgets, and ad scripts, are frequent contributors. I have seen INP scores deteriorate significantly after a tag manager was used to deploy a poorly written third-party script, with no one making the connection for weeks.
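One way to catch this in the field is the Long Tasks API, which surfaces any main-thread task over 50 milliseconds. A small diagnostic sketch you can run in the console; attribution is coarse, but it often points at the offending script:

```js
// Log main-thread tasks longer than 50ms, the raw material of poor INP.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `Long task: ${Math.round(entry.duration)}ms`,
      // Attribution is coarse; containerSrc often names a third-party script.
      entry.attribution?.[0]?.containerSrc || '(no attribution)'
    );
  }
}).observe({ type: 'longtask', buffered: true });
```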

For CLS, the usual suspects are images and embeds without defined dimensions, dynamically injected content above existing content, and web fonts that cause text to reflow as they load. These are generally the easiest fixes. Setting explicit width and height attributes on images and using font-display: optional for web fonts (swap reduces invisible text but can still trigger a visible reflow) resolves most CLS issues without significant engineering effort.

The relationship between site architecture decisions and performance is well-documented. Search Engine Land’s analysis of SEO and site architecture captures how foundational structural decisions affect both crawlability and performance, which is relevant context when you are diagnosing systemic CWV issues rather than individual page problems.

How to Diagnose and Prioritise Core Web Vitals Issues

The diagnostic process should follow a consistent sequence. Start with Google Search Console to identify which URL groups are failing and at what scale. A site with 500 failing URLs across product pages has a different problem profile than one with 12 failing URLs on specific blog posts. The scale and pattern of failures tell you whether you are dealing with a systemic issue or an isolated one.

Once you have identified the failing URL groups, use PageSpeed Insights or Chrome DevTools to investigate individual examples from each group. Look at the specific metric that is failing, not just the overall score, and work through the diagnostic information the tool provides. For LCP, PageSpeed Insights identifies the element and breaks down the time spent in each phase of the load process. For INP, you may need Chrome's performance panel to trace specific interactions. For CLS, the layout shift visualisation in DevTools shows exactly which elements are shifting and when.
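For CLS in particular, the Layout Instability API will tell you precisely which elements moved. A small console sketch:

```js
// Log each unexpected layout shift with the DOM nodes that moved.
// Shifts within 500ms of user input set hadRecentInput and are
// excluded from CLS, so they are filtered out here too.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.hadRecentInput) continue;
    console.log(
      `Layout shift: ${entry.value.toFixed(4)}`,
      entry.sources?.map((source) => source.node) // the shifted elements
    );
  }
}).observe({ type: 'layout-shift', buffered: true });
```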

Prioritise fixes based on two factors: the number of URLs affected and the severity of the failure. A CLS issue affecting your entire e-commerce category template is a higher priority than an LCP issue on three blog posts, even if the individual LCP scores look worse. Think about the commercial surface area of the problem, not just the metric value.

When I was leading technical SEO strategy for large retail clients, we used a simple triage framework: fix anything that affects high-traffic, high-conversion templates first, regardless of how complex the fix is. The commercial case for prioritisation is always stronger than the technical case, and it is the only argument that gets engineering resource allocated quickly. Presenting a list of Lighthouse failures to a development team gets you nowhere. Presenting the same information as “these 200 product pages are underperforming and here is the estimated conversion impact” gets you a sprint slot.

The HubSpot guide on web design and SEO covers how design decisions intersect with performance, which is useful context when you are making the case for fixes to stakeholders who think of design and SEO as separate concerns.

Specific Fixes That Move the Metrics

Rather than a generic list of optimisation advice, these are the fixes that consistently move the needle on each metric in real-world implementations.

For LCP: Convert hero images to WebP or AVIF format and compress them appropriately. Add a preload hint in the document head for the LCP element so the browser fetches it early. Remove render-blocking resources by deferring non-critical JavaScript and inlining critical CSS. If your server response time (Time to First Byte) is above 600ms, address hosting or caching before anything else, because no amount of front-end optimisation compensates for a slow server.
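In markup terms, those image fixes look something like the sketch below. The file paths are placeholders; the preload hint and fetchpriority attribute tell the browser the hero image matters, and the picture element serves modern formats with a fallback:

```html
<head>
  <!-- Fetch the LCP image early, before the parser discovers it.
       The type attribute means unsupporting browsers skip the preload. -->
  <link rel="preload" as="image" href="/img/hero.avif" type="image/avif">
  <!-- Defer non-critical JavaScript so it does not block rendering. -->
  <script src="/js/app.js" defer></script>
</head>
<body>
  <picture>
    <source srcset="/img/hero.avif" type="image/avif">
    <source srcset="/img/hero.webp" type="image/webp">
    <!-- Above the fold: load eagerly and at high priority.
         Never lazy-load the LCP element. -->
    <img src="/img/hero.jpg" alt="Hero"
         width="1200" height="600"
         loading="eager" fetchpriority="high">
  </picture>
</body>
```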

For INP: Audit your third-party scripts and remove anything that is not delivering measurable value. Break up long JavaScript tasks into smaller chunks using techniques like setTimeout or the Scheduler API. If you are running a JavaScript-heavy framework, investigate whether server-side rendering or partial hydration can reduce the interaction latency for key user actions.
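The task-splitting pattern is worth showing concretely, because it is less familiar than the image fixes. A minimal sketch that yields back to the main thread between chunks of work, using scheduler.yield() where the browser supports it and setTimeout as the fallback (processOrders and renderRow are illustrative names):

```js
// Yield control back to the main thread so pending user interactions
// can be handled between chunks of work.
function yieldToMain() {
  // scheduler.yield() is the purpose-built API where supported;
  // setTimeout(0) is the widely supported fallback.
  if (globalThis.scheduler?.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Illustrative: process a large list without creating one long task.
async function processOrders(items) {
  let deadline = performance.now() + 50; // roughly one 50ms task budget
  for (const item of items) {
    renderRow(item); // placeholder for your per-item work
    if (performance.now() >= deadline) {
      await yieldToMain();
      deadline = performance.now() + 50;
    }
  }
}
```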

For CLS: Add explicit width and height attributes to all images and video elements. Reserve space for dynamically loaded content, including ads and embeds, by defining container dimensions before the content loads. Use font-display: optional for web fonts if CLS from font swapping is significant, accepting that some users may see a fallback font instead of the brand font.
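The markup and CSS side of these CLS fixes is brief. A sketch, with illustrative dimensions, class names, and file paths:

```html
<!-- Explicit dimensions let the browser reserve space before the
     image loads, so nothing shifts when it arrives. -->
<img src="/img/product.webp" alt="Product" width="800" height="600">

<!-- Reserve space for an ad slot or embed before its content loads. -->
<div class="ad-slot" style="min-height: 250px;"></div>

<style>
  @font-face {
    font-family: 'BrandFont';
    src: url('/fonts/brand.woff2') format('woff2');
    /* 'optional' avoids a late swap that reflows text; users on slow
       connections may see the fallback font instead. */
    font-display: optional;
  }
</style>
```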

If you are evaluating or rebuilding your site on a new platform, the choice of website builder or CMS has a significant bearing on your baseline Core Web Vitals performance. Semrush’s analysis of website builders for SEO includes performance considerations that are relevant if you are at the platform selection stage.

The Organisational Problem Nobody Talks About

Core Web Vitals failures are rarely a technical problem in isolation. They are usually an organisational problem wearing a technical costume. The metrics fail because design, development, and marketing teams made decisions in silos, and nobody owned the performance outcome.

I saw this pattern repeatedly when growing the agency I ran from 20 to 100 people. As the client roster grew and the complexity of engagements increased, technical performance started slipping on accounts where there was no clear owner of the end-to-end experience. The SEO team flagged issues. The dev team said they were not in scope. The client said they had not budgeted for it. And the site sat failing its Core Web Vitals thresholds for months while everyone pointed at each other.

The fix is not technical. It is governance. Someone needs to own page experience as a defined business metric, with a budget to address it and authority to prioritise it against other development work. Without that, you will run the same audit every six months and find the same problems.

This connects to a broader point about measurement and accountability in marketing. If you cannot attribute a business outcome to a piece of work, it tends not to get done. Core Web Vitals improvements are often invisible to stakeholders because the impact shows up in conversion rates and organic traffic over time, not in a dashboard that updates daily. Making that case clearly, with honest approximations of commercial impact rather than false precision, is what gets the work prioritised.

The intersection of SEO and web design, covered in the HubSpot guide linked earlier, is a useful reference for teams trying to establish shared ownership of performance between design and technical stakeholders.

What Good Looks Like in Practice

Passing Core Web Vitals thresholds on the majority of your key commercial pages is the goal. Not a perfect score. Not green across every URL on a site with thousands of pages. The commercial objective is to remove performance as a drag on organic and paid performance for the pages that matter most.

In practice, that means starting with your highest-traffic, highest-commercial-value URL groups: your homepage, your top category pages, your key product or service pages, and your primary landing pages for paid campaigns. Get those passing. Then work through secondary pages systematically.

It also means maintaining what you fix. Core Web Vitals scores degrade over time as new content, scripts, and design changes are added. A site that passes today can fail in six months if there is no process for checking performance before deploying changes. Build a lightweight performance review into your development deployment process. It does not need to be elaborate. A pre-deployment Lighthouse check and a monthly review of Search Console’s Core Web Vitals report is sufficient for most sites.
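If your build pipeline runs on Node, Lighthouse CI is one way to automate that pre-deployment check. A minimal lighthouserc.js sketch, with illustrative URLs and thresholds; run it with npx lhci autorun:

```js
// lighthouserc.js -- consumed by `npx lhci autorun` in CI.
// URLs and thresholds below are illustrative; tune them to your site.
module.exports = {
  ci: {
    collect: {
      url: [
        'https://staging.example.com/',
        'https://staging.example.com/category/widgets',
      ],
      numberOfRuns: 3, // median of several runs reduces lab-data noise
    },
    assert: {
      assertions: {
        // Fail the build if lab proxies for the Core Web Vitals regress.
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['warn', { maxNumericValue: 300 }], // lab proxy for INP
      },
    },
  },
};
```

Remember this gates lab-data regressions only. Search Console remains the source of truth for what Google actually measures.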

Judging the Effie Awards gave me a useful lens on this. The campaigns that won were not the ones with the most sophisticated technology or the highest production values. They were the ones where everything worked together in service of a clear outcome. Core Web Vitals are the same. The technical sophistication is irrelevant. What matters is whether the experience works well enough that users stay, engage, and convert.

Core Web Vitals sit within a broader SEO strategy that covers content, authority, technical foundations, and measurement. If you want to see how all of those elements connect, the Complete SEO Strategy hub is the right place to start.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

Do Core Web Vitals directly affect Google rankings?
Yes, but as a tiebreaker rather than a primary signal. Google uses Core Web Vitals as part of its Page Experience signals, which influence rankings when other factors like content quality and authority are roughly equal between competing pages. The more commercially significant impact is often on conversion rates and paid media efficiency rather than rankings directly.
What is the difference between Lighthouse scores and Core Web Vitals?
Lighthouse is a lab-based tool that simulates page loads under controlled conditions. Core Web Vitals, as measured by Google for ranking purposes, use field data from the Chrome User Experience Report, which reflects real user experiences over a rolling 28-day period. A high Lighthouse score does not guarantee passing Core Web Vitals thresholds in the field, and vice versa. Google Search Console shows your actual field data status.
What are the three Core Web Vitals metrics?
The three Core Web Vitals are Largest Contentful Paint (LCP), which measures loading performance with a good threshold of 2.5 seconds or faster; Interaction to Next Paint (INP), which measures responsiveness with a good threshold of 200 milliseconds or under; and Cumulative Layout Shift (CLS), which measures visual stability with a good threshold of 0.1 or lower. INP replaced First Input Delay as a Core Web Vital in March 2024.
How do I check my site’s Core Web Vitals status?
The most accurate source is Google Search Console, under the Core Web Vitals report. This uses real field data from the Chrome User Experience Report and groups URLs by template type, making it easier to identify systemic issues. PageSpeed Insights also shows field data for individual URLs when sufficient data exists. For diagnosing specific issues, Chrome DevTools and the performance panel provide detailed diagnostic information.
Which pages should I prioritise for Core Web Vitals improvements?
Prioritise by commercial impact, not by score severity. Start with your highest-traffic, highest-conversion pages: your homepage, primary category or service pages, key product pages, and landing pages used in paid campaigns. A failing score on a high-traffic product template affects far more users and revenue than a worse score on a low-traffic blog post. Fix the commercial surface area first, then work through secondary pages systematically.
