LCP, INP, and CLS are Google's three Core Web Vitals in 2026. Here are the thresholds, the real ranking impact, and the specific fixes for each metric.

Core Web Vitals became a Google ranking signal in 2021. In the years since, Google has updated the metric set (replacing FID with INP in 2024), refined the thresholds, and integrated CWV data more deeply into how Search Generative Experience and AI Overviews evaluate page quality. In 2026, a site with poor Core Web Vitals is not just a slow site — it is a site that Google's systems have explicitly flagged as delivering a poor user experience.
This guide covers all three current metrics in detail: what they measure, what the thresholds are, what causes failures, and what to fix. It includes specific code examples for Next.js and general patterns applicable to any stack. At the end, there is a quick-reference table that maps common problems to specific fixes.
LCP measures the time from when a user initiates a page navigation until the largest visible content element in the viewport is fully rendered. The "largest content element" is determined by the browser and is typically one of:
- An <img> element (including an <img> inside <picture>)
- A CSS background-image
- A <video> element's poster image

The LCP element is not fixed — it can change during the page load as new content appears. The browser reports the final LCP element once loading is complete.
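You can watch the browser's LCP candidates change during load with a PerformanceObserver. A minimal sketch for Chromium browsers — the formatLCPEntry helper and the environment guard are illustrative additions, not part of any library:

```js
// Sketch: log the browser's current LCP candidate as it changes.
// Guarded so it is a no-op outside Chromium (or in Node).
function formatLCPEntry(entry) {
  return `LCP candidate at ${Math.round(entry.startTime)}ms (size ${entry.size})`
}

if (
  typeof PerformanceObserver !== 'undefined' &&
  PerformanceObserver.supportedEntryTypes?.includes('largest-contentful-paint')
) {
  new PerformanceObserver((list) => {
    // Each new entry is a larger element than the previous candidate
    for (const entry of list.getEntries()) {
      console.log(formatLCPEntry(entry), entry.element)
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true })
}
```

The buffered: true option replays entries that occurred before the observer was registered, so the snippet works even when it runs late in the load.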
| Score | LCP Value |
|---|---|
| Good | Under 2.5 seconds |
| Needs improvement | 2.5 to 4.0 seconds |
| Poor | Over 4.0 seconds |
Slow server TTFB. The browser cannot start rendering until it receives the first bytes of HTML. A TTFB above 600ms makes a Good LCP score very difficult to achieve regardless of how optimized the rest of the page is. TTFB is affected by server processing time, network distance, and proxy configuration — including Apache buffering settings if you are running a Node.js app behind a reverse proxy.
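As a quick field check, TTFB can be read directly from the Navigation Timing API. A minimal sketch — the ttfbFromNavigation helper name is our own, and the lookup is a no-op outside the browser:

```js
// Sketch: compute TTFB from a PerformanceNavigationTiming entry.
// responseStart = first byte received; startTime = navigation start (usually 0).
function ttfbFromNavigation(entry) {
  return entry.responseStart - entry.startTime
}

if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const [nav] = performance.getEntriesByType('navigation')
  if (nav) console.log(`TTFB: ${Math.round(ttfbFromNavigation(nav))}ms`)
}
```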
Unoptimized hero images. Hero images that are not compressed, not converted to modern formats (WebP or AVIF), or not appropriately sized for the viewport are the single most common LCP failure cause. A 2 MB JPEG hero image that could be a 120 KB WebP is an unforced error.
Missing fetchpriority hint. Browsers use resource priority to decide what to fetch first. Images start at low priority by default, and the browser only raises an image's priority once layout reveals that it is in the viewport. Marking the LCP image as high priority tells the browser to fetch it immediately, ahead of other resources — this alone can reduce LCP by 200–400ms.
Render-blocking resources. CSS in the <head> blocks rendering entirely until it is downloaded and parsed. JavaScript in the <head> without async or defer does the same. Both delay the point at which the browser can start painting any content.
Add fetchpriority="high" to your hero image:
```html
<!-- Standard HTML -->
<img
  src="/hero.webp"
  fetchpriority="high"
  alt="Hero image description"
  width="1200"
  height="630"
/>
```
In Next.js, use the priority prop on above-the-fold images:
```jsx
import Image from 'next/image'

// The priority prop sets fetchpriority="high" and disables lazy loading
<Image
  src="/hero.webp"
  alt="Hero image description"
  width={1200}
  height={630}
  priority
/>
```
Omitting priority on a hero image is one of the most common Next.js LCP issues found in automated audits. The next/image component lazy-loads images by default — which is correct behavior for below-the-fold images but counterproductive for the element that is likely to be the LCP candidate.
Add a preload link for the LCP image:
```html
<link
  rel="preload"
  as="image"
  href="/hero.webp"
  fetchpriority="high"
/>
```
In Next.js App Router, add preload hints via generateMetadata or a custom <head> segment.
Convert images to WebP or AVIF. WebP is typically 25–35% smaller than JPEG at equivalent visual quality, and AVIF often reaches around 50% smaller. Both formats are supported by all modern browsers. Next.js next/image serves WebP automatically when the browser supports it — but only if you are using the next/image component, not raw <img> tags.
Reduce render-blocking CSS. Inline critical CSS (the styles needed to render above-the-fold content) directly in the <head>. Load the full stylesheet asynchronously using the rel="preload" pattern:
```html
<link rel="preload" href="/styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/styles.css"></noscript>
```
INP (Interaction to Next Paint) replaced First Input Delay (FID) as a Core Web Vital in March 2024. Understanding why it replaced FID helps explain what INP actually measures and why it is harder to pass.
FID measured only the delay before the browser responded to the very first user interaction — the first click, tap, or key press after the page loaded. It captured whether the browser's main thread was busy during that one moment. FID was gameable: a page with a fast first interaction could still freeze for seconds on every subsequent click.
INP measures all interactions throughout the page session and reports the 98th percentile interaction latency. "Interaction" means a complete cycle: user input (click, tap, key press) → browser processes the event → the next frame is painted with the response. At the 98th percentile, a session with 100 interactions reports roughly its second-slowest one. You cannot hide a sluggish UI behind a fast first click.
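That selection rule can be sketched as a pure function. This is a simplified model, assuming the heuristic used by Google's web-vitals library: ignore the single worst interaction for every 50 recorded, then report the worst remaining latency:

```js
// Simplified sketch of INP's high-percentile selection (an assumption, not the
// exact browser implementation): drop one outlier per 50 interactions.
function approximateINP(durations) {
  if (durations.length === 0) return 0
  const sorted = [...durations].sort((a, b) => b - a) // slowest first
  const outliersIgnored = Math.floor(durations.length / 50)
  return sorted[Math.min(outliersIgnored, sorted.length - 1)]
}
```

For a short session with only a handful of interactions, nothing is ignored, so INP is simply the slowest interaction — one 950ms click is enough to fail the metric.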
| Score | INP Value |
|---|---|
| Good | Under 200ms |
| Needs improvement | 200 to 500ms |
| Poor | Over 500ms |
Long JavaScript tasks blocking the main thread. The browser's main thread handles both JavaScript execution and rendering. When a JavaScript task runs for more than 50ms, the browser cannot respond to user input during that time. A task that runs for 300ms means a click during that window waits 300ms before the browser can even start processing the click event — before any rendering happens.
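Long tasks can be surfaced in the field with the Long Tasks API (Chromium-only, hence the guard). A minimal sketch; the 50ms threshold matches the definition above:

```js
// Sketch: log main-thread tasks longer than 50ms via the Long Tasks API.
const LONG_TASK_THRESHOLD_MS = 50
const isLongTask = (durationMs) => durationMs > LONG_TASK_THRESHOLD_MS

if (
  typeof PerformanceObserver !== 'undefined' &&
  PerformanceObserver.supportedEntryTypes?.includes('longtask')
) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Any interaction arriving during this window waited at least this long
      console.warn(`Long task: ${Math.round(entry.duration)}ms on the main thread`)
    }
  }).observe({ type: 'longtask', buffered: true })
}
```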
Heavy event handlers. An onClick handler that synchronously reads from the DOM, recalculates layout, and updates multiple elements can easily exceed 200ms. The handler runs to completion before the browser paints the response.
Third-party analytics and tag manager scripts. Google Tag Manager, Hotjar, Meta Pixel, and similar scripts execute JavaScript on your pages. If they run during user interactions — or if they install event listeners that fire on every click — they contribute to INP. Many analytics scripts are poorly optimized for INP because they were written before INP existed as a metric.
Unoptimized React/Next.js renders. In React-based applications, a state update triggered by a user interaction re-renders the component tree. If the component tree is large and not memoized, re-renders are expensive. A click that triggers a state update that re-renders 200 components synchronously will produce a high INP.
Break up long tasks with scheduler.yield() or setTimeout:
```js
// Before: one long synchronous task
button.addEventListener('click', () => {
  processLargeDataset() // 200ms
  updateDOM() // 100ms
  sendAnalytics() // 50ms
})

// After: yield to the browser between tasks.
// scheduler.yield() is Chromium-only, so fall back to setTimeout elsewhere.
const yieldToMain = () =>
  'scheduler' in window && 'yield' in scheduler
    ? scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0))

button.addEventListener('click', async () => {
  processLargeDataset()
  await yieldToMain() // browser can paint here
  updateDOM()
  await yieldToMain() // browser can paint again
  sendAnalytics()
})
```
Defer non-critical JavaScript. Scripts that are not needed for the initial interaction (analytics, chat widgets, comment systems) should be loaded with defer or after an idle callback:
```js
// Load non-critical scripts after the browser is idle.
// requestIdleCallback is not available in Safari, so fall back to setTimeout.
const loadNonCritical = () => {
  const script = document.createElement('script')
  script.src = '/non-critical.js'
  document.head.appendChild(script)
}

if ('requestIdleCallback' in window) {
  requestIdleCallback(loadNonCritical)
} else {
  setTimeout(loadNonCritical, 2000)
}
```
In Next.js, use the <Script> component with strategy="lazyOnload" for analytics and third-party scripts that do not affect the initial render:
```jsx
import Script from 'next/script'

<Script
  src="https://example.com/analytics.js"
  strategy="lazyOnload"
/>
```
Use startTransition for expensive React state updates:
```jsx
import { useState, startTransition } from 'react'

function SearchInput() {
  const [query, setQuery] = useState('')
  const [results, setResults] = useState([])

  const handleInput = (e) => {
    // Urgent: update the input immediately
    setQuery(e.target.value)
    // Non-urgent: defer the expensive results update
    startTransition(() => {
      setResults(computeSearchResults(e.target.value))
    })
  }

  return <input value={query} onInput={handleInput} />
}
```
Use requestAnimationFrame for DOM updates that need to be coordinated with rendering:
```js
button.addEventListener('click', () => {
  requestAnimationFrame(() => {
    // DOM update runs in sync with the browser's render cycle
    element.classList.add('active')
  })
})
```
CLS measures visual stability — specifically, how much the visible content of a page unexpectedly shifts position after the initial render. A layout shift happens when an element that is already rendered moves to a new position. The CLS score is the sum of all individual shift scores throughout the page session, where each shift score is calculated as:
layout shift score = impact fraction × distance fraction
The impact fraction is the proportion of the viewport affected by the shift. The distance fraction is the maximum distance any element moved as a fraction of the viewport. A large element that moves a short distance scores lower than a small element that moves far.
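A worked sketch of that formula, under the simplifying assumption that the shifted element spans the full viewport width (so the impact fraction reduces to heights):

```js
// Sketch: score a single shift for an element of height elementHeight that
// moves down by moveDistance in a viewport of height viewportHeight.
// All values in CSS pixels; full-width element assumed.
function shiftScore(elementHeight, moveDistance, viewportHeight) {
  // Impact fraction: viewport area touched by the element before and after the shift
  const impactFraction =
    Math.min(elementHeight + moveDistance, viewportHeight) / viewportHeight
  // Distance fraction: how far the element moved, relative to the viewport
  const distanceFraction = Math.min(moveDistance, viewportHeight) / viewportHeight
  return impactFraction * distanceFraction
}
```

For example, a 300px-tall element pushed down 100px in an 800px-tall viewport scores 0.5 × 0.125 = 0.0625, already more than half the 0.1 Good budget from a single shift.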
| Score | CLS Value |
|---|---|
| Good | Under 0.1 |
| Needs improvement | 0.1 to 0.25 |
| Poor | Over 0.25 |
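Layout shifts can be observed in the field with a layout-shift PerformanceObserver. A sketch — note that the real metric groups shifts into session windows and reports the worst window, so this running sum slightly overestimates CLS on long-lived pages:

```js
// Sketch: sum layout-shift entries the way field tools do, excluding shifts
// that follow recent user input (hadRecentInput), which the metric discounts.
function accumulateCLS(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0)
}

if (
  typeof PerformanceObserver !== 'undefined' &&
  PerformanceObserver.supportedEntryTypes?.includes('layout-shift')
) {
  new PerformanceObserver((list) => {
    console.log('Layout shift total so far:', accumulateCLS(list.getEntries()))
  }).observe({ type: 'layout-shift', buffered: true })
}
```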
Images without explicit width and height. When the browser encounters an <img> tag without width and height attributes, it does not know how much space to reserve for the image before it loads. It renders the surrounding text, then when the image loads it pushes everything below it down — a textbook layout shift. This is the most widespread CLS cause across all frameworks.
In Next.js, using next/image without specifying width and height (in non-fill mode) throws an error during rendering — but raw <img> tags in the same codebase have no such protection. Automated audits regularly find <img> tags without dimensions alongside correctly configured next/image components.
Late-loading web fonts causing text reflow. When a web font loads after the browser has already rendered text using a system font fallback, the text shifts as it reflows to the new font metrics. This is especially visible for fonts with very different metrics from the system font fallback (Poppins loaded over Arial fallback, for example).
Dynamically injected content above existing content. Cookie consent banners, notification bars, and sticky headers injected after the initial render push existing content down. Ads loaded into reserved containers do not cause CLS, but ads loaded into containers without reserved height do.
Animations using top, left, margin, or similar properties. CSS animations that change layout properties cause layout shifts. Animations that use transform: translate() instead do not, because transforms do not trigger layout recalculation.
Always specify width and height on images:
```html
<!-- This causes CLS -->
<img src="/product.webp" alt="Product image">

<!-- This does not cause CLS — aspect ratio is calculated from dimensions -->
<img src="/product.webp" alt="Product image" width="800" height="600">
```
In Next.js, all next/image usage in non-fill mode requires width and height. For responsive images where you do not know the exact dimensions, use the fill prop with a sized parent container:
```jsx
<div style={{ position: 'relative', aspectRatio: '16/9' }}>
  <Image
    src="/hero.webp"
    alt="Hero image"
    fill
    style={{ objectFit: 'cover' }}
  />
</div>
```
Use font-display: swap and a closely matched system font fallback:
```css
@font-face {
  font-family: 'Poppins';
  src: url('/fonts/poppins-400.woff2') format('woff2');
  font-display: swap;
}
```
font-display: swap tells the browser to render text with the fallback font immediately, then swap to the web font when it loads. This causes a single swap event but avoids the invisible text (FOIT) that can make pages appear broken on slow connections.
To minimize the CLS caused by font-display swap, use size-adjust and other CSS font metric overrides to make the fallback font match the web font metrics as closely as possible:
```css
@font-face {
  font-family: 'Poppins-fallback';
  src: local('Arial');
  size-adjust: 105%;
  ascent-override: 96%;
  descent-override: 22%;
}
```
Reserve space for dynamic content:
```css
/* Reserve height for a cookie banner before it loads */
.cookie-banner-placeholder {
  min-height: 60px;
}

/* Reserve height for an ad container */
.ad-container {
  min-height: 250px;
  width: 300px;
}
```
Use transform instead of layout-affecting properties for animations:
```css
/* This causes layout shifts — avoid */
.slide-in-layout {
  animation: slideInLayout 0.3s ease;
}
@keyframes slideInLayout {
  from { margin-left: -100px; }
  to { margin-left: 0; }
}

/* This does not cause layout shifts */
.slide-in {
  animation: slideInTranslate 0.3s ease;
}
@keyframes slideInTranslate {
  from { transform: translateX(-100px); }
  to { transform: translateX(0); }
}
```
Site owners frequently ask a direct question: do Core Web Vitals actually affect Google rankings, and by how much? The honest answer has two parts.
CWV are a tiebreaker, not a primary signal. Google has been explicit about this since the Page Experience update launched. A page with excellent content, strong E-E-A-T signals, and good backlink profile will outrank a faster competitor with weaker content. The CWV ranking factor is not strong enough to overcome a significant content or authority gap.
The difference between Good and Poor is measurable. Google's own research found that pages meeting the CWV thresholds see a 24% lower abandonment rate than pages that do not. Abandonment shapes the engagement signals Google uses to assess satisfaction with a search result, which can feed back into rankings. A site at LCP 5s is not just slower — it has a measurably higher chance that users leave before engaging with the content.
The practical threshold to care about: if your LCP is above 4s, your INP is above 500ms, or your CLS is above 0.25, you are in "Poor" territory and the ranking impact is real. Improving from Poor to Needs Improvement to Good is worth the engineering time. Pushing an already-Good metric further (say, LCP from 2.3s to 1.8s) has diminishing returns on ranking impact — focus on content and authority signals instead.
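Those buckets are easy to encode if you script your own checks. A sketch that classifies p75 field values against the thresholds in the tables above (the function and parameter names are our own):

```js
// Sketch: bucket p75 field values into Good / Needs Improvement / Poor.
// lcpMs and inpMs are milliseconds; cls is the unitless CLS score.
function classifyCWV({ lcpMs, inpMs, cls }) {
  const bucket = (value, good, poor) =>
    value <= good ? 'good' : value <= poor ? 'needs-improvement' : 'poor'
  return {
    lcp: bucket(lcpMs, 2500, 4000),
    inp: bucket(inpMs, 200, 500),
    cls: bucket(cls, 0.1, 0.25),
  }
}
```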
Mobile vs desktop: measure what Google measures. Google uses mobile-first indexing. The CWV measurements that feed into ranking signals come from mobile field data (real users on mobile devices, captured in the Chrome User Experience Report). Most sites have significantly worse CWV on mobile than on desktop because mobile devices have less CPU, slower network connections, and render at lower viewport widths that trigger different layout behavior.
If your PageSpeed Insights desktop score is 90+ but your mobile score is 55, your ranking signal reflects the mobile score. Fix mobile first.
PageSpeed Insights (pagespeed.web.dev) shows both lab data (Lighthouse simulation) and field data (real users from CrUX). Field data requires sufficient traffic to your URL — new pages or low-traffic pages may show "no data available." Use lab data for development; field data is what Google uses for ranking.
Chrome User Experience Report (CrUX) is the dataset Google uses for the Page Experience ranking signal. CrUX collects performance data from real Chrome users who have opted into sharing usage statistics. You can query CrUX data via the CrUX API or view it in PageSpeed Insights. CrUX data is updated monthly and represents the 28-day rolling window of real user measurements.
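A sketch of querying the CrUX API directly. Assumptions: Node 18+ (for global fetch), and an API key obtained from the Google Cloud console; the key and origin you pass in are placeholders:

```js
// Sketch: pull the p75 values out of a CrUX API response record.
function extractP75(record) {
  const p75 = (metric) => record.metrics[metric]?.percentiles?.p75
  return {
    lcp: p75('largest_contentful_paint'),
    inp: p75('interaction_to_next_paint'),
    cls: p75('cumulative_layout_shift'),
  }
}

// Sketch: query CrUX for an origin's mobile field data.
// formFactor 'PHONE' because mobile data is what feeds the ranking signal.
async function fetchCruxP75(origin, apiKey) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin, formFactor: 'PHONE' }),
    }
  )
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`)
  return extractP75((await res.json()).record)
}
```

Note that CrUX may return some percentiles (CLS in particular) as strings, so coerce to numbers before comparing against thresholds.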
Google Search Console Core Web Vitals report groups your pages into Good, Needs Improvement, and Poor buckets based on CrUX field data. It is the most actionable view of your CWV status for a whole site because it shows you which URL groups have problems and how many pages are affected. Find it under Experience → Core Web Vitals in Search Console.
Chrome DevTools Performance panel is the best tool for diagnosing specific LCP, INP, and CLS issues at the code level. Record a performance trace during page load or during user interactions to see exactly which JavaScript tasks are blocking rendering and which elements are causing layout shifts.
| Issue | Metric Affected | Fix |
|---|---|---|
| Hero image without fetchpriority="high" | LCP | Add fetchpriority="high" or priority prop in Next.js |
| Hero image is JPEG/PNG, not WebP | LCP | Convert to WebP; use next/image for automatic conversion |
| TTFB above 600ms | LCP | Fix server processing time; check Apache proxy buffering |
| Render-blocking CSS in <head> | LCP | Inline critical CSS; load full stylesheet asynchronously |
| Long JavaScript task on click | INP | Split task with scheduler.yield(); use startTransition |
| Analytics script running on every click | INP | Load with strategy="lazyOnload"; use requestIdleCallback |
| Large unoptimized React re-render | INP | Memoize with React.memo, useMemo, useCallback |
| Image without width and height | CLS | Add explicit dimensions; use fill + sized parent container |
| Web font causing text reflow | CLS | Use font-display: swap; add size-adjust to fallback font |
| Cookie banner injected after render | CLS | Reserve space with min-height before banner loads |
| Layout animation using margin or top | CLS | Use transform: translate() instead |
| Ad container without reserved height | CLS | Set min-height matching ad dimensions on container |
The fastest way to identify which Core Web Vitals issues affect your specific site is a full technical audit — one that measures your actual field data and flags the specific elements causing LCP, INP, and CLS failures.
The free audit at seo.yatna.ai checks all three Core Web Vitals, identifies the specific elements and resources causing failures, and provides prioritized fixes. It takes under two minutes to run and gives you a score across six SEO dimensions — not just performance.
If your LCP failures are related to Apache configuration (TTFB, missing compression, or incorrect cache headers), see Apache compression for Next.js in Docker for the three config changes that fix the infrastructure layer.
About the Author

Ishan Sharma
Head of SEO & AI Search Strategy
Ishan Sharma is Head of SEO & AI Search Strategy at seo.yatna.ai. With over 10 years of technical SEO experience across SaaS, e-commerce, and media brands, he specialises in schema markup, Core Web Vitals, and the emerging discipline of Generative Engine Optimisation (GEO). Ishan has audited over 2,000 websites and writes extensively about how structured data and AI readiness signals determine which sites get cited by ChatGPT, Perplexity, and Claude. He is a contributor to Search Engine Journal and speaks regularly at BrightonSEO.