
How to Audit Core Web Vitals in 2026 (and What Changed Since INP Replaced FID)

Vadim Kravcenko
May 17, 2026 · 12 min read

TL;DR: Core Web Vitals in 2026 are LCP, INP, and CLS. FID was retired and INP took its place on March 12, 2024. The ranking weight is smaller than most SEOs talk about (Google's own docs say relevance beats page experience), but the UX impact is real and the audit work still pays off. This is the audit guide I give clients: three metric thresholds, two-pass tooling (lab plus field), a fix list ranked by payoff, and a quarterly cadence that doesn't burn engineering on the wrong work.

I run quarterly site audits for a portfolio of mid-market sites at mindnow and seojuice.io. Every time, somebody asks me whether they need to push another sprint on Core Web Vitals. The honest answer is usually no. The protocol changed in 2024, the ranking weight is small, and most of what you read about CWV overstates the search-traffic upside. The audit still matters, because slow pages still bleed conversion and INP is a genuinely different metric from FID. This article is the audit playbook. It is not a how-to-optimize-INP technical post; the web.dev team writes those better than I can. It is the interpretation layer that sits on top.

What changed in Core Web Vitals between 2024 and 2026

The headline shift: First Input Delay (FID) was deprecated and Interaction to Next Paint (INP) became an official Core Web Vital on March 12, 2024. The Chrome team announced it directly:

"INP will officially become a Core Web Vital and replace FID on March 12 of this year." Jeremy Wagner and Rick Viscomi, web.dev blog, March 2024

If your audit template still pulls FID, you are pulling a deprecated metric. Search Console removed FID the same day. PageSpeed Insights surfaces INP. The Web Vitals JavaScript library v4 deprecated the FID measurement. Lab tools that still report FID are useful for triage but not for ranking decisions.

Why the swap. FID measured only the first interaction on a page, and only the input-delay portion of that interaction. A site could have one fast first click and still freeze on every later tap. The metric was also gameable: keep the main thread quiet until the first click registers, then pile the heavy work onto every interaction after it. INP closes both gaps by sampling every interaction and counting the full delay through to the next paint:

"INP improves on FID by observing all interactions on a page, beginning from the input delay, to the time it takes to run event handlers, and finally up until the browser has painted the next frame." Jeremy Wagner and Barry Pollard, web.dev, Interaction to Next Paint

The practical effect on audits: most sites that were green on FID are yellow or red on INP. The shift is mechanical, not a sign your site got worse. Plan for the new baseline before you tell stakeholders the score regressed.

Two smaller shifts also landed in the 2024 to 2026 window. CrUX, the field-data source behind Search Console's CWV report, increased its sampling depth, so 75th-percentile thresholds are calculated on more sessions. LCP gained a sub-part diagnostic (TTFB, resource load delay, resource load duration, element render delay) in PageSpeed Insights, which is the single most useful diagnostic add in years.

Timeline of Core Web Vitals changes 2024 to 2026: FID deprecated March 12 2024, INP becomes official, CrUX sampling depth increase, LCP sub-part diagnostic added
The 2024 to 2026 Core Web Vitals timeline. The metric set went from three to three, but the third metric changed shape and most sites' grades moved with it.

The three current metrics and their audit thresholds

Three metrics, one threshold table, applied at the 75th percentile of real-user traffic for mobile and desktop separately. The web.dev canonical page is explicit about the role:

"Core Web Vitals are the subset of Web Vitals that apply to all web pages, should be measured by all site owners, and will be surfaced across all Google tools." Philip Walton, web.dev, Web Vitals

The 75th-percentile cut is the piece most operators miss. You do not need every page session under the threshold. You need three-quarters of them. The slowest quarter of sessions, devices, and networks can sit past the threshold and the URL still earns a Good rating in CrUX.
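To make the 75th-percentile idea concrete, here is a minimal JavaScript sketch that computes p75 from an array of per-session metric samples; the sample values and the helper function are illustrative, not part of any Google tool.

```js
// Compute the 75th percentile of per-session metric samples,
// which is the value CrUX and Search Console grade against.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil(p * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Hypothetical LCP samples in milliseconds from your own RUM.
const lcpSamples = [1700, 1800, 2100, 2300, 2400, 2450, 3900, 5200];

const p75 = percentile(lcpSamples, 0.75);
console.log(`LCP p75: ${p75} ms`);
// 2450 ms rates as Good, even though the slowest quarter sits at 3900 to 5200 ms.
```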

Metric | What it measures | Good | Needs improvement | Poor
LCP (Largest Contentful Paint) | Time to render the largest visible image, text block, or video | ≤ 2.5 s | 2.5 to 4.0 s | > 4.0 s
INP (Interaction to Next Paint) | Slowest interaction-to-paint delay across all interactions on a page | ≤ 200 ms | 200 to 500 ms | > 500 ms
CLS (Cumulative Layout Shift) | Largest burst of unexpected layout shifts during the page lifecycle | ≤ 0.1 | 0.1 to 0.25 | > 0.25
TTFB (diagnostic only) | Server response time | ≤ 0.8 s | 0.8 to 1.8 s | > 1.8 s
FCP (diagnostic only) | First contentful paint of any DOM content | ≤ 1.8 s | 1.8 to 3.0 s | > 3.0 s

Audit step for each metric. LCP: pull the LCP sub-parts from PageSpeed Insights. If TTFB is over 800 ms, the fix is server or CDN, not front-end. If element render delay dominates, the fix is image preloading or critical CSS. INP: open the page on a mid-range Android phone, interact with every clickable element, and watch the Performance panel for long tasks above 50 ms. The slowest interaction is the one that scores. CLS: scroll the page on a slow connection. If the layout shifts during font swap or above-the-fold image load, the fix is reserved space (aspect-ratio CSS) or font-display: swap with a metric-matched fallback.
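For the manual INP and CLS pass, a console snippet like the sketch below, pasted into DevTools on the page under test, logs long tasks and unexpected layout shifts as you interact and scroll. It uses the standard PerformanceObserver API, nothing more.

```js
// Paste into the DevTools console, then interact with and scroll the page.
// Long tasks over 50 ms are the usual source of slow INP interactions.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)} ms at ${Math.round(entry.startTime)} ms`);
  }
}).observe({ type: 'longtask', buffered: true });

// Layout shifts not triggered by recent user input are what CLS counts.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      console.log(`Layout shift: ${entry.value.toFixed(4)}`, entry.sources?.map((s) => s.node));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```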

Threshold card for the three Core Web Vitals: LCP 2.5 seconds, INP 200 milliseconds, CLS 0.1 with green, yellow, red bands clearly labeled
The 2026 thresholds at the 75th percentile. The numbers have not changed since the INP swap; only which metric occupies the interactivity slot did.

One audit anti-pattern to drop. Do not optimize the Performance score in Lighthouse as a proxy for CWV. The Lighthouse Performance score is a weighted composite of lab metrics that includes Speed Index and Total Blocking Time, neither of which is a Core Web Vital. A site can score 95 on Lighthouse Performance and still fail CWV on field data because Lighthouse simulates one device profile and CWV measures every real visitor.

How Google actually uses CWV in ranking

Read Google's own page-experience documentation. The summary line is blunt:

"Google Search always seeks to show the most relevant content, even if the page experience is sub-par." Google Search Central, Page Experience documentation

That sentence is hidden in the FAQ under "How important is page experience to ranking success?" It is the most important thing Google has written publicly about CWV ranking weight, and most SEO content ignores it. Relevance and authority dominate. Page experience is one of many signals that nudges the ranking when relevance is roughly tied.

The same document hedges its hedge. Google does confirm that "Core Web Vitals are used by our ranking systems," and then immediately clarifies the structure: "There is no single signal. Our core ranking systems look at a variety of signals that align with overall page experience." (Both quotes from the same page-experience documentation.)

How to read all three sentences together. CWV is in the system. It is not weighted heavily. There is no single page-experience score that flips your ranking; it is a cluster of signals that helps Google break ties when other signals are roughly equal. The honest framing for your stakeholders: improving CWV on a page with weak content and weak backlinks will not rescue it. Improving CWV on a page with strong content and strong backlinks that sits at position 4 or 5 can plausibly contribute to a position 2 or 3 outcome over a few months. The math runs through relevance first.

For the broader ranking-signal picture, our piece on ranking signal confidence and audit walks the rest of the system. CWV is one panel of a wider dashboard. Treat it that way.

What AI search engines actually look at

Different protocol entirely. Google AI Mode, ChatGPT browse, Perplexity, and Claude's web tool are retrieval-and-summarization systems. They fetch your page, parse it for relevant content, and quote or paraphrase it back to the user. Page speed in the Core Web Vitals sense does not appear in their selection criteria; their retrieval is server-side and their fetch budget is forgiving compared to a real user.

What they do care about: server responsiveness (a 30-second TTFB will time out the crawler), HTTPS, content that renders without JavaScript (or that the fetcher can execute), structured data they can parse, and clear semantic HTML. Those overlap with CWV at the TTFB edge, but the rest is a different audit. Optimizing INP for ChatGPT browse is wasted work; the agent does not interact with your page.

Practical implication for the 2026 audit. Keep a single TTFB target (under 800 ms) that serves both audits. Decouple INP and CLS work from AI-search work; they live on different priority lists. If your traffic mix is shifting toward AI-referred sessions, the engineering hour is better spent on content-renderability and structured data than on shaving 50 ms off an interaction.
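One quick content-renderability check is to fetch the raw HTML the way a server-side fetcher would and confirm the key content is present before any JavaScript runs. A minimal sketch, run as an ES module on Node 18+ (the URL and the phrase being checked are placeholders):

```js
// Fetch the page the way a server-side AI fetcher would: no JS execution.
const url = 'https://example.com/pricing';   // placeholder URL
const mustContain = 'Pricing plans';          // placeholder key phrase

const response = await fetch(url, { redirect: 'follow' });
const html = await response.text();

console.log('Status:', response.status);
console.log('Key content in raw HTML:', html.includes(mustContain));
// If this prints false but the phrase shows in a browser, the content is
// rendered client-side and may be invisible to fetchers that skip JS.
```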

Tools to measure Core Web Vitals in 2026

The toolset has consolidated since 2022. Six tools cover the lab-plus-field workflow most operators need.

Tool | Data type | What it shows | When to use it
PageSpeed Insights | Lab plus field | Lighthouse lab scan plus CrUX field data for the URL and origin | Per-URL audit, weekly spot-checks, stakeholder reports
Lighthouse (Chrome DevTools or CLI) | Lab | Simulated metric values plus diagnostic opportunities | Pre-deploy regression testing in CI
CrUX Dashboard (BigQuery, Looker Studio) | Field | Origin-level monthly CWV distribution by device and connection | Quarterly trend reports, executive dashboards
Web Vitals JS library (v4) | Field (your own RUM) | Per-session real-user metrics from your own visitors | Continuous monitoring, release attribution
Search Console CWV report | Field | CrUX data bucketed by URL group, with status changes flagged | Monthly check, regression triage
SEOJuice Lighthouse Score Checker | Lab | Real Lighthouse scan with shareable reports, trend history, recommendations ranked by impact | Client-friendly reports, repeatable audits, team handoffs

Two-pass workflow. Start with PageSpeed Insights for a single URL to get lab plus field side by side. The lab number tells you what is technically achievable on a clean device profile; the field number tells you what your real users experience. When they diverge, the field number is the one that matters for ranking, and the gap is the diagnosis. Lab green plus field red means your user base is on slower hardware or networks than your dev machines.
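The PageSpeed Insights API exposes the same lab-plus-field pairing programmatically. The sketch below pulls both for one URL; the response field names follow the v5 API as I understand it, so verify them against the live response, and add an API key for anything beyond light use.

```js
// Pull lab (Lighthouse) and field (CrUX) data for one URL from the PSI v5 API.
const url = 'https://example.com/';
const endpoint = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
endpoint.searchParams.set('url', url);
endpoint.searchParams.set('strategy', 'mobile');

const data = await (await fetch(endpoint)).json();

// Field: CrUX 75th-percentile values for this URL (or the origin as fallback).
const field = data.loadingExperience?.metrics ?? {};
console.log('Field LCP p75 (ms):', field.LARGEST_CONTENTFUL_PAINT_MS?.percentile);
console.log('Field INP p75 (ms):', field.INTERACTION_TO_NEXT_PAINT?.percentile);

// Lab: the simulated Lighthouse run for comparison.
const lab = data.lighthouseResult?.audits ?? {};
console.log('Lab LCP (ms):', lab['largest-contentful-paint']?.numericValue);
```

When the field number is red and the lab number is green, that gap is the diagnosis the paragraph above describes.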

For repeatable client reports and trend tracking, the SEOJuice Lighthouse Score Checker runs a real page-level scan and gives you a shareable URL plus historical trend data; it is what we use internally to track client progress without manually rerunning Lighthouse on every audit cycle. The broader Lighthouse score interpretation question (what counts as a passing score, how the categories compose) is covered in our Lighthouse SEO score breakdown.

If you want to see how CWV metrics actually correlate with search traffic across a real population of pages, our CWV Impact calculator shows the aggregated correlations from over 164,000 audited pages. The headline finding is consistent with Google's own framing: the correlations are real but moderate, and they vary by metric. CLS shows the weakest correlation; LCP and INP are stronger but still below what most CWV marketing claims.

Common fixes ranked by payoff

Engineering hours are finite. Rank fixes by the size of the impact on your worst metric, not by what is easiest to ship. Below is the priority list I use after seven years of CWV audits.

Tier 1, server response time. If TTFB is over 800 ms, every front-end fix you ship is being measured against a delayed start line. Put a CDN in front of your origin. Cache the HTML response where possible. Move database queries off the critical render path. A 400 ms TTFB improvement frequently moves LCP by 600 ms because the downstream resource loads pull forward in lockstep.
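Before buying a CDN, confirm where TTFB actually sits for real visitors. The Navigation Timing API gives the number directly; a minimal sketch for the browser console or your RUM snippet:

```js
// responseStart minus navigation start is the TTFB the visitor saw,
// including redirects, DNS, TLS, and server think time.
const nav = performance.getEntriesByType('navigation')[0];
if (nav) {
  const ttfb = nav.responseStart - nav.startTime;
  console.log(`TTFB: ${Math.round(ttfb)} ms`, ttfb > 800 ? '(over the 800 ms target)' : '(within target)');
}
```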

Tier 1, image strategy. The LCP element is usually an image. Preload it with a high-priority hint. Serve responsive sizes via srcset. Use AVIF or WebP with JPEG fallback. Lazy-load every other image with the native loading="lazy" attribute. Do not lazy-load the LCP image itself; that is the most common own-goal in CWV audits.
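A quick way to catch the lazy-loaded-LCP own-goal during an audit is to ask the browser which element was the LCP candidate and inspect its loading attributes; a console sketch using the standard LCP performance entry:

```js
// Identify the LCP element and flag the common own-goal: lazy-loading it.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1]; // the final candidate is the LCP element
  const el = last.element;
  console.log('LCP element:', el, `at ${Math.round(last.startTime)} ms`);
  if (el?.tagName === 'IMG') {
    if (el.loading === 'lazy') console.warn('LCP image is lazy-loaded: remove loading="lazy".');
    if (el.getAttribute('fetchpriority') !== 'high') {
      console.info('Consider fetchpriority="high" on the LCP image.');
    }
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```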

Tier 2, JavaScript hygiene. Defer non-critical scripts. Split your bundles. Audit third-party tags; most sites have 4 to 6 tag-manager-loaded scripts that have not earned their main-thread time in years. INP regressions almost always trace back to a script that schedules a long task during user interaction. Code-split heavy interactive components, especially anything search, filter, or carousel.
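Code-splitting a heavy interactive component usually means loading it only when the user reaches for it. A sketch of the pattern with a hypothetical carousel module (`./carousel.js` and `initCarousel` are placeholders, not a real library):

```js
// Load the heavy carousel bundle only when the user asks for it,
// so its parse and execution cost never lands inside another interaction.
const trigger = document.querySelector('[data-carousel-trigger]');

trigger?.addEventListener('click', async () => {
  const { initCarousel } = await import('./carousel.js'); // hypothetical module
  initCarousel(trigger.closest('[data-carousel]'));
}, { once: true });
```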

Tier 2, font loading. Use font-display: swap with a metric-matched system fallback so the swap does not cause a layout shift. Preload the primary font file. If you load three web fonts, drop two.

Tier 3, cleanup-pass items. Set explicit width and height on every image and embed. Reserve space for ads with min-height. Move CLS-prone components (notifications, banners, cookie banners) below the fold or render them with translate transforms that do not affect layout.
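The tier-three pass can start from a console sketch that lists every image with no reserved space, i.e. no width/height attributes and no CSS aspect-ratio:

```js
// List images that can shift layout because the browser cannot reserve space for them.
const shifty = [...document.querySelectorAll('img')].filter((img) => {
  const hasAttrs = img.hasAttribute('width') && img.hasAttribute('height');
  const hasRatio = getComputedStyle(img).aspectRatio !== 'auto';
  return !hasAttrs && !hasRatio;
});
console.table(shifty.map((img) => ({ src: img.currentSrc || img.src })));
```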

Three-tier priority list for Core Web Vitals fixes: tier 1 server TTFB and image strategy, tier 2 JavaScript hygiene and font loading, tier 3 explicit dimensions and reserved space
Rank fixes by payoff, not by ease. Tier 1 is where 70 to 80 percent of the wins live, but most sites I audit have been shipping tier 2 and tier 3 work for months without touching the server.

What not to do. Do not chase a Lighthouse Performance score of 100. Do not block a release on a CLS regression of 0.01. Do not let a CWV stakeholder ask for a third audit before the second round of fixes has shipped. The metric is noisy and the report has a lag.

What AI Overviews get wrong about CWV

Three patterns recur in AI Overview answers when you query "core web vitals 2026" or similar.

The first is the FID ghost. AI Overviews frequently still list FID as a Core Web Vital. Training data predates March 2024, and the deprecation announcement is not weighted heavily in the index. Fact-check against web.dev or Google Search Central, not the AI summary.

The second is ranking-weight inflation. Most AI summaries reduce Google's hedged language to "Core Web Vitals are a key Google ranking factor." The phrase "key ranking factor" appears in marketing posts that the model has memorized; the actual Google docs say relevance beats page experience. Compression flattens the nuance.

The third is the AI-search self-loop. AI Overviews will recommend optimizing for AI search engines via CWV. As established above, AI search engines do not measure your INP. Page speed in the CWV sense is irrelevant to retrieval. The training set conflates "fast site equals good SEO" without distinguishing the search surface.

Net effect for operators. Treat AI Overview answers on CWV as a starting point, not a source of truth. Verify against first-party Google documentation and web.dev before you act.

The quarterly audit cadence

Four checkpoints a year, ninety minutes each. That is the cadence most mid-market sites need. Anything more frequent is overhead; anything less and regressions land before the next audit catches them.

Pull the Search Console Core Web Vitals report. Note the URL groups that flipped status since last quarter. For each flipped group, run PageSpeed Insights on a representative URL and capture the sub-part diagnostics.

Run a lab scan on your top ten landing pages by traffic. Compare against last quarter. If a page regressed by more than 200 ms on LCP or 50 ms on INP, flag it for engineering review.
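The lab scan can be scripted with the Lighthouse Node API so the quarter-over-quarter comparison becomes a diff of JSON files rather than manual reruns. A sketch using the lighthouse and chrome-launcher packages (the URL list is a placeholder; adjust flags to your CI environment):

```js
import fs from 'node:fs';
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Placeholder list: your top ten landing pages by traffic.
const urls = ['https://example.com/', 'https://example.com/pricing'];

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const results = [];

for (const url of urls) {
  const run = await lighthouse(url, { port: chrome.port, onlyCategories: ['performance'], output: 'json' });
  const audits = run.lhr.audits;
  results.push({
    url,
    lcpMs: Math.round(audits['largest-contentful-paint'].numericValue),
    cls: audits['cumulative-layout-shift'].numericValue,
    tbtMs: Math.round(audits['total-blocking-time'].numericValue), // lab proxy for INP risk
  });
}

await chrome.kill();
fs.writeFileSync('lab-scan.json', JSON.stringify(results, null, 2));
console.table(results);
```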

Sample the field data from your own RUM (Web Vitals library v4). The CrUX data Google uses is a trailing 28-day aggregate; your own data is real-time. Compare distribution shape, not just averages.
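That sampling step assumes a web-vitals v4 snippet is already shipping on the site. A minimal sketch, with a hypothetical /vitals endpoint as the collection target:

```js
import { onLCP, onINP, onCLS } from 'web-vitals';

// Send each final metric value to your own collection endpoint
// (hypothetical /vitals route) so quarterly audits can compare
// full distributions, not just averages.
function report(metric) {
  const body = JSON.stringify({
    name: metric.name,     // 'LCP' | 'INP' | 'CLS'
    value: metric.value,
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,         // unique per page load, for deduplication
  });
  navigator.sendBeacon('/vitals', body) || fetch('/vitals', { body, method: 'POST', keepalive: true });
}

onLCP(report);
onINP(report);
onCLS(report);
```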

Triage the fix list against the priority tiers above. Ship one tier-one item per quarter, even when it is uncomfortable. Tier-three cleanup rides along with normal release cycles. For sites that already run a quarterly SEO audit, our site audit product automates the lab-scan side of this checkpoint so the manual time goes to interpretation.

Quarterly Core Web Vitals audit cycle: pull Search Console report, lab scan top ten URLs, sample own RUM, triage fix list against priority tiers
The audit cadence I run for clients. Four checkpoints, ninety minutes each, one tier-one fix shipped per quarter. Most teams that try to ship more burn out by month four.

What this audit does NOT solve

Core Web Vitals are a UX-and-light-ranking signal. They are not a content quality signal, not a backlink signal, not a brand-trust signal. If your page is sitting on page two of Google for a competitive query, fixing your CLS will not move you to page one. The relevance and authority work has to happen first.

CWV also does not solve for conversion in the way some operators expect. A 200 ms faster LCP is correlated with better conversion, but the elasticity varies wildly by site type. Ecommerce checkout flows respond strongly; long-form content pages respond weakly. Measure your own conversion lift before you build the engineering case.

And CWV does not solve for the AI-search audit. Different protocol, different fetcher, different priorities. If your traffic mix is shifting toward AI-referred sessions, the page-experience audit is the wrong tool for that question.

FAQ

Is FID still a Core Web Vital? No. FID was deprecated and removed from the Core Web Vitals program on March 12, 2024. Interaction to Next Paint (INP) took its slot. Search Console removed FID from the CWV report the same day. If your audit template still pulls FID, update it.

What is the INP threshold? 200 milliseconds or less is Good. 200 to 500 ms is Needs Improvement. Over 500 ms is Poor. The threshold applies at the 75th percentile of real-user interactions for mobile and desktop separately.

How much does Core Web Vitals affect Google ranking? Less than most SEO content claims. Google's page-experience documentation says directly that Search "always seeks to show the most relevant content, even if the page experience is sub-par." CWV is a real signal but it is one of many, and relevance and authority dominate. Treat it as a tiebreaker between roughly-equivalent pages, not a primary ranking lever.

Do AI search engines like ChatGPT or Google AI Mode use Core Web Vitals? No, not in the same way. Their fetchers retrieve pages server-side and parse content for summarization. Page speed in the CWV sense is irrelevant to retrieval. Server availability (TTFB), content renderability without JS, and structured data are the priorities for AI search; INP and CLS are not.

What is the most common Core Web Vitals audit mistake? Optimizing the Lighthouse Performance score as a proxy for CWV. Lighthouse Performance is a weighted lab composite that includes metrics outside CWV (Speed Index, Total Blocking Time). A page can score 95 on Lighthouse and still fail CWV on field data because Lighthouse simulates one device profile while CWV measures every real visitor.

How often should I audit Core Web Vitals? Quarterly is enough for most mid-market sites. Anything more frequent is overhead; CrUX aggregates over a trailing 28-day window and your fix cycle is rarely faster than that anyway. Use continuous RUM monitoring (Web Vitals library v4) for real-time alerting between audits.