Page-based public reports

Lighthouse Score Checker

Run a real page-level Lighthouse scan, keep every completed run, and turn the raw audit into plain-English priorities your team can actually work from.

Real Google Lighthouse audit · No signup required · Public report URL you can share · Score history per URL

What this Lighthouse score actually measures

Lighthouse is the open-source audit engine that powers Chrome DevTools, PageSpeed Insights, and the Core Web Vitals scoring Google publishes for every URL it crawls. When you run a page through Lighthouse, you're running the same audit set Google uses to decide whether your page meets the performance thresholds it weighs into search ranking. The categories — Performance, Accessibility, Best Practices, SEO — each combine dozens of individual audits into a single 0–100 score.

The Performance score is the most consequential. It's a weighted blend of Largest Contentful Paint (when the biggest visible element finishes painting), Cumulative Layout Shift (how much content jumps around as the page loads), First Contentful Paint (when the first text or image appears), Total Blocking Time (how long the main thread is locked up), and Speed Index (visual completeness over time). Interaction to Next Paint (how responsive the page is when a user clicks or types) only exists as field data from real users, so lab runs use TBT as its proxy. Three metrics — LCP, CLS, INP — are the official Core Web Vitals that ride along with every URL in Google's index.

We run the audit in lab mode on a simulated mobile device with throttled CPU and network. That matches the conditions Google's PageSpeed Insights uses for its lab data and gives you a reproducible baseline you can compare across runs. The numbers will not exactly match the field metrics from real users (Chrome's CrUX dataset), but they tell you whether the page as built is fundamentally fast — which is the part you can actually fix.

How it works

We run Google Lighthouse on the page you submit and group the result into the four categories Google uses for performance grading.

Category         Weight   What we check
Performance      25%      LCP, CLS, FCP, TBT (the lab proxy for INP), and Speed Index.
Accessibility    25%      ARIA labels, color contrast, keyboard navigation, semantic HTML.
Best Practices   25%      HTTPS, console errors, deprecated APIs, image aspect ratios.
SEO              25%      Meta tags, mobile viewport, robots.txt, crawlable links.

Who uses this

A few patterns where the lab-style Lighthouse run earns its keep.

Frontend engineers

Confirm a perf fix actually moved the needle before merging. Use the share URL in the PR description so reviewers can see the same numbers.

SEO consultants

Run a baseline scan on a client's top landing pages, then quantify how much of the recommended-fix list actually got shipped.

Founders & PMs

Sanity-check the marketing site or the most-trafficked product page after a redesign — without spinning up the full Lighthouse CLI locally.

Reading your report

Six panels per scan. Here's what each one is for.

Performance score

The headline 0–100 number. Anything 90+ is "good"; 50–89 is "needs improvement"; under 50 is "poor".

Core Web Vitals

LCP, CLS, INP, FCP, TTFB, TBT — the per-metric values with field-style threshold colors. Where you go to read what's slow.

Top recommended fixes

Click-to-expand accordion ranked by leverage. Each fix shows what's broken, why, and the recommended action — copy the developer brief in one click.

Full audit evidence

Every Lighthouse audit, scoped to the audits we actually ran. Open it when you want to see why a particular metric scored what it scored.

Performance history

After two scans of the same URL, you get a trend chart. Use it to confirm a perf change actually held up over time.

Benchmark

Where the URL sits relative to other completed scans for the same site type. Helpful for "is 72 a good score?" conversations.

Methodology & scoring details

We run the full Lighthouse audit suite (the same one bundled with Chrome DevTools and PageSpeed Insights) on a headless Chromium instance with mobile emulation, simulated mid-tier device CPU throttling, and a 1.6 Mbps / 150 ms RTT slow 4G network. These are the canonical PageSpeed Insights "lab" defaults. We do not modify the scoring weights Google ships with Lighthouse.

Performance score weighting (current Lighthouse 10+): FCP 10%, Speed Index 10%, LCP 25%, TBT 30%, CLS 25%. Each raw metric is first mapped to a 0–100 score along a log-normal curve calibrated against real-site data; the five scores are then combined as a weighted average. Because the curves fall off steeply, a single very-bad metric drags the headline score harder than its weight alone would suggest.
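The combination step above can be sketched in a few lines. This is an illustration of the weighted-average math only — the example per-metric scores are hypothetical, and in real Lighthouse each would come out of a log-normal scoring curve first:

```python
# Sketch of Lighthouse's performance-score combination step.
# Weights match the Lighthouse 10+ values quoted above; the per-metric
# 0-100 scores in the example are made up for illustration.
WEIGHTS = {
    "FCP": 0.10,
    "Speed Index": 0.10,
    "LCP": 0.25,
    "TBT": 0.30,
    "CLS": 0.25,
}

def performance_score(metric_scores: dict[str, float]) -> int:
    """Weighted average of per-metric scores, rounded like the headline number."""
    return round(sum(WEIGHTS[m] * metric_scores[m] for m in WEIGHTS))

# A page that aces everything except main-thread blocking:
example = {"FCP": 95, "Speed Index": 90, "LCP": 92, "TBT": 30, "CLS": 100}
print(performance_score(example))  # TBT's 30% weight drags the headline score
```

Note how a single failing metric (TBT at 30) pulls an otherwise-green page out of the 90+ band.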

Lab metrics differ from field metrics. Field data (CrUX, real-user monitoring) measures what your real users see; lab data measures what a controlled simulated device sees. They tend to correlate but won't match. Use lab numbers as the "is the page itself slow?" answer; use field numbers as the "are users actually experiencing slowness?" answer. Both matter.

One audit per URL per minute (rate-limited). The browser instance is destroyed after each run so there's no contamination between scans.

What good looks like

Lighthouse-defined thresholds for the headline performance score.

0–49 · Poor

Page is fundamentally slow

Heavy JS, unoptimized images, render-blocking CSS, slow server response. LCP usually well over 4 seconds. Real users on average mobile networks are bouncing.

50–89 · Needs improvement

Workable, with known wins available

Some Core Web Vitals fail their thresholds. Common causes: unused CSS / JavaScript, oversized images, third-party scripts on the critical path. Most fixes here are mechanical, not architectural.

90–100 · Good

Fast across all Core Web Vitals

LCP under 2.5s, CLS under 0.1, INP under 200ms. The page itself is not what's holding back rankings. Holding 90+ requires perf budget enforcement in CI — it's easy to slip back as features ship.
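The band boundaries above follow the published Core Web Vitals thresholds, which can be expressed as a tiny classifier. A sketch (threshold pairs taken from the glossary values on this page):

```python
# Rate a Core Web Vitals value against its published thresholds.
# Each entry is (good_at_or_below, poor_above); units are ms except CLS.
THRESHOLDS = {
    "LCP": (2500, 4000),   # ms
    "INP": (200, 500),     # ms
    "CLS": (0.1, 0.25),    # unitless layout-shift score
}

def rate(metric: str, value: float) -> str:
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 2300))  # good
print(rate("CLS", 0.18))  # needs improvement
print(rate("INP", 650))   # poor
```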

Frequently asked questions

Does this check a full domain or a single page?

A single page. Lighthouse is page-based, so the public report is tied to the exact normalized URL you submit.

Can I rerun the same page more than once a day?

Yes. Same-day rescans are saved as separate runs so you can compare releases, fixes, and regressions.

What gets saved in the public report?

The latest Lighthouse category scores, Core Web Vitals snapshot, resource sizes, benchmark comparison, and readable action items.

Why is the main headline score Performance instead of a composite?

Lighthouse does not publish one authoritative cross-category composite, so the report keeps Performance as the lead score and shows the other three categories separately.

How is this different from PageSpeed Insights?

PageSpeed Insights runs the same Lighthouse engine and shows both lab data (from a single run) and field data (from real-user CrUX measurements). This tool runs the lab side and persists the result at a public URL with shareable history. For field data, use PSI directly — we'd rather not duplicate Google's strongest signal.

Why does my score change between runs?

Lighthouse scores have inherent variance — typically ±5 points between consecutive runs on the same URL. Causes include CPU contention on the host, network jitter, third-party script timing, and run-to-run differences in browser cache. Don't react to a single 5-point swing; trend over 3+ runs is what's meaningful.
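One simple way to act on that advice is to compare the median of several runs rather than any single score. This is a sketch of the idea, not part of the tool; the run scores are hypothetical:

```python
from statistics import median

def stable_score(runs: list[int]) -> float:
    """Median of the most recent runs; robust to ±5-point run-to-run jitter."""
    if len(runs) < 3:
        raise ValueError("need at least 3 runs before trusting a trend")
    return median(runs[-5:])  # only the last few runs matter for the trend

before = [71, 76, 73, 70, 74]   # hypothetical scores before a perf fix
after  = [78, 83, 80, 81, 79]   # and after the fix shipped

print(stable_score(after) - stable_score(before))  # the shift that matters
```

A 7-point shift in the median is a real improvement; a 7-point swing between two individual runs might be noise.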

Is this mobile or desktop?

Mobile, with the canonical Lighthouse mobile emulation profile (Moto G4-class device, slow-4G network). Mobile is the right default — Google switched to mobile-first indexing in 2019 and the bulk of search traffic is mobile. If your audience is desktop-only, treat the score as conservative.

Does Google use the Lighthouse score for ranking?

Not the score itself. Google uses Core Web Vitals (LCP, CLS, INP) measured from real users (the field data in CrUX) as a ranking input under the page-experience update. The lab-style Lighthouse score is a strong correlate but isn't directly fed into ranking — it's a fast feedback loop, not the ranking signal.

Can I rerun a scan on the same URL?

Yes — there's a Re-run scan button on the report page. Each run is appended to the history so you can see the trend. Rate limit is 5 reruns per IP per day on free; signed-in accounts get 50.

My third-party scripts are tanking my score. What now?

First, the report's Top recommended fixes accordion will name the offenders. The standard playbook: defer everything that doesn't need to run before LCP; load analytics with async; route ad/marketing pixels through a single tag manager you can budget; lazy-mount any chat or consent widgets after first interaction. If a vendor refuses to support async loading, that's a vendor problem worth escalating.

Can I get my report removed?

Yes. Email vadim@seojuice.io with the report URL and we'll delete the public report within one business day.

Glossary
LCP — Largest Contentful Paint
Time when the largest visible element (usually the hero image or H1) finishes painting. Good ≤ 2.5s.
CLS — Cumulative Layout Shift
Sum of unexpected layout shifts from page load through user interaction. Good ≤ 0.1.
INP — Interaction to Next Paint
The page's responsiveness to user input, measured at the 75th percentile. Good ≤ 200ms. Replaced FID in 2024.
FCP — First Contentful Paint
Time when the browser renders the first DOM content. Good ≤ 1.8s.
TBT — Total Blocking Time
Total time the main thread was blocked between FCP and Time-to-Interactive. Good ≤ 200ms.
TTFB — Time to First Byte
Time from request start to the first byte of the response from your server / CDN. Good ≤ 800ms.
Speed Index
Average time at which visible parts of the page are displayed. Good ≤ 3.4s.
Lab vs Field
Lab data is measured in a controlled simulated environment; field data is measured from real users (CrUX, RUM).