Detect whether a website was built with an AI builder like Lovable, v0, or Bolt — and what the next steps are.
"Vibe coding" is the now-standard nickname for using a generative AI tool — Lovable, v0, Bolt, Webstudio, Make.com, Replit Agent — to scaffold an entire website by prompt. The output looks right at first glance and ships fast, which is exactly why the pattern exploded in 2024–2025. The catch is what those tools leave behind: default class names that nobody renamed, meta tags that still say "Lovable Project", placeholder hero copy that nobody rewrote, missing canonicals, and structured data that's either absent or wrong. Those are the markers a trained eye spots in five seconds. The detector codifies that eye into a deterministic checklist.
We grade four independent signal categories. Builder fingerprints (40% weight) look for the explicit tells: a Lovable badge, a v0 generator meta, a Bolt deployment header, Webstudio's default site template. Code hygiene (25%) measures how much the scaffold has actually been customised: classes that still match the AI-builder default vocabulary, untouched component skeletons, a default Tailwind config. Content patterns (20%) catch the prose-level signals: heavy adverb usage, the "Concise. Direct. Powerful." colon-list structure, generic feature triplets, placeholder lorem-style copy. SEO basics (15%) check the things every human-shipped site fixes early but AI scaffolds rarely do: page-specific titles, a canonical, robots directives, OG tags, structured data.
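To make the content-pattern idea concrete, here is a minimal sketch of two of those checks in Python. The density threshold and the triplet regex are illustrative assumptions for the sketch, not the production rules.

```python
import re

# Illustrative threshold -- the production rule is tuned differently.
ADVERB_DENSITY_THRESHOLD = 0.04  # share of words ending in -ly

# Approximates the "Concise. Direct. Powerful." staccato triplet.
TRIPLET_RE = re.compile(r"\b[A-Z][a-z]+\.\s+[A-Z][a-z]+\.\s+[A-Z][a-z]+\.")

def adverb_inflation(text: str) -> bool:
    """Fires when -ly adverbs make up an unusually large share of the copy."""
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return False
    adverbs = [w for w in words if w.lower().endswith("ly") and len(w) > 4]
    return len(adverbs) / len(words) > ADVERB_DENSITY_THRESHOLD

def feature_triplet(text: str) -> bool:
    """Fires on the one-word-sentence triplet AI builders tend to emit."""
    return TRIPLET_RE.search(text) is not None
```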
The verdict is one of four words: YES (high-confidence AI build), MAYBE (some markers, not enough to be definitive), NO (clearly hand-built or thoroughly customised), and INCONCLUSIVE (couldn't gather enough signal — typically a 4xx, a JS-only render, or an aggressive bot-block). The verdict alone is the headline; the per-category breakdown and evidence rows are where you go to trust or contest it.
We score four signal categories, each weighted by how reliably it indicates an AI-builder origin; a minimal sketch of one fingerprint check follows the table.
| Category | Weight | What we check |
|---|---|---|
| Builder fingerprints | 40% | Lovable, v0, Bolt, Webstudio default markers, badges, and meta hints. |
| Code hygiene | 25% | Generic class names, unmodified template scaffolding, default Tailwind config. |
| Content patterns | 20% | Adverb inflation, colon-list pattern, generic feature copy, placeholder text. |
| SEO basics | 15% | Default titles, missing canonical, missing structured data, generic meta. |
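For a feel of how a fingerprint check runs, here is a minimal sketch. The marker patterns below are illustrative assumptions; the real rule set is larger and far more specific.

```python
import re

# Illustrative fingerprint patterns -- assumptions for this sketch,
# not the production rule set.
FINGERPRINTS: dict[str, list[re.Pattern[str]]] = {
    "Lovable":   [re.compile(r'<meta[^>]+content="[^"]*lovable[^"]*"', re.I)],
    "v0":        [re.compile(r'<meta[^>]+name="generator"[^>]+content="[^"]*v0[^"]*"', re.I)],
    "Webstudio": [re.compile(r"webstudio", re.I)],
}

def detect_builder(html: str) -> tuple[str, str] | None:
    """Return (builder, matched snippet) for the first fingerprint hit."""
    for builder, patterns in FINGERPRINTS.items():
        for pattern in patterns:
            match = pattern.search(html)
            if match:
                # The matched snippet is kept as evidence, mirroring the
                # report's evidence rows.
                return builder, match.group(0)[:120]
    return None
```

A hit here feeds the 40%-weight category and, when unambiguous, names the specific tool in the report's "detected builder" panel.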
Three scenarios where a vibe-coded verdict actually matters.
"You quoted me $30k for a custom build — let's see what you actually shipped." A high-confidence YES on the agency's own portfolio is a hard conversation worth having early.
A pre-seed pitch deck claims "in-house engineering team". The marketing site reads as YES, high confidence. That's a conversation, not a deal-breaker, but you want to have it.
A vibe-coded site is going to need real work before it can rank. Knowing that before the discovery call lets you scope the proposal honestly — and avoid promising results an AI scaffold can't deliver.
Each scan generates four panels. Here's what each one is for.
Verdict + confidence
YES / MAYBE / NO / INCONCLUSIVE plus a 0–100 confidence score. Confidence reflects how many independent signals agreed.
Per-category breakdown
Sub-scores for each of the four signal categories. A site can land at MAYBE because builder fingerprints are high but content patterns are clean; the breakdown tells you which categories drove the verdict.
Evidence rows
Each individual check that contributed to the verdict, with the actual snippet (HTML / meta / class name) it matched on. Open this when you want to verify a claim or contest it.
Detected builder (when found)
If a fingerprint is unambiguous (e.g., a Lovable site badge, a v0 generator meta), we name the specific tool. If we can only narrow it to "an AI builder", we say so honestly.
We fetch the homepage and a sample of internal pages, then run a battery of deterministic checks. Each check returns a binary signal (matched / didn't match) plus the evidence snippet it matched on. A category's score is the weighted percentage of its signals that matched, and the four category scores combine into the overall confidence using the weights shown in the table above.
Verdict mapping: YES (overall ≥ 70), MAYBE (40–69), NO (≤ 39 with at least 5 signals checked), INCONCLUSIVE (fewer than 5 signals could be evaluated, or the site returned a non-200 on the homepage).
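Reduced to code, the scoring and verdict mapping described above look roughly like this. The weights and thresholds come straight from this section; the Signal shape and the category keys are illustrative names, not the scanner's internals.

```python
from dataclasses import dataclass

# Category weights from the table above.
WEIGHTS = {"fingerprints": 0.40, "hygiene": 0.25, "content": 0.20, "seo": 0.15}

@dataclass
class Signal:
    category: str   # one of WEIGHTS' keys
    weight: float   # within-category weight of this check
    matched: bool   # binary outcome of the deterministic check

def category_score(signals: list[Signal], category: str) -> float:
    """Weighted percentage (0-100) of a category's signals that matched."""
    sigs = [s for s in signals if s.category == category]
    total = sum(s.weight for s in sigs)
    if total == 0:
        return 0.0
    return 100.0 * sum(s.weight for s in sigs if s.matched) / total

def verdict(signals: list[Signal], homepage_status: int) -> tuple[str, float]:
    """Map signals to (verdict, overall confidence 0-100)."""
    if homepage_status != 200 or len(signals) < 5:
        return "INCONCLUSIVE", 0.0
    overall = sum(w * category_score(signals, c) for c, w in WEIGHTS.items())
    if overall >= 70:
        return "YES", overall
    if overall >= 40:
        return "MAYBE", overall
    return "NO", overall
```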
We avoid signals that also fire on legitimate hand-built sites ("uses Tailwind", say), because half the modern web does. The signals we keep are ones we've found, in practice, to be extremely unlikely to appear on a site where a human took the time to clean up after the scaffold.
Builder coverage as of this scan: Lovable, v0, Bolt, Webstudio, Replit Agent, Cursor, GitHub Copilot Workspace, Make.com, Framer Sites, Wix Studio, Webflow templates, Notion sites, Carrd, plus 8 lesser-known scaffolders. New builders get added as we encounter distinguishable fingerprints.
Three reference outcomes for interpreting a verdict.
NO · ≤ 39
Hand-built or fully customised
No builder fingerprints, custom class vocabulary, page-specific copy on every URL, full SEO basics. The site reads as crafted by a human or by a team that took the AI scaffold seriously and finished the job.
MAYBE · 40–69
Mixed origins
Some pages are hand-touched, others are clearly scaffolded. Common in mid-stage startups that prototyped with AI and shipped fast. The breakdown tells you which axis still looks generated.
YES · 70+
Clearly AI-built, lightly touched
Multiple high-weight fingerprints, generic copy, default scaffold structure, missing SEO basics. Not necessarily bad — but if the site is supposed to convert traffic or rank, the work hasn't started yet.
How reliable is the verdict?
It's a heuristic. Confidence comes from how many independent signals agree. A high-confidence YES is reliable; a MAYBE means we found some markers but not enough to be sure.
Can I game the verdict without actually fixing the site?
Probably not. The scanner reads what's actually shipped, so the route to a different verdict is the real work: replace default copy, swap the palette, write real meta tags, remove builder badges, add structured data, and re-scan after your changes go live.
Can I have a public report removed?
Yes. Email vadim@seojuice.io with the domain and we'll delete the public report within one business day.
How is this different from the Agent-Ready audit?
Agent-Ready audits whether AI agents can discover and consume your site. Vibe-Coded asks the upstream question: did a human ship this, or did an AI builder generate most of it? Different layer, different question.
Is a YES verdict bad?
Not by itself. AI scaffolding is a perfectly fine starting point: it ships a basic site fast. What matters is whether the team finished the job: customised the design, wrote real copy, added SEO basics, removed default markers. A YES verdict on a site that's supposed to be a polished marketing surface is the actual signal.
Why did my scan come back INCONCLUSIVE?
Three usual causes: (1) the homepage returned a 4xx or 5xx, (2) the page is JS-only and our scanner couldn't get rendered HTML, (3) the page is bot-blocked behind a Cloudflare challenge. Try a different URL on the same domain, or contact us if you think the scanner is misjudging it.
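Here is a minimal sketch of how those three causes can be told apart before any signals run. The user-agent string and the heuristics are illustrative assumptions, not what the production scanner actually sends or checks.

```python
import requests

def fetch_for_scan(url: str) -> tuple[str | None, str]:
    """Fetch a page and classify why a scan would come back INCONCLUSIVE.

    Returns (html, reason); html is None when the page can't be scanned.
    """
    try:
        resp = requests.get(url, timeout=10,
                            headers={"User-Agent": "vibe-coded-scanner/0.1"})  # illustrative UA
    except requests.RequestException as exc:
        return None, f"fetch failed: {exc}"
    if resp.status_code != 200:
        return None, f"homepage returned {resp.status_code}"        # cause (1)
    html = resp.text
    lowered = html.lower()
    if "cloudflare" in lowered and "challenge" in lowered:
        return None, "bot-blocked behind a challenge page"          # cause (3)
    # A near-empty <body> usually means the page only renders via JS.
    body = html.split("<body", 1)[-1]
    if len(body) < 500:
        return None, "JS-only render; no server-side HTML to read"  # cause (2)
    return html, "ok"
```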
Can I dispute a verdict?
Yes. Open the report, click the evidence rows for the signals that fired, and look at the actual matched snippets. If you think a snippet is a false positive (e.g., a class name that legitimately exists outside any AI builder), email vadim@seojuice.io with the report URL; we tune the rules regularly based on dispute reports.
Does it work on non-English sites?
Builder fingerprints (40% of the score) are language-independent: they look at code, headers, and class names. Content-pattern signals (20%) are currently English-tuned, so non-English sites get partial credit there. We plan to add multilingual content patterns later this year.
How often should I re-scan after making fixes?
Once, after the changes are live in production. The scanner reads what's shipped; if your fix is in staging or hasn't been deployed, it won't show up. If you re-scan and the verdict didn't move, the report's evidence rows tell you which signals are still firing.