TL;DR: AI site builders generate clean code but skip SEO basics. Map your old URLs, set up 301 redirects, preserve meta tags, and verify indexing before and after the switch. Use a phased rollout to limit damage.
AI-generated websites are the new WordPress themes. Fast to set up, easy to break SEO.
I have watched five AI-powered site migrations in the past year. Two went smoothly. Three bled organic traffic -- one lost 47% of its clicks in under a month. The pattern was consistent: the AI output looked great, the code was clean, and nobody checked whether Google could still find, crawl, and rank the pages that were generating revenue.
The technology is not the problem. Tools like Lovable, v0, and Bolt produce functional, well-structured code. The problem is the handoff. AI site builders do not carry over your canonical tags, redirect maps, internal link structure, or EEAT signals. Those invisible elements are what keep your pages ranking. Drop them and you are essentially launching a brand-new site with none of the authority you built.
This playbook is the framework I use for AI-powered migrations. When we follow it, traffic dips stay below 5%. When teams skip steps, the industry average dip hovers around 40%.
Before a single AI-generated paragraph hits production, take a forensic snapshot of the site you are about to change. This is not optional -- it is the control group that lets you prove any traffic lift or catch a nosedive early enough to roll back.
| Data Source | File to Export | Key Columns |
|---|---|---|
| GSC | query-performance.csv | URL, query, position, clicks, impressions |
| Ahrefs/Semrush | backlinks_export.csv | URL, referring domains, DR/DA, traffic value |
| Crawler | crawl_all_urls.csv | URL, status, canonical, inlinks, title, meta |
| PageSpeed API | core_web_vitals.csv | URL, LCP, INP, CLS, device |
Store in a dated folder. This is your "before" picture.
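A minimal sketch of pulling those exports into one master sheet (stdlib Python; the file names follow the table above, and the shared `URL` column name is an assumption -- rename it to match your actual export headers):

```python
import csv
import pathlib

def build_master(folder):
    """Merge the four baseline exports into one dict keyed by URL.

    Assumes every export has a "URL" column; later files add or
    overwrite columns for the same URL.
    """
    master = {}
    for name in ("query-performance.csv", "backlinks_export.csv",
                 "crawl_all_urls.csv", "core_web_vitals.csv"):
        path = pathlib.Path(folder) / name
        if not path.exists():
            continue  # skip exports you have not pulled yet
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                master.setdefault(row["URL"], {}).update(row)
    return master
```

One dict per URL makes the tiering and triage steps below a simple lookup instead of a four-way VLOOKUP.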
Traffic tiers: Tier-1 (the handful of URLs driving the bulk of clicks and revenue), Tier-2 (solid mid-traffic pages), Tier-3 (the long tail with minimal clicks).
Conversion roles: money page (drives revenue directly), assist (supports the funnel), none (no conversion path).
Add traffic_tier and conversion_role columns to your master sheet. A quick pivot table now shows which URLs you cannot afford to botch and which ones can be your low-risk AI testing ground.
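The tier assignment can be scripted. A sketch, assuming Tier-1 covers the top 80% of cumulative clicks and Tier-2 the next 15% -- those cutoffs are illustrative, so tune them to your own traffic curve:

```python
def assign_tiers(rows, t1_pct=80, t2_pct=95):
    """Tag each URL with a traffic tier by cumulative click share.

    rows: iterable of (url, clicks) pairs.
    Integer-percent thresholds keep the comparison exact (no float
    drift at the tier boundaries).
    """
    total = sum(clicks for _, clicks in rows) or 1
    tiers, cum = {}, 0
    for url, clicks in sorted(rows, key=lambda r: -r[1]):
        cum += clicks
        if 100 * cum <= t1_pct * total:
            tiers[url] = "Tier-1"
        elif 100 * cum <= t2_pct * total:
            tiers[url] = "Tier-2"
        else:
            tiers[url] = "Tier-3"
    return tiers
```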
(Side note: for one client, 73% of their organic revenue came from just 8 pages. Those 8 pages got zero AI treatment until Waves 1 and 2 proved the process worked on low-value pages. This restraint saved them from what would have been an expensive mistake.)
The worst AI-migration horror stories start with "We pushed 5,000 new pages and cannibalized our own rankings." Map every URL, its intent, and its overlap before the AI starts drafting.
| Signal | Tool / Method | Threshold |
|---|---|---|
| Near-duplicate paragraphs | Screaming Frog > Content > Similarity | Similarity 90%+ |
| Low word count | Sheet formula on word_count column | Under 300 words |
| Zombie pages | GSC + Ahrefs merge | 0 clicks, 0 links over 6 months |
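The zombie-page check boils down to an outer join on URL. A sketch, assuming you have already reduced the GSC and Ahrefs exports to per-URL click and referring-domain counts over the six-month window:

```python
def find_zombies(gsc_clicks, referring_domains):
    """Return URLs with zero clicks AND zero referring domains.

    gsc_clicks: {url: clicks over the last 6 months}
    referring_domains: {url: referring domain count}
    A URL missing from either export counts as zero there.
    """
    urls = set(gsc_clicks) | set(referring_domains)
    return sorted(
        u for u in urls
        if gsc_clicks.get(u, 0) == 0 and referring_domains.get(u, 0) == 0
    )
```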
| Decision | Criteria | Action |
|---|---|---|
| Retain as-is | Tier-1 traffic, unique backlinks, strong EEAT | Manual copy-edit only; no AI generation |
| Rewrite (Human-led) | Tier-1/2, EEAT-critical (finance, health) | Human draft with AI assist at most 20%, heavy fact check |
| Replace with AI | Tier-3 traffic, no backlinks, thin content | Full AI draft, 20% human overwrite, QA pass |
| Consolidate and Redirect | Duplicate intent, overlapping pages | Merge into single URL; 301 the weaker pages |
| Delete | Zombie pages, no links, no conversions | Remove; return 410 Gone; submit updated sitemap |
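The Consolidate and Delete rows translate directly into redirect rules. A sketch that emits nginx `location` lines from a decision list (the `consolidate`/`delete` action labels are hypothetical column values -- map them to whatever your master sheet uses):

```python
def redirect_rules(decisions):
    """Emit nginx location blocks for consolidated and deleted URLs.

    decisions: list of (url, action, target) tuples; retain/rewrite
    pages keep their URLs and need no rule.
    """
    rules = []
    for url, action, target in decisions:
        if action == "consolidate":
            rules.append(f"location = {url} {{ return 301 {target}; }}")
        elif action == "delete":
            rules.append(f"location = {url} {{ return 410; }}")
    return rules
```

Generating the rules from the same sheet that drove the triage decision keeps the redirect map and the content plan from drifting apart.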
Build your staging environment in a subfolder (e.g. /ai-preview/) that mirrors the live URL hierarchy. Never use a subdomain -- subdomains fracture authority. /pricing/ on prod should be /ai-preview/pricing/ on stage. When you flip the switch, you swap roots instead of rewriting links.

| Parameter | Recommended Setting | Why |
|---|---|---|
| Model | GPT-4o or Claude 3 Sonnet | Higher reasoning reduces factual slips |
| Temperature | 0.4-0.6 | Varied tone without hallucination spikes |
| Human Overwrite | At least 20% of visible text | Lifts AI-detector entropy and injects expertise |
| Fact-Check Pass | Inline citations to primary sources | Satisfies EEAT; reduces misinformation risk |
| EEAT Citations | 2+ expert quotes or stats per 1,000 words | Boosts trust signals for YMYL queries |
Workflow: generate, run through Grammarly, human overwrite, fact check, detector test -- all inside staging.
AI detector threshold: Run GPTZero or Sapling. Target under 35% "likely AI." Anything higher goes back for heavier human editing.
On-page checklist: H1 contains primary keyword (under 60 chars), meta title and description are unique and optimized, at least 8 contextual internal links with anchor diversity, schema markup validates in Rich Results test.
Only pages hitting all QA gates graduate from staging to live.
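Parts of that checklist are scriptable. A rough regex-based gate for the H1 and internal-link checks (a production pipeline should parse the DOM instead, and this assumes double-quoted, root-relative hrefs):

```python
import re

def qa_gate(html, primary_keyword, min_internal_links=8):
    """Return a list of QA failures; an empty list means the page passes.

    Checks a subset of the on-page gates: H1 contains the primary
    keyword and stays under 60 chars, and the page has enough
    root-relative internal links.
    """
    failures = []
    m = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.I | re.S)
    h1 = re.sub(r"<[^>]+>", "", m.group(1)).strip() if m else ""
    if primary_keyword.lower() not in h1.lower():
        failures.append("H1 missing primary keyword")
    if not h1 or len(h1) > 60:
        failures.append("H1 empty or over 60 chars")
    # href="/..." but not protocol-relative href="//..."
    internal = re.findall(r'href="/(?!/)', html)
    if len(internal) < min_internal_links:
        failures.append(f"only {len(internal)} internal links")
    return failures
```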
| Wave | Page Pool | Selection Logic | Goal | Window |
|---|---|---|---|---|
| Wave 1 | 10% low-value pages | Tier-3 URLs with under 1% of clicks, no backlink equity | Validate rendering, schema, AI-detector scores | 7 days |
| Wave 2 | 10% medium-value pages | Tier-2 informational posts, moderate traffic | Confirm ranking stability on higher-stakes URLs | 14 days |
| Wave 3 | 80% remaining pages | Money pages + remaining inventory | Full migration after Waves 1-2 show under 5% variance | 30-45 days |
Split-test where possible: keep original HTML in a query-parameter variant (?v=control) and direct 10% of traffic there via server-side A/B routing.
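Server-side routing needs a deterministic bucket so a returning visitor always sees the same variant. A sketch that hashes a visitor ID into a roughly 10% control group:

```python
import hashlib

def route_to_control(visitor_id, control_share=10):
    """Deterministically assign ~control_share% of visitors to the
    legacy (?v=control) variant. Hashing the ID means the same visitor
    always lands in the same bucket, with no session storage needed."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return int(digest, 16) % 100 < control_share
```

Call it in your request handler and serve the stored original HTML whenever it returns True.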
Real-time dashboards: track clicks, impressions, average position, and Core Web Vitals for every migrated URL against the dated baseline snapshot.
Automated alert thresholds: flag any migrated URL whose clicks drop more than 15% versus the same period last year, or whose conversions fall more than 10%.
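The alert logic pairs naturally with the rollback criteria (a sustained 15% drop over seven consecutive days). A sketch, assuming you compare daily clicks against the same period last year:

```python
def should_alert(clicks_now, clicks_yoy, days_breached,
                 drop_pct=15, min_days=7):
    """Fire only on a sustained year-over-year drop.

    clicks_now / clicks_yoy: clicks for the current window vs. the
    same window last year (so seasonality does not trigger alerts).
    days_breached: consecutive days the drop threshold has been hit.
    """
    if clicks_yoy == 0:
        return False  # no baseline, nothing to compare against
    drop = 100 * (clicks_yoy - clicks_now) / clicks_yoy
    return drop > drop_pct and days_breached >= min_days
```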
Rollback protocol: re-enable the legacy HTML via the control parameter, flip a 302 back to the original file, submit a URL inspection in GSC to force a re-crawl, then audit the root cause.
(Another aside: I had a client panic when their Wave 1 pages showed a 12% traffic dip after four days. We investigated -- turns out the dip was seasonal, matching the exact same pattern from the previous year. The AI pages were performing identically to the originals. The lesson: always compare against the same period last year, not just last week.)
Does Google penalize AI-generated content? Not automatically. Google penalizes low-quality or misleading content regardless of who wrote it. The risk is publishing bland, low-entropy AI text that fails EEAT checks. Solution: 20% human overwrite, citations, and AI probability scores under 35%.
Should you launch the AI version on a subdomain? Do not. Subdomains split link equity and force Google to re-learn trust signals. Migrate within subfolders and keep URL slugs identical.
What about hallucinated facts and citations? Force a fact-check pass: prompt the model to include URLs, then have a human verify each link. Any uncitable claim must be rewritten or removed.
When should you roll back? Roll back if the drop exceeds 15% for more than seven consecutive days or if conversions fall more than 10%. Re-enable legacy HTML, inspect logs, and fix before relaunching.
How often should you refresh your prompts? Monthly. Update brand voice, new stats, and detector-avoidance patterns. Stale prompts reintroduce repetition, lowering entropy and risking detection.