Updated April 2026
TL;DR: Google renders JavaScript fine now. The SPA indexing problems you still hit in 2026 aren't rendering — they're hydration cost, loader waterfalls, and meta tags that silently refuse to update on client-side route changes. Fix those three and your single page application SEO stops being a rebuild project.
seojuice.io is mid-migration. The blog, tools, and this article ship as static-first HTML on first byte. The dashboard (the thing you log into after signup) started as plain HTML and we're moving it to React right now. Two rendering strategies under one domain, on purpose. Google indexes what needs indexing. The dashboard doesn't need to rank for "page health scoring UI" and it never will.
I've built client SPAs through my dev agency (mindnow) for six years. React, Vue, a few Svelte builds, one Ember holdout that somehow still exists. The pattern across every audit: the developer blames the framework and the SEO person blames the developer. Neither of them is usually right.
Here's what changed. Googlebot ships with an evergreen Chromium renderer and has since 2019. The old "two-wave indexing" story you read on Moz in 2018 is dead (Martin Splitt keeps saying this in every talk and people keep not believing him). If your content is in the DOM after hydration, Google can see it.
What hasn't changed: rendering JavaScript costs crawl budget. Googlebot defers rendering of JS-heavy pages. On a 50-page marketing site that doesn't matter. On a 50,000-URL e-commerce SPA it absolutely does. The single page application SEO problem in 2026 is almost always a performance problem wearing a rendering costume.
Not every URL needs to rank. A trading dashboard. A real-time analytics view. A collaborative editor. A SaaS app behind a login wall. These are interactive surfaces, not search targets. The sign-up page needs to rank. The thing you sign into does not.
If your whole app is behind auth, stop reading. Make the marketing site server-rendered and ship the app as whatever SPA stack you like. That's the seojuice.io pattern and it's the pattern for roughly every SaaS I've audited.
The trouble starts when the product marketing page, the pricing page, and the blog all live inside the same React SPA because "we already have React set up." That's a tooling decision made for developer convenience, and it's the one worth revisiting.
Pick per route, not per site. This table is the short version of what I walk clients through on the first audit call.
| Strategy | Best for | Indexing speed | Build complexity |
|---|---|---|---|
| SSR (server-side render per request) | Personalized pages, frequently changing content, auth-aware marketing | Fast. HTML on first byte | Medium. Needs a Node/edge runtime |
| SSG (static site generation) | Blog posts, docs, landing pages, anything that doesn't change hourly | Fastest. Flat HTML | Low. Builds once, serves forever |
| ISR (incremental static regeneration) | Large content sites where a full rebuild takes ages | Fast. Stale-while-revalidate | Medium. Next.js / Astro native, bolt-on elsewhere |
| CSR (client-side render) | Dashboards, apps behind login, interactive tools | Slow, unreliable for crawl | Lowest. Default React/Vue output |
Most teams I talk to pick one strategy for the whole codebase and then fight it for two years. Don't do that.
Next.js App Router lets you set rendering strategy per route with a single export. The marketing pages are SSG. The pricing page is SSR (because it shows a currency based on the visitor's region). The dashboard is CSR. All in the same repo. The Next.js docs on server components lay this out clearly.
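Here's a sketch of what that per-route split looks like. The exports (revalidate, dynamic) are real App Router route segment config; the file paths and values are placeholders for illustration:

```tsx
// app/blog/[slug]/page.tsx: SSG/ISR, built ahead of time and revalidated in the background
export const revalidate = 3600; // seconds

// app/pricing/page.tsx: SSR, rendered on every request so the region-based currency is right
export const dynamic = 'force-dynamic';

// app/dashboard/page.tsx: CSR, starts with 'use client' and renders entirely in the browser
```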
Remix made this obvious earlier with its route-level loader model. Astro made it obvious for content sites. The framework discourse caught up to what agencies had been hacking together with Gatsby + a separate React app for years.
At seojuice.io we went a blunter route: the ranking pages stay as static-first HTML, the dashboard moves to React as we rewrite it. One domain. Two render paths. (Side note: I'd probably reach for Next.js on a greenfield build today, but migrating a working stack for the sake of tidiness is a bad use of a month.)
Pick the split that matches your team. A Rails shop can pair Rails views with a React island. A Python team can do exactly what we do. The point isn't the stack. It's that "whole site is a CSR SPA" stopped being the default answer around 2023 and most codebases haven't caught up.
The cleanest single page application SEO pattern I know: the server component fetches the data on the edge, ships pre-rendered HTML, and hydrates a small client island for the interactive bits.
```tsx
// app/blog/[slug]/page.tsx
import { getPost } from '@/lib/posts';
import PostBody from './post-body';
import CommentBox from './comment-box';

export async function generateMetadata({ params }) {
  const post = await getPost(params.slug);
  return { title: post.title, description: post.excerpt };
}

export default async function Post({ params }) {
  const post = await getPost(params.slug);
  return (
    <article>
      <h1>{post.title}</h1>
      <PostBody blocks={post.blocks} />
      <CommentBox postId={post.id} />
    </article>
  );
}
```
CommentBox is the only client component on the page. Everything else is HTML on first byte. Googlebot sees content before JS runs. Users see content before JS runs. That's the whole trick.
Remix's model is older and in some ways clearer; data fetching is explicit per route.
```tsx
// app/routes/blog.$slug.tsx
import { json } from '@remix-run/node';
import { useLoaderData } from '@remix-run/react';
import { getPost } from '~/models/post.server';
import PostBody from '~/components/post-body';

export const loader = async ({ params }) => {
  const post = await getPost(params.slug);
  if (!post) throw new Response('Not found', { status: 404 });
  return json({ post });
};

export const meta = ({ data }) => [
  { title: data.post.title },
  { name: 'description', content: data.post.excerpt },
];

export default function Post() {
  const { post } = useLoaderData<typeof loader>();
  return <PostBody blocks={post.blocks} />;
}
```
The loader runs on the server. The meta export handles title and description. No client-side head manipulation, no race conditions. The Remix data loading docs cover edge cases like nested layouts.
Three different products, three different jobs. I'll be honest about where each one sits because the marketing on this category is terrible.
Prerender.io is a bot-detection proxy. It takes your client-rendered page, runs it through a headless browser, caches the resulting HTML, and serves that HTML to search bots. Useful when you can't change the app. Limited when you can. Ongoing cost scales with cache size and refresh frequency.
Rendertron is Google's open-source version of the same pattern, now archived but still deployed widely. You self-host it. Cheaper in infrastructure, more work in operations. Most teams that pick this regret it by month six.
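Both tools implement the same pattern: inspect the user agent, and if it's a bot, serve cached pre-rendered HTML instead of the JS shell. Here's a rough sketch of that idea as Express middleware, not either product's actual integration (the endpoint URL and bot list are made up):

```ts
import express from 'express';

// Placeholder values for illustration; real integrations ship their own middleware.
const BOT_UA = /googlebot|bingbot|gptbot|claudebot|perplexitybot/i;
const RENDER_ENDPOINT = 'https://prerender.example.com/render?url=';

const app = express();

app.use(async (req, res, next) => {
  const ua = req.headers['user-agent'] ?? '';
  if (!BOT_UA.test(ua)) return next(); // humans get the normal client-rendered SPA

  // Bots get HTML that a headless browser rendered and cached earlier.
  const target = `https://yoursite.com${req.originalUrl}`;
  const upstream = await fetch(RENDER_ENDPOINT + encodeURIComponent(target));
  res.status(upstream.status).type('html').send(await upstream.text());
});

app.listen(3000);
```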
Worth saying plainly: if you're on Next.js with SSR or SSG, you don't need Prerender.io or Rendertron. I have clients running high-traffic blogs on Next.js without any prerender proxy and their pages index fine. The proxy category exists for apps that can't change their rendering strategy cheaply (an Angular SPA you inherited, a CRA build with six years of hydration logic nobody wants to touch). If your framework already ships HTML on first byte, skip this layer.
SEOJuice's JS snippet (us) is a different layer. It doesn't render your SPA. Your framework or your prerender proxy does that. What our snippet does is maintain on-page SEO elements on pages that have already been rendered: internal links, missing meta tags, alt text, schema. Think of it as a continuous-optimization layer on top of whatever rendering strategy you picked.
Three layers stack cleanly: your framework renders the HTML, a prerender proxy handles edge cases if needed, SEOJuice keeps the on-page details maintained. You don't pick one — you pick the ones you need. (I should be upfront: obviously we'd like you to use ours. It also genuinely doesn't replace a renderer.)
If this is your problem, run our AI crawler inspector to see exactly what Googlebot, GPTBot, and PerplexityBot are fetching from your SPA right now.
This is where 2026 gets interesting and where most SPA SEO writeups still haven't caught up.
Googlebot renders JavaScript. GPTBot, the crawler behind ChatGPT, does not execute JavaScript at all. It reads the raw HTML response and walks away. If your page is a shell that hydrates into content, GPTBot sees a shell. You will not be cited.
PerplexityBot (user agent PerplexityBot/1.0) is similar: it crawls raw HTML and uses that for real-time answers. Google AI Overviews pull from Google's standard index, so if Googlebot can render your page, AI Overviews can use it. But AI Overviews also seem to prefer pages with clear entity markup and direct-answer paragraphs in the first 150 words. (Side note: I hit this once and assumed it was a cache issue. It wasn't. The page literally had no text in the server response.)
The practical implication: if you want to be cited by AI search, server-rendering isn't optional for the content pages anymore. Not for ranking — for citation. An SPA that relies on client hydration loses the entire AI search channel. That's a new cost that didn't exist two years ago and it's why the mixed-architecture approach is winning.
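You can verify this channel yourself without any tooling. A minimal sketch; the URL and the headline string are placeholders for your own page:

```ts
// Fetch the page the way a non-rendering crawler does: raw HTML, no JavaScript executed.
const url = 'https://yoursite.com/blog/some-post'; // placeholder
const res = await fetch(url, { headers: { 'User-Agent': 'GPTBot/1.0' } });
const html = await res.text();

// If the headline isn't in the raw response, GPTBot (and ChatGPT citations) never see it.
console.log(html.includes('Your headline text') ? 'content is in the raw HTML' : 'empty shell');
```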
1. Hydration cost blocking LCP. You ship a server-rendered page, it looks great in view-source, and then hydration loads 400KB of JavaScript that blocks the main thread for 2.8 seconds on a mid-range Android. Your LCP craters. Google's Core Web Vitals penalty kicks in. Fix: React Server Components, selective hydration, or Astro-style islands (see the hydration sketch after this list). Measure in WebPageTest on the Moto G Power profile, not on your M3 MacBook.
2. Loader waterfalls slowing TTFB. Your server component fetches the post. The post fetches the author. The author fetches the org. Each await blocks the next. The page ships in 1,400ms when it should ship in 400ms. Fix: fetch in parallel with Promise.all, or cache aggressively at the edge (see the waterfall sketch after this list). Sequential awaits inside a single loader or server component are where this usually hides. Watch for it.
3. Meta tags not syncing on client-side navigation. This one still bites senior developers. You land on /blog/post-a, the meta title is correct. The user clicks a link, React Router pushes to /blog/post-b, the URL changes, the content changes, the meta title does not. Social shares are wrong. AI crawlers (which don't run JS anyway) only ever see whatever metadata is in the raw server response. Fix: use your framework's meta exports (generateMetadata in Next, meta in Remix, useHead in Nuxt). Don't hand-manipulate document.title.
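For the hydration-cost mistake, one lever is to stop shipping the heaviest client-only widget with the initial bundle. A minimal sketch using next/dynamic with SSR disabled; the component names and file are hypothetical:

```tsx
'use client';
// Hypothetical client component that lazy-loads a heavy widget so its JS
// never blocks initial hydration or LCP.
import dynamic from 'next/dynamic';

const HeavyRecommendations = dynamic(() => import('./heavy-recommendations'), {
  ssr: false, // skip server rendering; this widget only matters after interaction
  loading: () => <p>Loading recommendations…</p>,
});

export default function RelatedProducts({ postId }: { postId: string }) {
  return <HeavyRecommendations postId={postId} />;
}
```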
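For the loader-waterfall mistake, the fix is to start independent fetches together and only await the genuinely dependent ones. A sketch with hypothetical data helpers:

```tsx
// Hypothetical helpers standing in for whatever your loader or server component calls.
declare function getPost(slug: string): Promise<{ authorId: string; title: string }>;
declare function getAuthor(id: string): Promise<{ name: string }>;
declare function getRelatedPosts(slug: string): Promise<string[]>;

// Waterfall: three round trips stack up because each await blocks the next.
async function loadPageSlow(slug: string) {
  const post = await getPost(slug);
  const author = await getAuthor(post.authorId);
  const related = await getRelatedPosts(slug); // never needed `post`, still waited on it
  return { post, author, related };
}

// Parallel: independent fetches start together; only the dependent one waits.
async function loadPageFast(slug: string) {
  const [post, related] = await Promise.all([getPost(slug), getRelatedPosts(slug)]);
  const author = await getAuthor(post.authorId);
  return { post, author, related };
}
```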
I called this "three mistakes" and I'm aware there are more. But these three account for roughly every SPA SEO audit I've run in the last eighteen months. The rest is long-tail.
Four tools, in order of how often I actually open them:
- `curl -A "GPTBot/1.0" https://yoursite.com/page` shows what AI crawlers see. If the response is an empty shell, you have an AI-channel problem.
- GSC's Coverage report will tell you if pages are "Discovered - currently not indexed", which is Google's way of saying "I found this but it wasn't worth my crawl budget." On an SPA that usually means the content failed to render or the page was too slow to justify re-crawling. Check URL Inspection on five affected URLs before you do anything else.
No. React renders fine. What's bad for SEO is a React app that only produces HTML after client-side hydration. Use Next.js, Remix, or Astro to server-render the content pages. The dashboard behind login can stay client-only.
Not specifically Next.js, but you need something that produces HTML at request time or build time. Remix, Astro, Gatsby, or Vite + a prerender proxy all work. Plain Create React App (client-only) will struggle to rank and won't be cited by AI search engines at all.
Not meaningfully. Google's rendering pipeline uses an evergreen Chromium and renders pages within the same crawl flow. The "two-wave" framing was accurate around 2018 and Martin Splitt has been correcting it publicly since 2019.
Run `curl -A "GPTBot/1.0" https://yoursite.com/` and read the response. GPTBot doesn't run JavaScript. Whatever's in the raw HTML is what ChatGPT ingests. Our AI crawler inspector does this for you across Googlebot, GPTBot, ClaudeBot, and PerplexityBot.
It's valid but suboptimal. Prerender.io and Rendertron work today and Google accepts the pattern. The reasons to move off them are cost and reliability: prerender caches go stale, AI bots don't trigger the proxy reliably, and server-rendering in your framework is usually cheaper once you've set it up.
Love this deep dive on SPAs and client-side rendering — the React/Vue/Angular SEO pitfalls were explained really well! 🙌 I migrated a React SPA to hybrid SSR + prerendering and saw a crawl/index uptick in under a week. Please do a tutorial on hydration and sitemap strategies next 🙏
Nice — glad it helped and awesome you saw a quick uptick! I did the same move from CRA SPA → hybrid SSR + prerendering last year and saw similar gains, fwiw.
A few practical tips from that migration that might help for the tutorial you asked for:
- Hydration pitfalls: mismatches usually come from non-deterministic things in render (Date.now(), Math.random(), generated IDs, or useEffect producing visible DOM changes). Fix by moving client-only stuff into useEffect or guarding it (if (typeof window === 'undefined') ...), or use deterministic id libs.
- Streaming/partial hydration: if you’re using React 18/Next, streaming SSR + selective client hydration (islands-ish or client boundary components) reduces TTI without sacrificing SEO — imo worth covering.
- Debugging: curl or fetch the page server-side and compare to what Chrome renders after hydration; React devtools console will show hydration mismatch warnings. Also check Search Console’s “Inspect URL” to see what Googlebot sees.
- Sitemap strategy: generate sitemaps at build for static routes, dynamically for API-driven content (rebuild or incremental), split into sitemap index if >50k URLs, include lastmod, and reference it from robots.txt. For multi-lingual sites include hreflang entries or separate sitemaps per locale.
- Tools I used: Next.js (SSR + static props), next-sitemap for generation, prerender.io for tricky bots, and Search Console + server logs to confirm indexing.
If you want, I can write that hydration + sitemap tutorial — what would you prefer: code-heavy step-by-step for Next.js, or framework-agnostic notes + examples? Any stack specifics (Next/Remix/Vite/Netlify) you’re on?
ngl SPAs can rank.