<p>Hash-based URLs can quietly hide important pages from Google. If content lives behind # states instead of real URLs, indexation, internal linking, and SEO reporting usually get messy fast.</p>
<p>URL fragment indexing is the idea that content after a # in a URL can rank like its own page. In most SEO situations, that’s the wrong model: Google usually treats the base URL as the page and ignores the fragment as a separate indexable document.</p>
I keep running into this on audits because hash URLs look deceptively page-like. They feel real. They copy like real URLs, they change when users click around, and product teams often assume Google will treat each state as its own page. Usually it won’t.
I used to be a little softer on this point. Years ago, if a JavaScript app rendered well enough, I’d sometimes say, “Google can probably figure it out.” After enough migrations, enough Search Console investigations, and one painful ecommerce rebuild where category depth vanished from the index almost overnight, I revised that view. If a page matters for organic search, I want it on a real URL. Full stop.
URL fragment indexing is the belief that the part of a URL after # can be indexed and ranked like a separate page. In most cases, search engines treat the main URL as the page and do not rely on fragments as unique crawlable documents.
A fragment identifier is the part after the #, like:
- https://example.com/page#pricing
- https://example.com/category#red-shoes
- https://example.com/app#/products

Those three examples are not equal from an SEO perspective, but they share the same core issue: the fragment is mainly a browser-side instruction, not a server-level document request.
That distinction matters more than most teams expect.
Here’s the simplest way I explain it to clients:
- example.com/products/shoes
- example.com/products?color=red
- example.com/products#red

The browser handles fragments client-side. In normal web requests, the fragment is not sent to the server. So when Googlebot requests a URL, the server generally sees /products, not /products#red.
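You can see that split with nothing more than Python's standard urlsplit. The URL below is an invented example; the point is which pieces survive into an HTTP request:

```python
from urllib.parse import urlsplit

url = "https://example.com/products#red"
parts = urlsplit(url)

# The fragment is parsed out on the client side
print(parts.path)      # /products
print(parts.fragment)  # red

# An HTTP request target is built from path + query only; the
# fragment is never transmitted, so the server sees just this:
request_target = parts.path + ("?" + parts.query if parts.query else "")
print(request_target)  # /products
```

Whatever logic runs off that fragment happens entirely in the browser, which is exactly why a crawler fetching the URL never encounters it.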
That’s the part many SEO plans accidentally build on top of. Bad foundation.
Google has been pretty consistent here. Search Central documentation around URL structure and JavaScript SEO has long treated fragments as something you should not depend on for unique indexable content. And the old AJAX crawling pattern with #! and _escaped_fragment_? Deprecated a long time ago for a reason. The platform moved on.
(Quick caveat: Google can render a lot of JavaScript now, and yes, sometimes it can understand more than people give it credit for. But that does not mean hash states are a safe substitute for distinct URLs.)
My mental model was wrong here for a while. I used to lump “renderable” and “indexable as separate URLs” into the same bucket. They’re not the same problem. Rendering means Google can see content. Indexing means Google treats something as its own document.
Different question.
A few years back, I looked at a storefront that had rebuilt parts of category navigation with a JS layer. Not a pure SPA, more of a hybrid mess — which is honestly where a lot of SEO damage happens. Users clicked color and use-case filters, and the interface updated smoothly. Nice UX. Fast enough. The URLs changed too, which made the dev team feel safe.
The problem: those states were pushed into fragments.
So instead of clean crawlable URLs for high-intent category combinations, the site was producing things like:
- /running-shoes#men
- /running-shoes#trail
- /running-shoes#waterproof

To the team, those were basically landing pages. To Google, the signal set kept collapsing back to /running-shoes.
We saw the symptoms before we saw the cause. Search Console indexed-page counts were lower than the merchandising team expected. Screaming Frog found far fewer meaningful URLs than the site visually presented. Organic landing pages were consolidating around broad category roots instead of the commercial subcategories the business wanted to rank.
I remember one debugging session clearly because it was one of those annoying ones where nothing looked broken in the browser. Everything worked for users. That’s what made it dangerous. I opened DevTools, checked requests, clicked filter states, and watched the app mutate the fragment without generating crawlable endpoints. Then I compared the rendered states against what the crawler could actually collect.
There it was.
Not a content problem. Not a links problem. A URL architecture problem.
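The core of that comparison is simple enough to sketch: strip the fragments and count how many distinct documents remain. The URLs below are made up to mirror the case above:

```python
from urllib.parse import urldefrag

# States the app presents to users (hypothetical examples)
visible_states = [
    "https://example.com/running-shoes#men",
    "https://example.com/running-shoes#trail",
    "https://example.com/running-shoes#waterproof",
    "https://example.com/running-shoes",
]

# What a crawler can actually collect: the fragment-free documents
crawlable_docs = {urldefrag(u).url for u in visible_states}

print(len(visible_states))   # 4 states the team thinks of as pages
print(len(crawlable_docs))   # 1 document Google is likely to index
```

When the second number is dramatically smaller than the first, you are looking at a URL architecture problem, not a content problem.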
Once the important category states were moved to real paths and the internal linking reflected those paths, indexing became much more predictable. Not magic. Just predictable. (Side note: this is why I get nervous when a team says “the framework handles routing for us” — sometimes it does, sometimes it absolutely does not in the way SEO needs.)
I don’t want to overcorrect here. Fragments are not evil. They’re useful.
They’re usually fine for:

- Jump links and tables of contents
- In-page navigation between sections
- UI state that was never meant to rank on its own

Example:
- https://example.com/product/widget-1#reviews
- https://example.com/docs/api#authentication

In both cases, the intended indexable page is the base URL:
- /product/widget-1
- /docs/api

The fragment just improves usability. Good. Keep it.
The problem starts when a team expects this:
- example.com/services#seo
- example.com/services#ppc
- example.com/services#content-marketing

…to behave like three service pages.
That assumption is where rankings die quietly.
This shows up most often in older SPAs, rushed migrations, faceted navigation, and “SEO-friendly” frontend rebuilds that were never actually checked with a crawler.
Common examples:
- example.com/#/category/shoes
- example.com/#/product/123
- example.com/jobs#berlin
- example.com/locations#chicago
- example.com/blog#seo

To users, these feel like separate destinations. To search engines, they are often just states of one underlying document.
The biggest losses usually happen in these templates:

- Ecommerce category and filtered category pages
- Location and city pages
- Service and solution pages
This is expensive because category demand often maps directly to revenue. If “red running shoes,” “men’s trail shoes,” or “sofa bed with storage” exist only as fragment states, you’re hiding commercially useful pages behind a mechanism Google usually won’t treat as standalone URLs.
I’ve seen teams assume that because users can share the URL, the page must be indexable. That’s not how it works.
If a filtered state has search demand and business value, decide whether it deserves a crawlable URL pattern. If yes, give it one. If no, keep it as UX state and stop expecting SEO output from it.
Intentionality matters here.
I still see location selectors built with tabs or map interactions that update hashes like #boston or #dallas. Then the company wonders why local organic visibility is weak despite “having pages” for every city.
You don’t have pages. You have states.
That sounds harsh, but it’s the right framing.
Some apps expose inventory, archives, or help-center sections entirely through fragment routing. The UI looks rich. The crawl graph looks tiny.
That mismatch is one of the easiest ways to burn months.
Sometimes that’s fine — many of those states should not be indexed anyway. But teams need to decide that on purpose. I’ve seen stores accidentally hide the few filter combinations that did deserve indexation while generating endless low-value client-side states that never could rank in the first place.
(Edit, mid-thought — actually, this is where I see the most confusion: people mix up “users need this state” with “Google should index this state.” Those are related, not identical.)
Here’s the rule I use:
If you want something to rank as its own page, give it its own non-fragment URL.
Usually that means one of these patterns:
- /services/seo
- /locations/chicago
- /help/pricing
- /category/shoes?color=red, if that filtered state should be crawlable

Google can render JavaScript. Sometimes very well. But rendering JavaScript is not the same as assigning independent indexable identity to every hash-based state in your app.
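That rule is mechanical enough to encode as a rough screen. The helper name below is hypothetical; it simply rejects any URL whose identity lives in the fragment, because that part never reaches the server:

```python
from urllib.parse import urlsplit

def can_carry_own_ranking(url: str) -> bool:
    """Rough audit screen: can this exact URL identify a standalone
    indexable page? Illustrative only, not a complete SEO check."""
    parts = urlsplit(url)
    if parts.fragment:
        # Identity depends on client-side state the server never sees
        return False
    # A real path or a controlled query string can carry identity
    return bool(parts.path and parts.path != "/") or bool(parts.query)

print(can_carry_own_ranking("https://example.com/services/seo"))             # True
print(can_carry_own_ranking("https://example.com/category/shoes?color=red")) # True
print(can_carry_own_ranking("https://example.com/services#seo"))             # False
print(can_carry_own_ranking("https://example.com/#/products"))               # False
```

Note the asymmetry: a False here doesn't mean the base page can't rank, only that this particular state can't rank as its own page.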
And if your canonicals, sitemaps, internal links, and hreflang all point to the base URL while the app presents dozens of fragment states, Google gets a very clear message: there is one real page here.
I’d check for this if any of the following are true:
- The app routes key views through #/ states.
- Navigation relies on JavaScript events instead of crawlable href destinations.

One practical workflow I use:

1. Crawl the site in standard mode and note how many meaningful URLs are found.
2. Crawl again with JavaScript rendering enabled and compare the two sets.
3. Click through important states in the browser and watch whether the URL changes by path or by fragment.
4. Run key URLs through Search Console's URL Inspection to see what Google treats as the canonical page.
That process catches a lot.
The fix is usually architectural, not cosmetic.
If the content should rank, give it a real path or a controlled query-based URL.
Examples:
- example.com/#/services/seo → example.com/services/seo
- example.com/products#boots → example.com/products/boots, if it is a true category page
- example.com/locations#chicago → example.com/locations/chicago

This is the big one. Everything else is support.
Modern frameworks support clean routing. Use it. A URL like /app/products gives search engines something far more stable than /#/products when that page is meant to earn organic traffic.
SSR, static generation, or sensible pre-rendering can reduce risk for JavaScript-heavy sites. I’m not dogmatic about which implementation a team chooses. I care that important content appears reliably on unique URLs.
Important pages should be linked with normal anchor tags and crawlable href values. If navigation depends on JS events that mutate a fragment, you’re making discovery harder than it needs to be.
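You can surface the worst offenders with a quick scan for hrefs that give a crawler nothing to fetch. This sketch uses Python's standard html.parser; the nav markup and the class name are invented for illustration:

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collect anchor hrefs that a crawler cannot use as destinations."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        # Empty, fragment-only, or javascript: hrefs are dead ends for discovery
        if not href or href.startswith("#") or href.startswith("javascript:"):
            self.flagged.append(href)

html = """
<nav>
  <a href="/services/seo">SEO</a>
  <a href="#" onclick="openTab('ppc')">PPC</a>
  <a href="javascript:void(0)">Content</a>
</nav>
"""
audit = LinkAudit()
audit.feed(html)
print(audit.flagged)  # ['#', 'javascript:void(0)']
```

Two of the three "links" above exist only for JavaScript event handlers; a crawler following hrefs would discover just one page.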
These systems should point to the actual indexable version. Fragments should not be the backbone of your SEO architecture.
Use this when you’re unsure whether a fragment is acceptable:
Question 1: Do I want this exact state to appear as a standalone Google result?
Question 2: Does this state have unique content, intent, or commercial value?
Question 3: Can the state be accessed on a unique non-fragment URL?
Question 4: Do internal links, canonicals, sitemap entries, and rendering all support that URL?
Short version: if it should rank, don’t hide it behind #.
A pricing tab, FAQ tab, or specs tab is usually not its own SEO page just because the URL becomes #pricing or #faq.
Assuming that because Google can execute JavaScript, it will index hash states as pages? It doesn’t. Helpful capability, wrong conclusion.
Hiding filtered category content behind fragments hurts ecommerce sites the most.

Hash-based app routing for core pages also lingers, especially with old SPA patterns and lazy migrations.

So does canonicalizing every state to the base URL, then wondering why all signals consolidate there.
Users being able to copy a URL does not mean Google will treat it as a distinct page.
Ask yourself:
If I removed every #, would my important pages still exist as crawlable URLs?

If the answer makes you uncomfortable, good. That discomfort usually points to the real work.
Does Google index URL fragments as separate pages?
Usually not. Google generally treats the main URL as the indexable document and ignores the fragment as a unique page identifier.
Are anchor links like #reviews bad for SEO?
No. They’re usually fine when they just jump users to a section on an already crawlable page.
What about #! hashbang URLs?
That old AJAX crawling model is outdated. I would not build modern SEO-critical architecture around it.
Can a single-page application rank at all?
Parts of it can rank, but relying on fragment states for unique page indexation is risky. If pages matter, expose them on real URLs.
Should filtered states use query parameters instead of fragments?
If the filtered state should be crawlable, yes, usually use a controlled query or path pattern. If it should not be indexed, keeping it out of crawlable URL space may be fine.
Do fragment URLs waste crawl budget?
Only indirectly; the bigger issue is usually not wasted crawl budget but missing crawlable documents. Google may just see fewer real pages than you think exist.
How do I audit a site for fragment routing problems?
Use Screaming Frog in normal and JS modes, inspect rendered output, review internal links, and use Search Console URL Inspection to see what Google treats as the canonical page.
Should I move hash routes to the History API?
Yes, when you need unique URLs for important content. It gives you cleaner, more crawlable routing patterns.
URL fragment indexing is one of those ideas that sounds plausible until you look at how the web actually works. Fragments are useful for navigation within a page and for UI state. They are not a dependable foundation for SEO pages.
If your growth depends on content behind #, I’d assume you have an architecture problem until proven otherwise. And in my experience, it usually is one.
https://developers.google.com/search/docs/crawling-indexing/javascript/javascript-seo-basics
What's happening: Google explains core JavaScript SEO practices, including the need to make content discoverable and to use proper URL patterns for crawlable pages.
What to do: If your important content only appears after a hash change, redesign routing so each SEO-relevant view has a real URL. Use this documentation as a baseline for implementation reviews.
https://developers.google.com/search/blog/2015/10/deprecating-our-ajax-crawling-scheme
What's happening: Google announced the deprecation of the old AJAX crawling scheme that used hashbang URLs and `_escaped_fragment_` handling.
What to do: Treat this as a signal that legacy `#!` architectures are outdated. Migrate key pages to standard URLs supported by modern rendering and internal linking practices.
https://www.w3.org/TR/2012/WD-url-20120524/#url-fragment-string
What's happening: The W3C URL specification describes fragments as the part of the URL used to identify a secondary resource or location within a resource, reinforcing that they are handled differently from the main requested URL.
What to do: Use fragments for navigation within a document, not as the primary identifier for SEO landing pages. Keep ranking targets on path-based or otherwise crawlable URLs.
https://www.screamingfrog.co.uk/seo-spider/tutorials/crawl-javascript-seo-websites/
What's happening: Screaming Frog documents how to crawl JavaScript websites and compare what is discovered under standard and rendered crawl modes.
What to do: Use crawling tests to confirm whether your site's important views exist as actual URLs or only as client-side states. This is often the fastest way to surface fragment routing problems.
| URL pattern | Typical purpose | Usually crawlable as separate URL? | Usually indexable as separate page? | Best use case |
|---|---|---|---|---|
| /page | Primary document path | Yes | Yes | Main content pages |
| /page?color=red | Filtered or parameterized state | Usually yes | Sometimes, depending on strategy | Facets, tracking-free variants, controlled search pages |
| /page#reviews | In-page jump link | No, generally treated as same page | No, usually same indexed document | Navigation to sections on a page |
| /#/products | Hash-based client-side route | Unreliable for SEO as unique page | Usually no as a separate page | Legacy SPA behavior, not ideal for SEO pages |
| /products/shoes | Dedicated resource path | Yes | Yes | Category, product, service, or location pages |
An in-page anchor like #faq is usually fine.

If key pages still live on /#/route paths, migrate those pages to clean routes using the History API or equivalent framework routing.

✅ Better approach: Teams often see `/#/product/1` and `/#/product/2` as two different pages because the app renders different content. From a search engine perspective, however, those states may not be separate crawlable documents. This leads to overestimating how much of the site is eligible for indexing and underestimating the need for real URL routing.
✅ Better approach: A common architecture mistake is putting core SEO pages behind fragments, such as service pages, city pages, or key product categories. That can hide commercially important content from search or consolidate it into one weakly targeted URL. If a page should rank for its own topic, it generally needs its own non-fragment destination.
✅ Better approach: Google can render a lot of JavaScript, but rendering does not guarantee that fragment-defined states will be indexed as unique pages. Some teams assume that because Google can see dynamic content, it will also infer separate URLs from hashes. That is an unsafe assumption. Crawlability, routing, canonicals, and internal links still need to reflect actual indexable pages.
✅ Better approach: Some sites submit proper URLs in XML sitemaps but send users and crawlers through hash-based app routes after the first click. This creates a mismatch between declared indexable pages and the experience the crawler can actually traverse. Search engines may still struggle to discover, render, or trust the intended page set consistently.
✅ Better approach: If every app state effectively points back to the same canonical URL, search engines receive a strong signal that there is only one page to index. This is especially harmful when the team expects fragment variations to rank separately. Canonicals should support the actual page architecture, not collapse meaningful content into one generic URL.
✅ Better approach: Not all fragments are harmful. Jump links, FAQ table-of-contents links, and in-page navigation are often perfectly reasonable. The mistake goes both ways: some teams misuse fragments for indexable content, while others remove useful on-page anchors out of fear. The key distinction is whether the fragment is merely navigating within a page or attempting to define a separate page.