
URL Fragment Indexing

Hash-based URLs can quietly hide important pages from Google. If content lives behind # states instead of real URLs, indexation, internal linking, and SEO reporting usually get messy fast.

Updated Apr 26, 2026
Screenshot: URL fragment behavior in a search and browser context. Source: ahrefs.com

Quick Definition

URL fragment indexing is the idea that content after a # in a URL can rank like its own page. In most SEO situations, that’s the wrong model: Google usually treats the base URL as the page and ignores the fragment as a separate indexable document.

I keep running into this on audits because hash URLs look deceptively page-like. They feel real. They copy like real URLs, they change when users click around, and product teams often assume Google will treat each state as its own page. Usually it won’t.

I used to be a little softer on this point. Years ago, if a JavaScript app rendered well enough, I’d sometimes say, “Google can probably figure it out.” After enough migrations, enough Search Console investigations, and one painful ecommerce rebuild where category depth vanished from the index almost overnight, I revised that view. If a page matters for organic search, I want it on a real URL. Full stop.


A fragment identifier is the part after the #, like:

  • https://example.com/page#pricing
  • https://example.com/category#red-shoes
  • https://example.com/app#/products

Those three examples are not equal from an SEO perspective, but they share the same core issue: the fragment is mainly a browser-side instruction, not a server-level document request.

That distinction matters more than most teams expect.

Why fragments usually don’t behave like pages

Here’s the simplest way I explain it to clients:

  • A path is a page candidate: example.com/products/shoes
  • A query parameter can also define a crawlable state: example.com/products?color=red
  • A fragment usually just points within the current document: example.com/products#red

The browser handles fragments client-side. In normal web requests, the fragment is not sent to the server. So when Googlebot requests a URL, the server generally sees /products, not /products#red.

That’s the part many SEO plans accidentally build on top of. Bad foundation.
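
You can verify this yourself with the standard WHATWG URL API, available in modern browsers and Node.js; a minimal sketch, with an illustrative URL:

```typescript
// Minimal sketch: which parts of a URL actually reach the server.
// Uses the standard WHATWG URL API (browsers, Node.js, Deno).
const url = new URL("https://example.com/products#red");

console.log(url.pathname); // "/products"  (part of the HTTP request)
console.log(url.hash);     // "#red"       (stays in the browser)

// fetch() strips the fragment before sending the request, so the server
// log for this call would show GET /products, never GET /products#red.
// await fetch(url.href);
```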

Google has been pretty consistent here. Search Central documentation around URL structure and JavaScript SEO has long treated fragments as something you should not depend on for unique indexable content. And the old AJAX crawling pattern with #! and _escaped_fragment_? Deprecated a long time ago for a reason. The platform moved on.

(Quick caveat: Google can render a lot of JavaScript now, and yes, sometimes it can understand more than people give it credit for. But that does not mean hash states are a safe substitute for distinct URLs.)

My mental model was wrong here for a while. I used to lump “renderable” and “indexable as separate URLs” into the same bucket. They’re not the same problem. Rendering means Google can see content. Indexing means Google treats something as its own document.

Different question.

Real-world example: the Shopify-adjacent rebuild that looked fine until it didn’t

A few years back, I looked at a storefront that had rebuilt parts of category navigation with a JS layer. Not a pure SPA, more of a hybrid mess — which is honestly where a lot of SEO damage happens. Users clicked color and use-case filters, and the interface updated smoothly. Nice UX. Fast enough. The URLs changed too, which made the dev team feel safe.

The problem: those states were pushed into fragments.

So instead of clean crawlable URLs for high-intent category combinations, the site was producing things like:

  • /running-shoes#men
  • /running-shoes#trail
  • /running-shoes#waterproof

To the team, those were basically landing pages. To Google, the signal set kept collapsing back to /running-shoes.

We saw the symptoms before we saw the cause. Search Console indexed-page counts were lower than the merchandising team expected. Screaming Frog found far fewer meaningful URLs than the site visually presented. Organic landing pages were consolidating around broad category roots instead of the commercial subcategories the business wanted to rank.

I remember one debugging session clearly because it was one of those annoying ones where nothing looked broken in the browser. Everything worked for users. That’s what made it dangerous. I opened DevTools, checked requests, clicked filter states, and watched the app mutate the fragment without generating crawlable endpoints. Then I compared the rendered states against what the crawler could actually collect.

There it was.

Not a content problem. Not a links problem. A URL architecture problem.

Once the important category states were moved to real paths and the internal linking reflected those paths, indexing became much more predictable. Not magic. Just predictable. (Side note: this is why I get nervous when a team says “the framework handles routing for us” — sometimes it does, sometimes it absolutely does not in the way SEO needs.)

When fragments are fine

I don’t want to overcorrect here. Fragments are not evil. They’re useful.

They’re usually fine for:

  • jumping to a section on a long page
  • opening tabs or accordions for users
  • preserving UI state that does not need to rank
  • linking directly to reviews, specs, or FAQs within an already indexable page

Example:

  • https://example.com/product/widget-1#reviews
  • https://example.com/docs/api#authentication

In both cases, the intended indexable page is the base URL:

  • /product/widget-1
  • /docs/api

The fragment just improves usability. Good. Keep it.

The problem starts when a team expects this:

  • example.com/services#seo
  • example.com/services#ppc
  • example.com/services#content-marketing

…to behave like three service pages.

That assumption is where rankings die quietly.

Where URL fragment indexing causes real SEO damage

This shows up most often in older SPAs, rushed migrations, faceted navigation, and “SEO-friendly” frontend rebuilds that were never actually checked with a crawler.

Common examples:

  • example.com/#/category/shoes
  • example.com/#/product/123
  • example.com/jobs#berlin
  • example.com/locations#chicago
  • example.com/blog#seo

To users, these feel like separate destinations. To search engines, they are often just states of one underlying document.

The biggest losses usually happen in these templates:

1. Category and faceted pages

This is expensive because category demand often maps directly to revenue. If “red running shoes,” “men’s trail shoes,” or “sofa bed with storage” exist only as fragment states, you’re hiding commercially useful pages behind a mechanism Google usually won’t treat as standalone URLs.

I’ve seen teams assume that because users can share the URL, the page must be indexable. That’s not how it works.

If a filtered state has search demand and business value, decide whether it deserves a crawlable URL pattern. If yes, give it one. If no, keep it as UX state and stop expecting SEO output from it.

Intentionality matters here.

2. Location pages

I still see location selectors built with tabs or map interactions that update hashes like #boston or #dallas. Then the company wonders why local organic visibility is weak despite “having pages” for every city.

You don’t have pages. You have states.

That sounds harsh, but it’s the right framing.

3. Product and article archives in JS apps

Some apps expose inventory, archives, or help-center sections entirely through fragment routing. The UI looks rich. The crawl graph looks tiny.

That mismatch is one of the easiest ways to burn months.

4. Internal search and sort/filter combinations

Sometimes that’s fine — many of those states should not be indexed anyway. But teams need to decide that on purpose. I’ve seen stores accidentally hide the few filter combinations that did deserve indexation while generating endless low-value client-side states that never could rank in the first place.

(Edit, mid-thought — actually, this is where I see the most confusion: people mix up “users need this state” with “Google should index this state.” Those are related, not identical.)

How Google treats fragments in practice

Here’s the rule I use:

If you want something to rank as its own page, give it its own non-fragment URL.

Usually that means one of these patterns:

  • /services/seo
  • /locations/chicago
  • /help/pricing
  • /category/shoes?color=red if that filtered state should be crawlable

Google can render JavaScript. Sometimes very well. But rendering JavaScript is not the same as assigning independent indexable identity to every hash-based state in your app.

And if your canonicals, sitemaps, internal links, and hreflang all point to the base URL while the app presents dozens of fragment states, Google gets a very clear message: there is one real page here.

Signs you have a fragment indexing problem

I’d check for this if any of the following are true:

  1. Your app navigates users into #/ states.
  2. Important internal links rely on hash URLs instead of standard crawlable href destinations.
  3. Google Search Console shows far fewer indexed pages than the business thinks exist.
  4. Screaming Frog’s standard crawl finds only a fraction of the site users can see.
  5. Critical content appears only after client-side fragment changes.
  6. Canonical tags keep pointing to the same base URL regardless of visible state.
  7. XML sitemaps list clean URLs, but real user flows mostly land in hash-based states.

One practical workflow I use:

  • Crawl the site normally.
  • Crawl it again with JavaScript rendering.
  • Compare the URL sets.
  • Inspect a few “pages” that only seem to exist behind fragments.
  • Check URL Inspection in Search Console to see what Google considers canonical.

That process catches a lot.
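
If you want to script the comparison step, something like this rough sketch works, assuming each crawl is exported to a plain-text file with one URL per line (the file names are hypothetical):

```typescript
// Sketch: diff the URL sets from a standard crawl and a JS-rendered crawl.
import { readFileSync } from "node:fs";

function loadUrls(path: string): Set<string> {
  return new Set(
    readFileSync(path, "utf8")
      .split("\n")
      .map((line) => line.trim())
      .filter(Boolean)
      // Normalize: a fragment never identifies a separate document.
      .map((raw) => { const u = new URL(raw); u.hash = ""; return u.href; })
  );
}

const standard = loadUrls("crawl-standard.txt");
const rendered = loadUrls("crawl-rendered.txt");

// URLs that only appear once JavaScript runs are the ones to investigate:
// do they exist as real, linkable documents, or only as client-side states?
for (const url of rendered) {
  if (!standard.has(url)) console.log("JS-only:", url);
}
```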

How to fix URL fragment indexing issues

The fix is usually architectural, not cosmetic.

1. Move important content to real URLs

If the content should rank, give it a real path or a controlled query-based URL.

Examples:

  • example.com/#/services/seo → example.com/services/seo
  • example.com/products#boots → example.com/products/boots if it is a true category page
  • example.com/locations#chicago → example.com/locations/chicago

This is the big one. Everything else is support.
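
One wrinkle worth knowing before you start mapping redirects: the fragment never reaches the server, so a server-side 301 cannot see the old #/ routes. The hop from legacy hash states to the new paths has to happen client-side. A minimal sketch, assuming the new paths mirror the old hash routes:

```typescript
// Minimal sketch: client-side redirect from legacy hash routes to real
// paths. This must run in the browser on the old base URL, because the
// server never receives the fragment.
function redirectLegacyHashRoute(): void {
  const hash = window.location.hash; // e.g. "#/services/seo"
  if (!hash.startsWith("#/")) return;

  const newPath = hash.slice(1); // "/services/seo"
  // location.replace keeps the dead hash URL out of browser history.
  window.location.replace(newPath);
}

redirectLegacyHashRoute();
```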

2. Replace hash routing with History API routing

Modern frameworks support clean routing. Use it. A URL like /app/products gives search engines something far more stable than /#/products when that page is meant to earn organic traffic.
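
As a rough, framework-agnostic sketch of what that means: links stay normal crawlable anchors, and JavaScript intercepts the click to update the path with pushState. The renderRoute callback is a stand-in for whatever your app actually uses to draw a view:

```typescript
// Sketch: History API routing instead of hash mutation. Links remain
// normal crawlable <a href="/..."> elements; JavaScript intercepts the
// click only to avoid a full page reload.
function enableCleanRouting(renderRoute: (path: string) => void): void {
  document.addEventListener("click", (event) => {
    const target = event.target as HTMLElement | null;
    const link = target?.closest("a[href^='/']");
    if (!(link instanceof HTMLAnchorElement)) return;

    event.preventDefault();
    // The visible URL becomes a real path like /app/products, no "#/".
    history.pushState({}, "", link.pathname);
    renderRoute(link.pathname);
  });

  // Back/forward buttons re-render from the real path too.
  window.addEventListener("popstate", () => renderRoute(location.pathname));
}
```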

3. Improve rendering strategy

SSR, static generation, or sensible pre-rendering can reduce risk for JavaScript-heavy sites. I’m not dogmatic about which implementation a team chooses. I care that important content appears reliably on unique URLs.

4. Fix internal linking

Important pages should be linked with normal anchor tags and crawlable href values. If navigation depends on JS events that mutate a fragment, you’re making discovery harder than it needs to be.
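
A small sketch of the difference, with illustrative service slugs; the point is that the href is a real path a crawler can follow, whatever the JavaScript does afterwards:

```typescript
// Sketch: generate navigation as real, crawlable anchors. The slugs and
// the /services/ path are illustrative.
const services = ["seo", "ppc", "content-marketing"];

const navMarkup = services
  .map((slug) => `<a href="/services/${slug}">${slug}</a>`)
  .join("\n");

// Crawlable:      <a href="/services/seo">
// Not a page:     <a href="#seo">                    (fragment, same document)
// Not crawlable:  <span onclick="openTab('seo')">    (no href at all)
document.querySelector("nav")!.innerHTML = navMarkup;
```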

5. Align canonicals, sitemaps, and hreflang

These systems should point to the actual indexable version. Fragments should not be the backbone of your SEO architecture.
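
As a hypothetical helper on the canonical side, assuming path-based indexable URLs and a short allowlist of crawlable query parameters (the parameter names here are illustrative):

```typescript
// Hypothetical helper: build the canonical URL for a page. Assumes the
// site's indexable URLs are path-based plus a small set of approved
// crawlable parameters; everything else is UI state.
const CRAWLABLE_PARAMS = new Set(["color", "size"]);

function canonicalFor(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.hash = ""; // fragments never identify a separate document
  // Drop parameters that only encode UI state, keep the crawlable ones.
  for (const key of [...url.searchParams.keys()]) {
    if (!CRAWLABLE_PARAMS.has(key)) url.searchParams.delete(key);
  }
  return url.href;
}

console.log(canonicalFor("https://example.com/category/shoes?color=red&sort=price#grid"));
// -> "https://example.com/category/shoes?color=red"
```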

Decision tree

Use this when you’re unsure whether a fragment is acceptable:

Question 1: Do I want this exact state to appear as a standalone Google result?

  • No → A fragment may be fine.
  • Yes → Go to Question 2.

Question 2: Does this state have unique content, intent, or commercial value?

  • No → Keep it as on-page UX state.
  • Yes → Go to Question 3.

Question 3: Can the state be accessed on a unique non-fragment URL?

  • No → Rebuild routing or template logic.
  • Yes → Go to Question 4.

Question 4: Do internal links, canonicals, sitemap entries, and rendering all support that URL?

  • No → Fix those signals before expecting SEO performance.
  • Yes → It’s a valid candidate for indexation.

Short version: if it should rank, don’t hide it behind #.
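
If your team prefers this encoded rather than written out, the same tree fits in a small function; the field names are hypothetical, one flag per question:

```typescript
// The decision tree above, as a sketch. Each flag mirrors one question.
interface UrlState {
  shouldRankStandalone: boolean; // Q1
  hasUniqueValue: boolean;       // Q2
  hasNonFragmentUrl: boolean;    // Q3
  signalsAligned: boolean;       // Q4: links, canonicals, sitemap, rendering
}

function verdict(state: UrlState): string {
  if (!state.shouldRankStandalone) return "Fragment may be fine (UX state).";
  if (!state.hasUniqueValue) return "Keep it as on-page UX state.";
  if (!state.hasNonFragmentUrl) return "Rebuild routing or template logic.";
  if (!state.signalsAligned) return "Fix those signals before expecting SEO performance.";
  return "Valid candidate for indexation.";
}
```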

Common mistakes

Treating tabs as separate pages

A pricing tab, FAQ tab, or specs tab is usually not its own SEO page just because the URL becomes #pricing or #faq.

Assuming “Google renders JS” solves architecture

It doesn’t. Helpful capability, wrong conclusion.

Hiding valuable faceted states in fragments

This one hurts ecommerce sites the most.

Using hash routing for pages that need organic traffic

Especially with old SPA patterns and lazy migrations.

Pointing all canonicals to the base URL

Then wondering why all signals consolidate there.

Confusing shareable states with indexable documents

Users being able to copy a URL does not mean Google will treat it as a distinct page.

Self-check

Ask yourself:

  • If Google ignored everything after #, would my important pages still exist as crawlable URLs?
  • Can a crawler discover those pages without simulating app behavior endlessly?
  • Do my internal links point to real documents or just UI states?
  • Are my best revenue-driving category or location pages hidden behind fragments?
  • Does Search Console’s indexed count roughly match the set of pages I think should rank?

If those answers make you uncomfortable, good. That discomfort usually points to the real work.

FAQ

Does Google index URL fragments?

Usually not as separate pages. Google generally treats the main URL as the indexable document and ignores the fragment as a unique page identifier.

Are anchor links like #reviews bad for SEO?

No. They’re usually fine when they just jump users to a section on an already crawlable page.

What about #! hashbang URLs?

That old AJAX crawling model is outdated. I would not build modern SEO-critical architecture around it.

Can a JavaScript SPA rank if it uses fragments?

Parts of it can rank, but relying on fragment states for unique page indexation is risky. If pages matter, expose them on real URLs.

Should filter pages use query parameters or paths instead of fragments?

If the filtered state should be crawlable, yes, usually use a controlled query or path pattern. If it should not be indexed, keeping it out of crawlable URL space may be fine.

Do fragments affect crawl budget?

Only indirectly. The bigger issue is usually not wasted crawl budget but missing crawlable documents. Google may just see fewer real pages than you think exist.

How do I test whether fragments are causing indexation issues?

Use Screaming Frog in normal and JS modes, inspect rendered output, review internal links, and use Search Console URL Inspection to see what Google treats as the canonical page.

Is the History API better than hash routing for SEO?

Yes, when you need unique URLs for important content. It gives you cleaner, more crawlable routing patterns.

Bottom line

URL fragment indexing is one of those ideas that sounds plausible until you look at how the web actually works. Fragments are useful for navigation within a page and for UI state. They are not a dependable foundation for SEO pages.

If your growth depends on content behind #, I’d assume you have an architecture problem until proven otherwise. And in my experience, it usually is one.


Real-World Examples

https://developers.google.com/search/docs/crawling-indexing/javascript/javascript-seo-basics

What's happening: Google explains core JavaScript SEO practices, including the need to make content discoverable and to use proper URL patterns for crawlable pages.

What to do: If your important content only appears after a hash change, redesign routing so each SEO-relevant view has a real URL. Use this documentation as a baseline for implementation reviews.

https://developers.google.com/search/blog/2015/10/deprecating-our-ajax-crawling-scheme

What's happening: Google announced the deprecation of the old AJAX crawling scheme that used hashbang URLs and `_escaped_fragment_` handling.

What to do: Treat this as a signal that legacy `#!` architectures are outdated. Migrate key pages to standard URLs supported by modern rendering and internal linking practices.

https://www.w3.org/TR/2012/WD-url-20120524/#url-fragment-string

What's happening: The W3C URL specification describes fragments as the part of the URL used to identify a secondary resource or location within a resource, reinforcing that they are handled differently from the main requested URL.

What to do: Use fragments for navigation within a document, not as the primary identifier for SEO landing pages. Keep ranking targets on path-based or otherwise crawlable URLs.

https://www.screamingfrog.co.uk/seo-spider/tutorials/crawl-javascript-seo-websites/

What's happening: Screaming Frog documents how to crawl JavaScript websites and compare what is discovered under standard and rendered crawl modes.

What to do: Use crawling tests to confirm whether your site's important views exist as actual URLs or only as client-side states. This is often the fastest way to surface fragment routing problems.

How different URL parts usually behave for SEO

URL pattern | Typical purpose | Usually crawlable as separate URL? | Usually indexable as separate page? | Best use case
/page | Primary document path | Yes | Yes | Main content pages
/page?color=red | Filtered or parameterized state | Usually yes | Sometimes, depending on strategy | Facets, tracking-free variants, controlled search pages
/page#reviews | In-page jump link | No, generally treated as same page | No, usually same indexed document | Navigation to sections on a page
/#/products | Hash-based client-side route | Unreliable for SEO as a unique page | Usually no, as a separate page | Legacy SPA behavior, not ideal for SEO pages
/products/shoes | Dedicated resource path | Yes | Yes | Category, product, service, or location pages

When does this apply?

Should this content live behind a fragment?

  • If the content only helps users jump to a section of an existing page, then a fragment like #faq is usually fine.
  • If the content should rank as its own page, then give it a unique non-fragment URL.
  • If your SPA uses /#/route for key pages, then migrate those pages to clean routes using the History API or equivalent framework routing.
  • If a filtered state has organic search value, then create a crawlable URL strategy for it and define canonical/indexing rules.
  • If the state is only UI convenience and has no search value, then keeping it out of the index may be appropriate.
  • If you are unsure whether Google can see the content, then test with Google Search Console URL Inspection and a JavaScript-capable crawler before making assumptions.

Frequently Asked Questions

Does Google index content after the # in a URL?
Generally, no. Google usually treats the fragment part of a URL as a client-side reference rather than a separate page. A URL like `/page#section1` normally points to the same underlying document as `/page`. If the only thing making content unique is the fragment, that content is often not a reliable standalone candidate for indexing or ranking. For SEO purposes, important content should usually exist on a clean, non-fragment URL.
Are hash URLs bad for SEO?
They are not automatically bad, but they are risky when used for pages you want to rank independently. A hash URL is fine for jumping to a section on a page, opening tabs, or preserving interface state. It becomes an SEO problem when the hash defines category pages, product views, or other content that should appear in search results. In those cases, a normal path-based or well-controlled parameterized URL is usually a better choice.
What is the difference between a fragment and a query parameter?
A query parameter, like `?color=red`, is sent as part of the URL request and can define a distinct server response. A fragment, like `#reviews`, is usually handled in the browser and not sent to the server in the same way. Because search engines can request query-based URLs directly, they can often crawl and evaluate them as separate pages. Fragments, by contrast, are usually not dependable unique page identifiers for indexing.
Can single-page applications use fragments and still rank?
They can rank if the important content is also exposed through crawlable URLs, but relying only on fragment-based routing is usually fragile. Modern single-page applications are generally better off using clean URLs with the History API, supported by server-side rendering, static generation, or other methods that expose the content clearly. If a SPA uses `/#/route` patterns for key pages, search visibility may be much weaker than expected.
Did Google ever support hashbang URLs for crawling?
Historically, Google supported an AJAX crawling approach using `#!` and `_escaped_fragment_` for some JavaScript applications. However, Google later deprecated that system and recommended building sites with progressive enhancement and standard URLs instead. That old workflow should not be treated as current best practice. If your architecture still depends on hashbang conventions for discoverability, it is a strong sign that modernization is needed.
How can I test whether fragment-based pages are being indexed?
Start with Google Search Console and inspect the base URL and any supposed fragment states. Then crawl the site with a tool like Screaming Frog, first in standard mode and then in JavaScript rendering mode, to compare actual discovered URLs. You can also review server logs and XML sitemaps. If all important views collapse into one base URL and no separate non-fragment URLs exist, the fragment states are probably not independently indexable.
Is using # for jump links or table-of-contents links okay?
Yes. That is the classic and appropriate use of fragments. A link like `/guide#installation` helps users jump to a section within a page and does not usually create SEO issues by itself. The page `/guide` remains the indexed document, and the fragment just improves navigation. Problems arise only when teams expect each fragment state to function like a separate landing page in search.
What should I do if my ecommerce filters use fragments?
First, separate valuable filter combinations from purely navigational ones. If certain filtered states have search demand and should rank, create real crawlable URLs for them and define canonical, internal linking, and indexing rules clearly. If the filters are only for user convenience, keeping them as non-indexable UI states may be fine. The mistake is assuming fragment-based filter URLs will naturally become indexable category pages.

Self-Check

If a page state needs to rank on its own, does it have a unique non-fragment URL?

Can a crawler discover your important pages through normal href links without relying on hash changes?

Do your XML sitemaps, canonicals, and internal links all point to the same clean URL structure?

Are your fragments being used only for in-page navigation or UI state, rather than for critical SEO content?

Have you verified in Google Search Console and a crawler that important JavaScript views exist as actual crawlable URLs?

If your app uses hash routing, do you have a migration plan to History API or server-rendered routes for SEO pages?

Common Mistakes

❌ Assuming each hash state is a separate page

✅ Better approach: Treat `/#/product/1` and `/#/product/2` as states of one document, because the app rendering different content does not make them separate crawlable pages. If each product needs to rank, give it a real route, and verify the crawlable URL count with a crawler instead of estimating from what users see.

❌ Using fragments for revenue-driving category or location pages

✅ Better approach: Keep core SEO pages, such as service pages, city pages, and key product categories, on their own non-fragment URLs. Putting them behind fragments can hide commercially important content from search or consolidate it into one weakly targeted URL. If a page should rank for its own topic, it needs its own destination.

❌ Relying on JavaScript rendering to solve everything

✅ Better approach: Treat rendering and indexing as separate problems. Google can render a lot of JavaScript, but rendering does not mean it will infer separate URLs from hashes. Design crawlability, routing, canonicals, and internal links to reflect the actual indexable pages rather than assuming rendering covers them.

❌ Mixing clean sitemap URLs with hash-based internal navigation

✅ Better approach: Make internal navigation use the same clean URLs you submit in XML sitemaps. If users and crawlers are routed into hash-based app states after the first click, the declared page set and the traversable experience diverge, and search engines may struggle to discover, render, or trust the intended pages consistently.

❌ Canonicalizing everything to one base page

✅ Better approach: Point canonicals at the actual page architecture. If every app state canonicalizes to the same base URL, search engines receive a strong signal that there is only one page to index, which is especially harmful when the team expects fragment variations to rank separately. Canonicals should support meaningful pages, not collapse them into one generic URL.

❌ Treating all fragment use as an SEO issue

✅ Better approach: Judge each fragment by what it does. Jump links, FAQ table-of-contents links, and in-page navigation are often perfectly reasonable; removing useful on-page anchors out of fear is as much a mistake as misusing fragments for indexable content. The key distinction is whether the fragment navigates within a page or tries to define a separate one.
