A practical way to quantify how much template-driven duplication is suppressing crawl efficiency, keyword targeting, and page-level differentiation.
Template Uniqueness Score measures how much of a page template is actually unique across URLs instead of repeated boilerplate. It matters because low-uniqueness templates waste crawl budget, blur relevance signals, and make it harder for individual pages to rank on their own.
Template Uniqueness Score (TUS) is a working metric for estimating how much of a page is unique versus repeated template chrome. It matters on large sites because 10,000 URLs built from the same shell do not give Google 10,000 distinct ranking assets.
At its simplest, TUS is the percentage of indexable page content that changes meaningfully across URLs sharing a template. That usually includes body copy, product specs, reviews, images, internal links in the main content area, and sometimes structured data. It excludes the obvious repeat offenders: global nav, footer, faceted blocks, cookie banners, related widgets, and boilerplate legal copy.
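Separating indexable content from template chrome can be roughed out in code. This is a minimal stdlib sketch, not a production extractor: the tag list below is an assumption, and real templates usually need site-specific selectors for things like faceted blocks and cookie banners.

```python
from html.parser import HTMLParser

# Assumed boilerplate containers -- adjust per site template.
BOILERPLATE_TAGS = {"nav", "footer", "header", "aside", "script", "style"}

class MainContentExtractor(HTMLParser):
    """Collects visible text that is not nested inside boilerplate tags."""

    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth inside boilerplate containers
        self.parts = []  # text fragments kept as indexable content

    def handle_starttag(self, tag, attrs):
        if tag in BOILERPLATE_TAGS:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in BOILERPLATE_TAGS and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.parts.append(data.strip())

def indexable_text(html: str) -> str:
    """Return the page's text with common boilerplate containers stripped."""
    parser = MainContentExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

html = ("<html><body><nav>Home Shop Cart</nav>"
        "<main>Red running shoes with breathable mesh</main>"
        "<footer>Legal copy</footer></body></html>")
print(indexable_text(html))  # Red running shoes with breathable mesh
```

The point is not the parser itself but the separation step: TUS is only meaningful if you measure it on what search engines would treat as indexable content, not on raw HTML.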
A rough formula: TUS = (unique indexable words or bytes ÷ total indexable words or bytes) × 100. If a product template has 2,000 words of visible indexable content and 900 of them are repeated across every page, 1,100 words are unique and the TUS is 55%. Not great.
TUS is not a Google metric. That is the first caveat. You will not find it in Google Search Console, and Google does not score pages this way internally. Still, it is useful because it gives teams a hard number for a real problem: near-duplicate templates that look different to stakeholders but not different enough to search engines.
In practice, low-TUS templates often correlate with weak long-tail coverage, index bloat, and poor crawl efficiency. Screaming Frog plus a DOM or text diff can expose this fast. On enterprise ecommerce, I usually treat below 50% as a red flag, 50-65% as workable but thin, and 65%+ as a healthier target. Not universal. But directionally reliable.
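One way to approximate that cross-page diff at scale, assuming you have already crawled and extracted visible text per URL: shingle each page, treat shingles that appear on most pages as template boilerplate, and score the remainder. The shingle size and the 80% boilerplate threshold below are illustrative defaults, not standards.

```python
from collections import Counter

def shingles(text: str, k: int = 3) -> set:
    """Word k-grams of a page's visible text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k])
            for i in range(max(len(words) - k + 1, 1))}

def tus_per_page(pages: dict, k: int = 3, boiler_share: float = 0.8) -> dict:
    """Approximate TUS per URL: a shingle counts as boilerplate when it
    appears on at least boiler_share of the sampled pages."""
    page_shingles = {url: shingles(text, k) for url, text in pages.items()}
    counts = Counter(s for sh in page_shingles.values() for s in sh)
    threshold = boiler_share * len(pages)
    scores = {}
    for url, sh in page_shingles.items():
        if not sh:
            scores[url] = 0.0
            continue
        unique = [s for s in sh if counts[s] < threshold]
        scores[url] = len(unique) / len(sh) * 100
    return scores

pages = {
    "/p1": "welcome to our store free shipping on all orders "
           "red running shoes with breathable mesh",
    "/p2": "welcome to our store free shipping on all orders "
           "blue ceramic mug holds twelve ounces",
    "/p3": "welcome to our store free shipping on all orders",
}
for url, score in tus_per_page(pages).items():
    print(url, round(score, 1))
```

A page that is nothing but shared template text (like `/p3` above) scores 0, which is exactly the kind of URL the red-flag threshold is meant to surface.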
Ahrefs and Semrush will not calculate TUS directly, but they help validate the outcome. If a template rewrite raises TUS and pages start ranking for 20-30% more non-brand queries, that is the business proof. GSC is where you confirm impressions, clicks, and page-level query spread. Surfer SEO is less useful here unless you are rewriting on-page modules at scale.
More unique content does not automatically mean better rankings. I have seen teams inflate TUS by adding 400 words of useless copy to category pages. The score improved. Performance did not. Google's John Mueller has repeatedly said boilerplate is normal and duplication is not inherently a penalty issue; the problem is when pages are not distinct enough to justify separate indexing and ranking.
So use TUS as a diagnostic, not a KPI in isolation. If the template is meant to be standardized, like location pages or product variants, the fix is not always “add more text.” Sometimes it is consolidation, canonicalization, or simply noindexing low-value URLs.