Growth · Intermediate

Model Impression Share

A forecasting metric that converts rankings, search volume, and CTR assumptions into an estimated share of organic visibility.

Updated Apr 04, 2026

Quick Definition

Model Impression Share is an estimated visibility metric: the percentage of available organic impressions your site is likely to capture across a tracked keyword set based on current rankings and an assumed CTR curve. It matters because it turns rank tracking into market share math, which is far easier to use for forecasting, prioritization, and defending SEO budget.

Model Impression Share (MIS) estimates how much of the available organic visibility you capture across a keyword set. In plain terms, it answers a better question than average position: what share of the market are we actually getting?

The usual model is simple enough: search volume or impression potential multiplied by expected CTR at your current rank, then divided by total available impressions in the set. If your MIS is 22% on a 300,000-impression topic cluster, you are modeling that roughly 78% of the opportunity sits with competitors, SERP features, or both.

Why SEO teams use it

Average rank is weak on its own. A move from position 8 to 5 on a 20-search keyword is noise; the same move on a 40,000-search keyword is budget-worthy. MIS fixes that by weighting rankings by opportunity.

  • Forecasting: If a cluster moves from 18% to 26% MIS, you can estimate incremental clicks and revenue with a straight face.
  • Prioritization: Keywords sitting in positions 4-10 often produce the biggest MIS gains per content update or link acquisition sprint.
  • Competitive reporting: It is easier to explain “we hold 31% of category visibility” than to dump 500 keyword positions into a slide.
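The forecasting bullet above is simple arithmetic: because MIS is click capture expressed as a share of total impression potential, an MIS delta times the cluster's impression potential gives an estimated click lift. A minimal sketch, using the hypothetical 18% → 26% move from the example (all numbers are illustrative):

```python
# Hypothetical example: translate an MIS lift into estimated incremental clicks.
cluster_impressions = 300_000  # monthly impression potential of the cluster (assumed)
mis_before = 0.18
mis_after = 0.26

# MIS is modeled click capture over total potential, so the delta scales directly.
incremental_clicks = cluster_impressions * (mis_after - mis_before)
print(f"Estimated incremental monthly clicks: {incremental_clicks:,.0f}")
```

Pair this with a conversion rate and average order value to get the revenue line leadership actually asks for.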

How to calculate it properly

Most teams build MIS from rank tracking data in Ahrefs, Semrush, STAT, or a SERP API, then calibrate with Google Search Console. Screaming Frog is useful here too, not for the model itself, but for mapping keywords to URLs and spotting cannibalization that distorts the output.

A practical formula looks like this:

MIS = sum(keyword impression potential × expected CTR at current rank) / sum(keyword impression potential)
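The formula above can be sketched in a few lines. This is a minimal illustration, not a production model: the keyword volumes and the CTR-by-rank values are made-up assumptions, and a real implementation would use your own calibrated curve.

```python
# Illustrative CTR curve; replace with one fitted from your own GSC data.
CTR_BY_RANK = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
               6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def expected_ctr(rank: int) -> float:
    # Anything past page one gets a small floor CTR (assumption).
    return CTR_BY_RANK.get(rank, 0.005)

def model_impression_share(keywords: list[dict]) -> float:
    """keywords: [{'volume': ..., 'rank': ...}, ...] for one tracked cluster."""
    captured = sum(kw["volume"] * expected_ctr(kw["rank"]) for kw in keywords)
    available = sum(kw["volume"] for kw in keywords)
    return captured / available if available else 0.0

cluster = [
    {"volume": 40_000, "rank": 3},
    {"volume": 12_000, "rank": 7},
    {"volume": 5_000, "rank": 15},
]
print(f"MIS: {model_impression_share(cluster):.1%}")
```

Note that the high-volume keyword dominates the result, which is exactly the opportunity weighting that makes MIS more useful than average position.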

Use your own CTR curve if possible. GSC query and page data is usually the best starting point because generic CTR studies age badly. A 2022 curve is not reliable in a 2026 SERP full of ads, AI Overviews, video packs, and People Also Ask.
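Fitting your own curve from GSC is mostly a group-by: bucket queries by rounded average position, then divide total clicks by total impressions per bucket. A sketch with fabricated rows standing in for a GSC export (query, avg position, impressions, clicks):

```python
from collections import defaultdict

# Stand-in for a GSC Search Analytics export; values are invented.
gsc_rows = [
    ("keyword a", 1.2, 9_000, 2_400),
    ("keyword b", 2.8, 6_000, 700),
    ("keyword c", 3.1, 4_000, 380),
    ("keyword d", 1.4, 2_000, 520),
]

totals = defaultdict(lambda: [0, 0])  # rank -> [impressions, clicks]
for _query, position, impressions, clicks in gsc_rows:
    rank = round(position)
    totals[rank][0] += impressions
    totals[rank][1] += clicks

ctr_curve = {rank: clicks / imps for rank, (imps, clicks) in sorted(totals.items())}
for rank, ctr in ctr_curve.items():
    print(f"rank {rank}: {ctr:.1%}")
```

In practice you would also segment by device, intent, and SERP feature presence before trusting any bucket, and drop buckets with too few impressions.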

Where MIS breaks down

This metric is only as good as its assumptions. That is the caveat people skip.

  • CTR curves are unstable: Brand bias, SERP features, device mix, and query intent can wreck a blended model.
  • Search volume is approximate: Ahrefs, Semrush, and Moz all model volume differently, and low-volume terms are often wrong by a lot.
  • Rank tracking is not reality: Personalization, localization, and volatile SERPs mean your “position 3” may not be what users actually see.

Google's John Mueller has repeatedly said rankings are not fixed, universal positions, and that matters here. MIS is a directional planning metric, not an accounting metric. Treat it like a forecast model, not ground truth.

Best use cases

MIS works best for non-branded topic clusters, category-level reporting, and quarterly planning. It is especially useful when you need to compare content hubs, countries, or product lines on the same scale.

It is less useful for tiny keyword sets, news-driven SERPs, or anything dominated by SERP features that steal clicks. If AI Overviews suppress organic CTR by 15-30% for a query class, your old MIS model will overstate opportunity unless you adjust for that explicitly.
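One way to make that adjustment explicit is a per-query suppression factor applied to expected CTR when an AI Overview (or similar click-stealing feature) is present. The 25% factor below is an assumption for illustration, not a measured benchmark; calibrate it from your own GSC data for affected query classes.

```python
# Hypothetical adjustment: discount expected CTR where an AI Overview appears.
AI_OVERVIEW_SUPPRESSION = 0.25  # assumed; fit this from your own data

def adjusted_ctr(base_ctr: float, has_ai_overview: bool) -> float:
    if has_ai_overview:
        return base_ctr * (1 - AI_OVERVIEW_SUPPRESSION)
    return base_ctr

# A rank-3 keyword at an assumed 10% base CTR loses a quarter of its
# modeled clicks when an AI Overview sits above the organic results.
print(f"{adjusted_ctr(0.10, True):.3f}")
```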

Bottom line: MIS is one of the better growth metrics in SEO because it connects rankings to market share. Just do not pretend the model is cleaner than the data feeding it.

Frequently Asked Questions

How is Model Impression Share different from Share of Voice?
They are close, and many teams use the terms loosely. MIS usually emphasizes estimated impression capture from rankings and CTR assumptions, while Share of Voice in tools like Semrush or Ahrefs may use their own proprietary visibility formulas. The distinction matters when you report numbers to leadership, because the methodology changes the output.
What data sources should I use to build MIS?
Use rank data from Ahrefs, Semrush, STAT, or a SERP API, then calibrate with Google Search Console impressions and clicks. Search volume can come from Ahrefs, Semrush, or Moz, but pick one source and stay consistent. Mixing vendors mid-quarter makes trend lines messy.
Should branded keywords be included in MIS?
Usually no, at least not in the main growth view. Branded terms inflate MIS and can hide weak non-branded performance because branded CTR is abnormally high and rankings are often stable. Keep branded and non-branded MIS in separate cuts.
How often should MIS be updated?
Weekly is enough for most B2B and mid-volume programs. Daily updates make sense for ecommerce, publishers, or volatile SERPs where rankings move fast. Monthly is too slow if you want MIS to drive prioritization.
Can MIS be trusted for traffic forecasting?
Trusted, yes; taken literally, no. It is useful for directional forecasting and scenario planning, especially when paired with GSC baselines and conversion data. It gets weaker when CTR shifts because of ads, AI Overviews, or heavy SERP feature changes.
What is a good MIS target?
There is no universal benchmark because keyword sets and SERP conditions vary too much. In practice, moving a non-branded cluster from 15% to 25% MIS is often meaningful, while 40%+ in competitive categories usually requires top-3 rankings across a large share of terms. Focus on delta and business impact, not vanity thresholds.

Self-Check

Are we using a CTR curve based on our own GSC data, or a generic study that ignores current SERP features?

Have we separated branded and non-branded keywords so MIS reflects real growth opportunity?

Which keyword clusters show the highest projected MIS gain from moving positions 4-10 into the top 3?

Are we treating MIS as a forecast model, or mistakenly presenting it as exact impression share?

Common Mistakes

❌ Using one blended CTR curve for all queries, despite obvious differences by brand, device, intent, and SERP feature mix

❌ Combining search volume from one tool with rank data from another without checking methodology drift

❌ Reporting MIS at site level only, which hides weak categories and makes prioritization useless

❌ Treating MIS gains as guaranteed traffic gains even when AI Overviews or ads are suppressing organic clicks

