TL;DR: Google's AI Overviews lower organic CTR, but the loss is concentrated on a specific subset of queries (long, question-shaped, informational) and on pages that rank highly without being cited inside the AI panel itself. Getting cited inside the panel recovers roughly half the lost CTR, which is why the citation slot is best understood as the new featured snippet, not as a lost surface. This piece walks through the data, the four optimization tactics with measurable evidence behind them, and a portfolio triage you can run this week.
The headline number from Pew Research's March 2025 study is that organic CTR drops from 15 percent to 8 percent on result pages where a Google AI Overview appears. That number is real, the methodology is solid, and it has been quoted by every SEO publication for a year. I quoted it too in the previous version of this article, and I led the piece with the framing that clicks were halving across the board.
That framing is what I want to revise. The Pew headline averages across query types where AIO appears 8 percent of the time and query types where it appears 53 percent of the time. It averages across pages that get cited inside the AI panel and pages that do not. It averages across branded queries — which the model treats kindly — and pure-informational queries where the model often substitutes its own answer. The headline is true at the level of generality it was reported at. It is not the level of generality at which a portfolio operator should be making decisions in 2026.
Ahrefs ran a segmented study across millions of keywords in 2025 and 2026, and Authoritas ran an enterprise-CTR cut on agency client portfolios. Pulling the consistent shape across both: the loss is concentrated, not spread evenly.

| Query intent | AIO appearance rate | CTR delta when AIO present | Reader implication |
|---|---|---|---|
| Informational, question-shaped | ~53% | -34% to -64% | Biggest exposure. Rewrite candidate. |
| Commercial / how-to product | ~25% | -12% to -22% | Moderate. Worth optimization. |
| Branded | ~15% | Flat to +18% | Often net positive. Leave alone. |
| Navigational / transactional | ~8% | ~0% | Not an AIO problem; the panel rarely appears. |
> The "AI Overviews kill SEO" headline averages across query types where AIO appears 8 percent of the time and types where it appears more than half the time. Until you weight by intent, the average is meaningless. The CTR loss is real on long question-shaped informational queries. It is barely visible on transactional queries where AIO does not appear in the first place.
>
> — Lily Ray, paraphrased from her 2025–2026 commentary on AI Overview CTR studies
The reader-grade takeaway: before you rewrite anything, pull a query-level export from Google Search Console and bucket your queries into these four groups. The ones you should worry about are the long, question-shaped informational queries. The branded queries are doing fine. If your business runs mostly on transactional intent, this entire conversation might be less urgent than the headlines suggest. The strategic shape of this argument is sketched in our SEO-to-GEO piece; the segmentation here is the empirical version of it.
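The bucketing step can be sketched in a few lines of Python. This is a rough illustration, not a canonical classifier: the brand terms, question words, and transactional terms below are placeholder heuristics you would swap for your own, and the GSC export column names are assumptions to adjust against your actual CSV headers.

```python
import csv
import re

# Placeholder brand vocabulary: replace with your own brand terms.
BRAND_TERMS = {"seojuice"}

QUESTION_WORDS = {"how", "what", "why", "when", "which", "who",
                  "can", "does", "is", "are", "should"}
TRANSACTIONAL = {"login", "pricing", "buy", "download", "signup"}

def bucket(query: str) -> str:
    """Assign a GSC query to one of the four intent buckets."""
    words = re.findall(r"[a-z']+", query.lower())
    if any(w in BRAND_TERMS for w in words):
        return "branded"
    if any(w in TRANSACTIONAL for w in words):
        return "navigational/transactional"
    # Long or question-shaped queries are the high-exposure group.
    if words and (words[0] in QUESTION_WORDS or len(words) >= 5):
        return "informational"
    return "commercial"

def bucket_export(path: str) -> dict:
    """Tally impressions per bucket from a GSC query export CSV.

    Column names ("Query", "Impressions") are assumed; adjust to
    whatever headers your export actually uses.
    """
    totals: dict = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            b = bucket(row["Query"])
            totals[b] = totals.get(b, 0) + int(row["Impressions"])
    return totals
```

Run `bucket_export` on your export and compare the impression share per bucket against the appearance rates in the table: a portfolio that is mostly "informational" by impressions has the most exposure to rewrite.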
One reason the "clicks are halving" framing feels overwhelming is that it treats the AI panel as a single black box that ate your traffic. It is not a black box. There are four distinct positions inside an AIO panel that a publisher can compete for, and the optimization tactics behave differently for each.

Position one: the answer text source. Google's model paraphrases content from one or two pages into the generated paragraph at the top of the panel. Being that source means your text shapes the answer the reader sees, even if your citation card is buried below.
Position two: the citation card. A small named card, usually below or next to the answer text, links to the source page. This is the surface that produces the trickle of click-throughs Pew measured at roughly one percent. It is also the surface that buys you the brand-association lift Authoritas measured on cited pages.
Position three: the follow-up chip. The panel often suggests two or three related questions a reader might want next. If the model pulls a related question that your content already answers cleanly, you get a second shot at being cited when the reader clicks the chip.
Position four: the in-copy brand mention. Sometimes the generated answer text names your brand inside the paragraph without producing a citation card. There is no link, but the brand association is there, and on branded follow-up queries this position correlates with the lift Ahrefs observed on branded query CTR.
Here is the thesis of this piece, and I want to state it plainly because the rest of the article hangs on it. The citation slot inside an AI Overview is the next iteration of the featured-snippet skill. The mechanics are the same: a generative surface paraphrases content from a small number of pages, surfaces a named link to one or two of them, and rewards the publishers who structure their content for that paraphrase. The interface changed; the optimization muscle is the same one.
Authoritas's 2025–2026 enterprise CTR cut showed that pages ranking at position one without being cited inside the AIO panel lost the most CTR, somewhere between 19 and 23 percentage points in absolute terms, depending on the cut. Pages ranking at position one and also being cited inside the panel lost about half of that, roughly 9 to 13 points. The citation slot is doing the recovery work. Not winning it is the worst of both worlds: you ranked, you held the position, and you still lost the click because the reader's eyes never made it past the AI panel.
This shape will be familiar to anyone who worked on featured-snippet optimization in 2017 and 2018. Same pattern: a new surface above the blue links, dominated by a small number of structurally-friendly pages, with disproportionate reward for understanding the paraphrase mechanics. The featured-snippet optimization playbook we wrote a few years back covers the foundations, and the tactics in the next section are the AIO-specific extensions of the same skill.
I want to be specific about what the data supports. Twelve-tactic listicles are unhelpful when half the tactics are vibes-grade and half have real evidence behind them. Here are the four moves with the strongest evidence-to-effort ratio in mid-2026, ranked.

1. Direct-answer block at the top of the page (HIGH evidence, LOW effort). A 40–60 word paragraph immediately under the H1, written as a direct answer to the page's primary query, with the answer in the first sentence rather than buried. This is the single tactic with the most consistent correlation to citation-card appearance across Search Engine Land's tracking and Ahrefs' segment data. It works because the model is looking for a paraphrasable answer block, and giving it one that is also accurate buys you the citation. Effort is genuinely low: most pages already have an intro paragraph that can be rewritten in 20 minutes.
2. FAQ schema on question-shaped queries (HIGH evidence, MEDIUM effort). Pages with proper FAQ schema, the JSON-LD flavor using `Question` and `acceptedAnswer` types, show higher citation rates on the long question-shaped queries that trigger AIO most often. The mechanism is straightforward: the model has a structured signal that this page answers a specific question, and the follow-up chips often pull from FAQ schema directly. Medium effort because you need to write the FAQ block, validate the JSON-LD, and confirm it renders server-side. Client-side-rendered JSON-LD is missed in some crawler passes, so render it server-side or hardcode it in the template.
3. Unblock the AI crawlers in robots.txt and your CDN (HIGH evidence, LOW effort, prerequisite). If GPTBot, OAI-SearchBot, PerplexityBot, or Google-Extended is blocked at the `robots.txt` level or via a Cloudflare "block AI scrapers" toggle, the model cannot read your content and cannot cite you. This is a hygiene precondition, not really a tactic. Cloudflare's default AI-bot-block setting got switched on for a lot of sites in 2024 and 2025; check yours. Same for the robots-meta tag on individual templates.
4. Brand-mention work inside the first 200 words (MEDIUM evidence, MEDIUM effort). The Ahrefs branded-query lift correlates with pages that mention the brand by name inside the first 200 words, in a context that matches the query's intent. On branded follow-up queries, where AIO is shown but the reader is searching for your brand specifically, the AIO impact runs as a tailwind rather than a headwind, but only if your brand actually appears in citation-ready position on the page. For branded comparison queries this is the highest-payoff move. The brand-citation playbook covers the cross-engine version of this.
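For tactic three, the check is mechanical enough to script. The sketch below is a deliberately simplified robots.txt reader, not a spec-complete parser: it treats a blanket `Disallow: /` in a matching user-agent group as a full block, matches the wildcard `*` group for every bot, and ignores `Allow` overrides and path precedence that the real robots.txt rules define.

```python
AI_BOTS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str, bots=AI_BOTS) -> list:
    """Return the AI crawlers that this robots.txt fully blocks.

    Simplified semantics: consecutive User-agent lines share the
    Disallow rules that follow them; only a bare "Disallow: /" counts
    as a block; Allow overrides are not modeled.
    """
    groups = []                       # (set_of_agents, list_of_disallows)
    agents, rules, expecting_agents = set(), [], True
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line or ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if not expecting_agents:  # a new group starts here
                groups.append((agents, rules))
                agents, rules = set(), []
            agents.add(value.lower())
            expecting_agents = True
        elif field == "disallow":
            rules.append(value)
            expecting_agents = False
    if agents:
        groups.append((agents, rules))
    blocked = []
    for bot in bots:
        for agents, rules in groups:
            if (bot.lower() in agents or "*" in agents) and "/" in rules:
                blocked.append(bot)
                break
    return blocked
```

Point it at your live `robots.txt` body; a non-empty result means the hygiene precondition in tactic three is failing before any rewriting can pay off. It will not see CDN-level blocks, so check the Cloudflare toggle separately.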
Tactics with weaker evidence that I am not including here, in case you're wondering: schema markup soup, the kitchen-sink-every-schema-type approach; AI-generated meta descriptions; "EEAT signals" generally without a specific mechanism; and structured-data redundancy. None of these correlate cleanly with citation appearance in the data I've seen. They might help; they might be cargo cult. Either way they are not where I'd start.
If you run 200 pages, you are not rewriting all of them. The optimization cost is the bottleneck, not the tactical knowledge. The triage I run on a portfolio looks like this.
Open the GSC query export and filter for pages where impressions held steady but CTR dropped during the period when AIO appearance broadened on your query set, Q3 2025 onward for most portfolios. Bucket the affected pages into three groups based on whether AIO currently appears for the queries those pages rank for and whether the page is cited inside the panel when AIO does appear.
| Bucket | Signal | Action |
|---|---|---|
| AIO present, page cited | CTR dropped a little but stabilized | Hold. The page is doing what it can. |
| AIO present, page NOT cited | CTR dropped a lot and stayed down | Rewrite priority 1. Apply the four tactics. |
| AIO not present | CTR stable | Leave alone. This is not your AIO problem. |
The honest version of this advice: stop trying to rewrite every page on a portfolio for AI Overviews. Most of the queries you care about do not trigger an AIO panel at all. The payoff sits in the 20 percent of pages where AIO appears and the page is not cited. Everything else is noise on the optimization budget.
On most portfolios I look at, the "AIO present, page not cited" bucket is between 10 and 25 percent of pages. That bucket absorbs the majority of the CTR loss. Concentrating rewrite effort there is the highest-payoff move, and it scales: one editor can run the four tactics on roughly five pages a day. A 200-page portfolio with 20 percent in the priority bucket is 40 pages, which is eight working days of focused rewrite effort. For the deeper version of this triage, our content-refresh strategy piece covers the broader prioritization framework.
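The triage table can be run as a script once each page is annotated with two booleans: whether AIO appears for its main queries and whether the page is cited when it does. Where those annotations come from is up to you (a rank tracker, manual spot checks); the fields here are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    aio_present: bool   # does an AI Overview show for the page's main queries?
    cited: bool         # is this page cited inside the panel when it appears?

def triage(pages) -> dict:
    """Split a portfolio into the three buckets from the triage table."""
    buckets = {"hold": [], "rewrite": [], "leave": []}
    for p in pages:
        if not p.aio_present:
            buckets["leave"].append(p.url)    # not an AIO problem
        elif p.cited:
            buckets["hold"].append(p.url)     # page is doing what it can
        else:
            buckets["rewrite"].append(p.url)  # priority 1: apply the four tactics
    return buckets

def rewrite_days(buckets: dict, pages_per_day: int = 5) -> int:
    """Effort estimate at one editor running the four tactics per page."""
    return math.ceil(len(buckets["rewrite"]) / pages_per_day)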
I want to be honest about the residual loss. The four tactics above recover roughly half the CTR loss on pages that are rewrite candidates. They do not recover all of it. Two situations where the playbook hits its limit:
First, pure-informational queries on broad topics where Wikipedia, Reddit, and YouTube dominate the citation slots. Marie Haynes has called these the "new top three" of any informational SERP, and she is roughly right for the broad-topic case. A small publisher writing a generalist explainer on a popular topic is unlikely to displace Wikipedia in the citation slot, no matter how well-structured the answer block is. The honest move is to either skip the topic or write something so specific that the broad-topic citations are not in the same competition lane.
Second, the recovery is partial even when it works. Authoritas's data is clear that citation recovers roughly half the lost CTR, not all of it. The fundamental change is that an AI panel above the blue links absorbs reader attention; even an optimal cited page sees fewer clicks than a pre-AIO position-one page did. The reallocation question — how much investment goes into AIO-aware rewriting versus brand work or off-platform community — is the harder strategic call. Our multi-source brand-citation playbook covers what an AI-first portfolio looks like when you treat AIO as a peer surface rather than as a Google-replacement surface.
Do AI Overviews really halve every page's clicks?

No. The Pew 15-to-8 headline is an aggregate across all SERPs where AIO appeared in March 2025. Ahrefs' segmented cut shows that branded queries are net positive, with CTR up to +18 percent; commercial queries lose 12–22 percent; and pure-informational queries lose 34–64 percent. Your exposure depends entirely on your query mix. Treat "halves clicks" as a worst-case headline for one specific subset of queries, not as a universal forecast.
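The query-mix point can be made concrete with a weighted estimate. The sketch below takes the midpoint of each segment's reported CTR delta, weights it by that segment's AIO appearance rate and by your own share of impressions per segment; the midpoints are my illustrative reading of the published ranges, not numbers from any single study.

```python
# intent: (aio_appearance_rate, midpoint_ctr_delta_when_aio_present)
# Midpoints are illustrative readings of the reported ranges.
SEGMENTS = {
    "informational": (0.53, -0.49),
    "commercial":    (0.25, -0.17),
    "branded":       (0.15, +0.09),
    "transactional": (0.08,  0.00),
}

def expected_ctr_delta(query_mix: dict) -> float:
    """Portfolio-level expected CTR change for a share-of-impressions mix.

    query_mix maps intent -> share of total impressions (summing to 1.0).
    """
    return sum(
        share * SEGMENTS[intent][0] * SEGMENTS[intent][1]
        for intent, share in query_mix.items()
    )
```

An informational-heavy mix like `{"informational": 0.7, "commercial": 0.2, "branded": 0.1}` produces a large negative expected delta, while a transactional-heavy mix sits near zero, which is the whole argument against reading the aggregate headline as your forecast.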
Should I block GPTBot or PerplexityBot to protect my content?

On the citation axis, no. Blocking the AI crawlers removes you from the model's pool of citable sources, which is the only inbound traffic these surfaces currently produce. On the training-data axis it is a policy choice; GPTBot specifically governs training rather than live retrieval, and blocking it has no effect on whether ChatGPT cites you in response to a user query. The most common configuration in 2026 is to allow OAI-SearchBot, PerplexityBot, and Google-Extended, the bot that governs Gemini retrieval, and leave GPTBot as a separate decision based on your training-data stance.
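One way to write that configuration as a robots.txt fragment, with the GPTBot group shown blocked purely as one possible stance on the training-data question:

```txt
# Allow the retrieval crawlers that produce citations
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# GPTBot governs training data, not live retrieval.
# Separate policy decision; blocked here as one option.
User-agent: GPTBot
Disallow: /
```

Remember that a CDN-level "block AI scrapers" toggle sits in front of this file and overrides it, so check both layers.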
Will branded queries save me?

Partially. Ahrefs' segmented data shows branded queries with AIO present often see a small CTR lift rather than a loss, because the model surfaces brand pages as canonical answers. If your traffic is dominated by branded queries, your AIO exposure is lower and the net effect may be slightly positive. If your traffic is dominated by long-tail informational queries, branded query strength helps cross-engine citation rates over time but does not offset the immediate CTR loss on the informational pages.
Is FAQ schema enough on its own?

It helps a lot on question-shaped queries but is not sufficient as a standalone tactic. The strongest correlation is FAQ schema plus a direct-answer block plus brand mentions in citation-ready position. FAQ schema alone often moves a page from "not cited" to "occasionally cited," not from "not cited" to "consistently cited." It is the second tactic on the list, not the first, for that reason. If you need the mechanism deep-dive, our SERP snippet-indexing piece covers how structured-data signals flow into the citation pipeline.
How do I tell whether my CTR drop is AIO or a regular ranking shift?

Match impressions against CTR by query. If impressions held steady, meaning the page is still ranking, and CTR dropped on specific query types, especially long question-shaped ones, AIO is the likely culprit. If impressions also dropped, you are looking at a ranking or quality shift, not pure AIO substitution. The cleanest signal is per-query CTR over a 12-month rolling window: pages where the CTR curve breaks cleanly downward in mid-2024 and stays low are AIO-exposed; pages with gradual decay or sudden cliffs are usually something else.
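That impressions-versus-CTR logic reduces to a small classifier. The thresholds below (15 percent impression tolerance, 20 percent relative CTR drop) are illustrative assumptions, not published cutoffs; tune them against your own query set.

```python
def diagnose(impr_before: float, impr_after: float,
             ctr_before: float, ctr_after: float,
             impr_tolerance: float = 0.15,
             ctr_drop_threshold: float = 0.20) -> str:
    """Rough per-query classifier for a CTR drop.

    Thresholds are illustrative assumptions, not published cutoffs.
    """
    impr_change = (impr_after - impr_before) / impr_before
    ctr_change = (ctr_after - ctr_before) / ctr_before
    if impr_change < -impr_tolerance:
        return "ranking/quality shift"     # impressions fell too
    if ctr_change < -ctr_drop_threshold:
        return "likely AIO substitution"   # still ranking, fewer clicks
    return "no clear AIO signal"
```

Run it per query over before/after windows from the GSC export, then eyeball the "likely AIO substitution" queries against live SERPs to confirm a panel actually appears.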
If you only have an afternoon: pull your GSC query export, sort by impressions descending, and identify the top 20 informational query-page combinations where CTR dropped sharply since mid-2024. Run the four tactics on those pages. That is the highest-payoff move available to most portfolios in 2026, and it will tell you within four to six weeks whether the playbook works on your content shape.
If you want to monitor citation share across engines without standing up your own scraper, the SEOJuice dashboard tracks per-engine citation counts. There is no email gate on the basic view; use it or do not. The data is the same either way, and the decision is yours to make against your portfolio's actual numbers rather than against the headline number you saw on a tech blog last quarter.
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do AI Overviews really halve every page's clicks?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. The Pew 15-to-8 headline is an aggregate across all SERPs where AIO appeared in March 2025. Ahrefs' segmented cut shows that branded queries are net positive (CTR up to +18 percent), commercial queries lose 12-22 percent, and pure-informational queries lose 34-64 percent. Your exposure depends entirely on your query mix."
      }
    },
    {
      "@type": "Question",
      "name": "Should I block GPTBot or PerplexityBot to protect my content?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "On the citation axis, no. Blocking the AI crawlers removes you from the model's pool of citable sources, which is the only inbound traffic these surfaces currently produce. The most common configuration in 2026 is to allow OAI-SearchBot, PerplexityBot, and Google-Extended."
      }
    },
    {
      "@type": "Question",
      "name": "Will branded queries save me?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Partially. Ahrefs' segmented data shows branded queries with AIO present often see a small CTR lift rather than a loss, because the model surfaces brand pages as canonical answers. If your traffic is dominated by branded queries, your AIO exposure is lower and the net effect may be slightly positive."
      }
    },
    {
      "@type": "Question",
      "name": "Is FAQ schema enough on its own?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "It helps a lot on question-shaped queries but is not sufficient as a standalone tactic. The strongest correlation is FAQ schema plus a direct-answer block plus brand mentions in citation-ready position. FAQ schema alone often moves a page from 'not cited' to 'occasionally cited,' not from 'not cited' to 'consistently cited.'"
      }
    },
    {
      "@type": "Question",
      "name": "How do I tell whether my CTR drop is AIO or a regular ranking shift?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Match impressions against CTR by query. If impressions held steady (the page is still ranking) and CTR dropped on specific query types (especially long question-shaped queries), AIO is the likely culprit. If impressions also dropped, you are looking at a ranking or quality shift, not pure AIO substitution."
      }
    }
  ]
}
</script>