Optimizing for Perplexity, ChatGPT Search, and Google AI Mode in 2026

Vadim Kravcenko
Jul 14, 2025 · 12 min read

In 2026, "optimize for AI search" is a sentence that means three different things, and most publishers are writing a single playbook that mostly fits one engine by accident. I spent the back half of 2025 trying to apply one approach across Perplexity, ChatGPT Search, and Google AI Mode, and the citation share dashboards came back with three different result patterns. The pages that won in Perplexity were not the same pages that won in ChatGPT Search, and Google AI Mode rewarded an almost separate kind of work.

On mindnow we publish for B2B SaaS clients across all three engines, and on vadimkravcenko.com I run the same experiments on my own writing. seojuice.io tracks the citation share data behind both. This is not a foundational primer on AEO or GEO (those exist on this blog already); it's the per-engine operational playbook I wish I had at the start of 2025.

The shape of the work is roughly 70% shared classic SEO and 30% engine-specific. The shared 70% is where most teams should spend their first month: schema, topical depth, internal linking, brand co-occurrence, freshness. The 30% is where this article earns its keep. Smaller engines (Claude Web Search, Brave AI, You.com) are real but not yet big enough to allocate against directly; cover the big three first.

TL;DR:

  • "AI search" isn't one product. Perplexity, ChatGPT Search, and Google AI Mode reward different work, and the single-playbook content circulating in 2026 mostly serves the broadest engine (Google) by accident.
  • About 70% of the work that drives AI citations is the same classic SEO you already do: topical depth, schema, internal links, brand co-occurrence. The remaining 30% is engine-specific, and that 30% is the part most publishers misallocate.
  • A useful 30-day plan picks one engine. Do that one well, ship the shared work, then layer the next engine in month two. Trying all three at once produces three half-built playbooks.
[Image: side-by-side matrix comparing Perplexity, ChatGPT Search, and Google AI Mode across discovery, ranking, citation behavior, and the key publisher action for each engine.]
The three engines at a glance: distinct crawlers, distinct ranking signals, distinct citation behavior. The shared 70% feeds all three; the 30% on each row is where the engine-specific work lives.

The three engines, side by side

Perplexity is a standalone answer engine. It crawls the open web through PerplexityBot, surfaces 4-8 inline citations per answer, and ranks heavily on freshness and on whether the source page literally answers the query under a clear heading. Answer Engine Optimization grew up around products like this, and Generative Engine Optimization is the broader umbrella for the cross-engine work.

ChatGPT Search is the in-product search experience inside ChatGPT. It crawls via OAI-SearchBot and falls back on the Bing index when its own retrieval comes up short. Citations are scarcer (1-3 per answer) but each one carries more weight, because being included at all is harder. Brand authority on the topic, meaning whether the model "knows" your brand from training and retrieval data together, matters more here than on Perplexity.

Google AI Mode is the full AI-search experience that grew out of AI Overviews and now sits over classic Google for a growing share of queries. It uses standard Googlebot; no separate user agent. The candidacy pool is drawn from the top ~20 of classic Google, then re-scored by an extraction model.

Attribute            | Perplexity                   | ChatGPT Search                       | Google AI Mode
Discovery            | PerplexityBot, own index     | OAI-SearchBot + Bing fallback        | Googlebot (no separate agent)
Ranking signal       | Literal Q&A match, freshness | Bing rank + brand authority on topic | Top ~20 of classic Google, then extraction
Citations per answer | 4-8 inline                   | 1-3, weighty                         | Tucked in "Sources" expandable
Action that moves it | Answer-shaped headings, allow the bot, Reddit/YouTube presence | Bing index health, brand co-occurrence on the topic | Classic SEO into top 20, schema for extraction

Smaller engines (Claude Web Search, Brave AI, You.com) are not absent from the picture, but they don't yet drive enough referral traffic to allocate against directly. The shared 70% covers them by spillover.

The 70% that's shared with classic SEO

The shared work is the lead, not the throwaway. Most readers will get more from improving the work that feeds all three engines than from chasing any engine-specific tactic. Skipping this section to jump to "what to do for Perplexity" is what produces three half-built playbooks.

Topical depth. Pages need to demonstrate authority on the topic, not just answer the immediate query. AI engines downweight thin "answer-only" pages from sites without surrounding topical proof. An 800-word "what is X" piece on a site with nothing else on X loses citations to a 1,500-word article on a site with a content cluster around X.

Schema markup. FAQPage, HowTo, Article, and Organization schemas help all three engines extract content cleanly. Schema is not a magic switch, but it removes friction. The lift is real and the cost is small.
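For reference, a minimal Article block with a nested Organization publisher looks roughly like this; the headline, dates, names, and URL are placeholders for illustration, not values from any real site:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What is X and when should you use it?",
  "datePublished": "2025-03-01",
  "dateModified": "2026-01-15",
  "author": { "@type": "Person", "name": "Jane Author" },
  "publisher": { "@type": "Organization", "name": "Example Co", "url": "https://example.com" }
}
</script>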

Internal linking. Hub and spoke structure helps AI engines identify which page on your site is the canonical answer for a topic. A page with strong internal link signals from related content is more often selected as the citation, even when a thinner external page exists.

Brand co-occurrence. Mentions on Reddit, YouTube transcripts, LinkedIn posts, and editorial coverage build a topical association between your brand and the subject. All three engines use this signal in some form, and the source pool for it is the open web. The SEO-to-GEO bridge piece covers the mental model in depth.

Freshness. Pages updated in the last 6-12 months earn more AI citations than older pages, even when the older page outranks them in classic Google. The cheapest signal to ship: a content-refresh cadence on the 20-30 highest-value pages, with real edits, not date-bumping.

"The work that earned you classic SEO results is the same work that gets you cited in AI. The temptation is to treat AI search as a separate program. It isn't." — Lily Ray

[Image: donut chart splitting the work into the 70% shared with classic SEO (schema, topical depth, internal linking, brand co-occurrence, freshness) and the 30% that's engine-specific.]
Seventy percent of the work that drives AI citations is the classic SEO already in your roadmap. The thirty percent is the engine-specific layer.

Perplexity in practical terms

Perplexity rewards answer-shaped content more directly than any other engine. If a page has an H2 phrased as a question and a first paragraph that answers it cleanly, the page becomes citeable in a way the same content buried in narrative does not.

The first move is mechanical: confirm PerplexityBot can crawl. Many sites still block it by default, often because a blanket "disallow new bots" rule was added in 2024 and never revisited. Open robots.txt, add an explicit allow for PerplexityBot, and verify with a fetch. The AI crawler playbook covers the user agents and the verification step.
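In robots.txt terms the allow is one explicit group; a minimal sketch, assuming you want the whole site crawlable:

# Explicit group for Perplexity's crawler. A crawler obeys the most specific
# group that names it, so this overrides a blanket "User-agent: *" block.
User-agent: PerplexityBot
Allow: /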

The second is structural. Take five to ten pillar articles that already rank well in classic Google and convert them to a Q&A skeleton: H2 as the question, first paragraph as the direct answer, the rest as elaboration. This is not a rewrite; it's an editorial pass on the headings and the first sentence of each section, and it can usually be done in an afternoon per article.
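A hypothetical before/after of that editorial pass, in plain HTML; the topic and copy are invented for illustration:

<!-- Before: narrative heading, answer buried somewhere in the section -->
<h2>Thinking through onboarding emails</h2>

<!-- After: question-shaped H2, direct answer in the first paragraph -->
<h2>How many onboarding emails should a SaaS product send?</h2>
<p>Most SaaS products do well with three to five onboarding emails over the
first two weeks. The rest of this section covers sequencing and timing.</p>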

The third is third-party. Perplexity draws ~12-15% of its citations from Reddit and ~6-8% from YouTube transcripts, based on independent counts of the source pool. No presence on those platforms means invisibility in that slice of the citation pool. The multisource SEO piece covers how to build that presence without comment-spam tactics.

The fourth is schema. FAQPage schema on the converted Q&A pages gives Perplexity the extraction handles it wants; small lift, small cost, compounds with the structural work.
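A minimal FAQPage block for one converted section might look like this; the question and answer text are placeholders and should mirror the visible on-page copy:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How many onboarding emails should a SaaS product send?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Most SaaS products do well with three to five onboarding emails over the first two weeks."
    }
  }]
}
</script>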

"Perplexity rewards pages that answer the query in the first sentence under the heading. Bury the answer in narrative and you watch a thinner page win the citation." — Aleyda Solis

[Image: annotated mockup of a Perplexity answer showing the inline citation chips, the four to eight sources surfaced per answer, and the page-level features (clear H2 question, direct first-paragraph answer) that drove each citation.]
What a citeable Perplexity source page looks like, mapped from the answer surface down to the page features.

ChatGPT Search and how it differs

ChatGPT Search uses OAI-SearchBot as its crawler. The first thing to do is decouple OAI-SearchBot from GPTBot in your head: they are separate user agents. GPTBot is for training; OAI-SearchBot is for the live retrieval that powers ChatGPT Search. A site can allow one and block the other, and many do. If you want ChatGPT Search citations, allow OAI-SearchBot.
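In robots.txt terms the split looks like this; blocking GPTBot is a separate publisher decision about training data, so treat the second group as one common configuration rather than a recommendation:

# Allow live retrieval for ChatGPT Search
User-agent: OAI-SearchBot
Allow: /

# Optionally block training-data collection (a separate decision)
User-agent: GPTBot
Disallow: /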

Beyond the bot, ChatGPT Search behaves less like Perplexity and more like a blend of Bing and a brand-authority filter. The Bing index is the fallback retrieval layer; pages not indexed in Bing usually aren't candidates. Bing Webmaster Tools is the right submission surface for important pages.

Brand authority on the topic is the part without a clean tactical lever. The model has a sense of which brands "belong" to which topics, built from training data and live retrieval together. Building that authority is slow work: editorial coverage, podcast appearances, conference talks, named-author content. There is no shortcut. The broader ChatGPT visibility piece covers the brand-presence side in depth.

Citation behavior differs in the way that matters most for measurement. ChatGPT Search typically shows 1-3 citations per answer where Perplexity shows 4-8. A page that gets cited in ChatGPT Search has cleared a higher bar than the same page getting cited in Perplexity, and the read on whether your work is paying off comes more slowly.

One tactical note: FAQPage schema helps less here than on Perplexity. ChatGPT Search rewards depth and brand authority more than answer-shaped formatting, so a deeply researched 2,000-word piece without FAQ markup will often outperform a thin Q&A page with perfect schema. Substance first, schema second, on this engine.

Google AI Mode and classic Google overlap

Google AI Mode is the easiest of the three to think about for SEO practitioners with classic Google experience. The entry ticket is the same: rank in the top ~20 of classic Google for the query. Without that classic candidacy, AI Mode candidacy is also off the table. There is no separate bot to allow and no separate index to submit to. Standard Googlebot does the work.

The signal is classic SEO extended. E-E-A-T, schema, internal linking, backlinks, page experience: all the work that makes a page rank in classic Google also makes it a candidate for AI Mode. The extraction layer that picks the actual citations rewards pages with clean, lift-friendly content: 60-80 word direct answer paragraphs near the top of relevant sections, FAQ markup where it fits, and clear semantic structure.
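One way to check the lift-friendly formatting at scale is a small audit script that measures the first paragraph under each H2 against the 60-80 word band; a sketch assuming beautifulsoup4 is installed, with thresholds that are this article's heuristic, not a Google specification:

from bs4 import BeautifulSoup

def audit_answer_paragraphs(html, lo=60, hi=80):
    """Flag H2 sections whose first paragraph misses the 60-80 word band."""
    soup = BeautifulSoup(html, "html.parser")
    report = []
    for h2 in soup.find_all("h2"):
        p = h2.find_next("p")  # first paragraph after the heading, in document order
        words = len(p.get_text().split()) if p else 0
        report.append((h2.get_text(strip=True), words, lo <= words <= hi))
    return report

# Usage: for heading, words, ok in audit_answer_paragraphs(open("page.html").read()): ...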

Citation behavior is different in a way that hurts traffic forecasts. Citations are tucked into a "Sources" expandable section, visual prominence is low, and click-through from AI Mode is materially lower than from classic SERPs (studies in late 2025 put the click drop in the 30-50% range for queries that trigger AI Mode). Our coverage of the AI Overviews click drop tracked the same effect; AI Mode extends it.

Rank-tracking dashboards that don't separate AI-Mode-triggered queries from classic queries silently absorb the click loss into the overall traffic number. Track AI Mode CTR separately in Google Search Console, where the data is exposed.

Where to start: the 30-day allocation

The naive plan is to do all three at once, and the result is three half-built playbooks. None of them produces a compounding signal because none of them gets the depth AI citation share rewards. The better plan picks one engine after the shared work is done, and layers the next engine in month two.

Week one is the shared 70%. Schema audit on the 5-10 highest-priority pages. FAQPage markup where it fits. Internal linking cleanup so the hub pages have strong signals from their spokes. A brand-coverage check on the open web: which pages on which sites already name you alongside your topics, and where the gaps are. This work serves all three engines at once.

Weeks two through four pick one engine. Choose by where your audience already is. Reddit or YouTube presence points to Perplexity. Editorial coverage in industry publications points to ChatGPT Search. Strong classic Google rankings point to Google AI Mode, where extraction-friendly formatting is most of the work.

Month two layers the second engine. Don't restart; the shared work compounds. Month three reviews citation share and reallocates. A larger team with dedicated SEO bandwidth can run two engines in parallel from week two, but the staged plan protects against doing everything at once with a constrained team.

[Image: four-week timeline with week 1 as the shared 70% (schema, internal linking, brand coverage), weeks 2-4 on one chosen engine, month 2 layering the second engine, month 3 reallocating.]
A four-week allocation that ships the shared work first, then picks one engine. Month two layers the next engine. Month three reallocates based on actual citation share.

Measurement, briefly

Most of the measurement work belongs in a separate piece (the AI visibility audit methodology covers it in depth), but the short version is worth stating. Track citation share per engine on a fixed query set, weekly, manually. A 20-query check across three engines produces 60 observations a week, which is enough to spot trends and reallocate effort.

For Perplexity, run the 20 queries through the public interface and record which of your pages appear in the citation list. Same query set in ChatGPT's search mode, citations recorded by URL. For Google AI Mode, use Google Search Console's "Search appearance: AI Overview" filter. Track CTR delta vs classic SERPs as a separate column.
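The logging side can stay as simple as a flat CSV; a minimal sketch, with the file path, column layout, and domain invented for illustration:

import csv
from datetime import date

LOG = "citation_log.csv"  # hypothetical path

def log_check(engine, query, cited_urls, our_domain="example.com"):
    """Append one manual observation: did any citation land on our domain?"""
    hit = any(our_domain in url for url in cited_urls)
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), engine, query, hit, ";".join(cited_urls)])

def citation_share(engine):
    """Share of logged queries where one of our pages was cited, per engine."""
    with open(LOG, newline="") as f:
        rows = [r for r in csv.reader(f) if r[1] == engine]
    return sum(r[3] == "True" for r in rows) / len(rows) if rows else 0.0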

The mistake most teams make is trying to measure everything. The point of measurement is a signal to reallocate effort, not coverage.

What I'd skip in the AI SEO chatter

Most "AI SEO" content circulating in 2026 is foundational repackaging. The market has been flooded with "what is AEO" and "what is GEO" pieces since 2024, and the content has converged on the same five tactics: schema, FAQs, answer-shaped content, brand mentions, freshness. The list is fine; the framing is mostly wrong.

Skip anything that calls AEO, GEO, or AISO "the new SEO" without specifics. They're sub-disciplines, not replacements; treating them as wholesale replacements for classic SEO is what produces the abandoned-the-shared-70% mistake.

Skip "Optimize for ChatGPT" guides that don't distinguish ChatGPT Search from the conversational mode. The two products behave differently in retrieval and citation; a guide that conflates them is teaching the wrong tactics.

Skip tactics that hinge on schema as a magic switch. A page with perfect schema and no topical authority loses citations to a page with sloppy schema and deep topical authority. Ship the schema; don't expect it to do the heavy work.

Skip anything advocating for "AI-only" content that abandons classic SEO. Abandoning the shared 70% for "answer-first" tactics costs more candidacy than it gains; the teams that have made this mistake are visible in the citation-share dashboards as flat or declining lines.

The practical posture is conservative on engine-specific tactics, aggressive on the shared 70%, and honest about the 30% where the gains are real but bounded.

FAQ

Do I have to allow all the AI bots, or can I pick?

You can pick. PerplexityBot, OAI-SearchBot (for ChatGPT Search), and GPTBot (for ChatGPT training) are separate user agents. Most publishers allow PerplexityBot and OAI-SearchBot for retrieval, and the choice on GPTBot depends on the publisher's stance on training data use. Standard Googlebot is a separate case; blocking it costs you classic Google traffic and AI Mode candidacy at once, which is rarely the right trade.

Should I rewrite existing articles into a Q&A format?

Not wholesale. For Perplexity specifically, converting the five to ten highest-priority pages to a Q&A heading skeleton is the single highest-impact change. For ChatGPT Search and Google AI Mode, the Q&A format helps less; depth and classic SEO matter more. Pick the pillar pages and convert those.

Is FAQPage schema enough to get Perplexity citations?

No. Schema helps, but it isn't sufficient, and it isn't strictly necessary either: a page with FAQPage schema but no topical depth and no brand authority will lose citations to a page with deep topical content and no schema. Ship the schema because it's cheap and it removes friction. Don't expect it to be the lever that wins.

Does ChatGPT Search use Bing's index?

Partially. OpenAI has confirmed that ChatGPT Search blends its own retrieval (via OAI-SearchBot) with the Bing index as a fallback layer for queries where the native retrieval comes up short. Practically: get your important pages indexed in Bing via Bing Webmaster Tools. Submission and verification take 20 minutes per site and remove a real failure mode for ChatGPT Search candidacy.

How fast do these changes show up in citation share?

Variable, but the shape is consistent. PerplexityBot re-crawls quickly (often within a week of a structural change), and the citation share signal moves within two to four weeks. ChatGPT Search is slower because the brand-authority signal takes longer to update; expect six to twelve weeks for a real shift. Google AI Mode moves on Google's classic re-crawl cadence plus the extraction-model update, which is also in the six to twelve week range. Plan in months, not weeks.


Discussion (3 comments)

fullstack_ninja

7 months, 2 weeks

Answers, not keywords.

backend_wizard

7 months, 1 week

Agree founders shouldn't drown in meta tweaks, but you still need a reproducible pipeline: pull questions from GSC/PAAs, author concise answer blocks, add FAQ/Answer schema and optionally index canonical answers in a vector DB for fast RAG retrieval. Then run controlled experiments (traffic splits + significance testing) and monitor SERP feature share — beware scaling costs if you pre-render answers for millions of pages. Curious whether the article's recommendations were A/B tested against classic keyword pages and what the sample size was.

scalability_pro

7 months, 1 week

Good breakdown of SGE/Perplexity — the shift to delivering clear answers is real. For measurement I'd run A/B tests with answer-first headings, track answer-rich impressions in GSC and automate Lighthouse CI checks so snippets aren't lost to client-side rendering.