Updated March 2026
TL;DR: I asked ChatGPT to recommend SEO tools. We weren't on the list. Seven weeks later, after running a focused experiment on our own site, we were. This is the full playbook — what worked, what didn't, and what each AI engine actually cares about. The short version: third-party brand mentions matter most, schema markup has roughly 3x impact on Google AI Overviews, and Perplexity rewards outbound citation density. But I'm getting ahead of myself.
Late 2025. Tuesday morning. I opened ChatGPT and typed: "What are the best tools for automated SEO?"
It listed six tools: Semrush, Ahrefs, Surfer SEO, Moz, SE Ranking, and one I'd never heard of. SEOJuice wasn't mentioned. Not in the main list, not as an alternative, not in a footnote.
I tried variations. "Best internal linking tools." "SEO automation for small businesses." "Yoast alternatives." Nothing. Across twenty-something prompts, we didn't exist.
This was a problem I hadn't anticipated. We'd been heads-down on the product — Lida and I had been a two-person team since early 2025, migrated from .io to .com in January 2026, and spent most of our time shipping features rather than promoting them. Traditional SEO was going well. But AI search was a blind spot.
And the numbers made it hard to ignore. ChatGPT was sending 243.8 million visits to websites per month. AI referral traffic had grown roughly 700% over the course of 2025, according to BrightEdge's 2025 analysis of AI-driven search traffic. Still a small slice of total web traffic — around 1% — but growing at 130-150% year-over-year.
The trajectory mattered more than the current number. Gartner had predicted traditional search volume would drop 25% by 2026 due to AI chatbots. Whether that exact number hits is debatable, but the direction is not. A growing slice of potential customers are finding answers through AI, not through ten blue links. And we weren't in those answers.
So I did what I usually do when I encounter a problem I don't understand. I read the research, ran experiments on our own site, and documented what happened. This article is everything I learned — the playbook, the data, and the failures that taught us the most.
The best academic work on this comes from a 2023 research paper out of Princeton University, Georgia Tech, the Allen Institute for AI, and IIT Delhi. Published at KDD 2024, the paper analyzed 10,000 queries and found that specific content optimization strategies — adding statistics, citations, and quotations — could increase AI citation frequency by up to 40%.
The Princeton paper is the best research we have, but it studied a controlled sample. Take the percentages directionally, not as gospel.
The paper established a starting point; real-world monitoring data has since expanded on it. After analyzing citation patterns through our own AISO monitoring and cross-referencing with large-scale studies from SE Ranking (a 2025 study of 129,000 domains) and Profound (a 2025 analysis of AI citation patterns covering 216,000 pages), five factors emerged consistently.
| Factor | What It Means | Key Data Point |
|---|---|---|
| Third-party brand mentions | Other sites talking about you matters more than anything on your own domain | Brands 6.5x more likely to be cited via third-party sources than own domain (SE Ranking, 2025) |
| Domain authority and trust | Referring domains, review profiles, and cross-platform presence | Sites with 32K+ referring domains are 3.5x more likely to be cited; review platform profiles add 3x (SE Ranking, 2025) |
| Content structure and citability | Data-rich, quotable passages that AI can extract without surrounding context | Up to 40% visibility boost from statistics and citations (Princeton GEO paper, KDD 2024) |
| Page speed | Fast pages get fetched and parsed; slow pages get skipped during real-time retrieval | FCP under 0.4s averages 6.7 citations vs. 2.1 for pages over 1.13s (SE Ranking, 2025) |
| Content freshness | Stale content loses citation priority, sometimes within weeks | New content enters AI citation pools within 3-5 business days on Perplexity |
If I had to rank these by impact: third-party mentions first, by a wide margin. Then authority and trust signals. Then content structure. Then speed and freshness. Nail the first two and the rest is optimization on top of a strong foundation.
(Side note: that ranking is based on correlation data, not controlled experiments. Nobody has published a clean causal study on this yet. The Princeton paper comes closest but didn't test brand mention effects.)
Here's what we actually did. Not a theoretical framework — the specific sequence of work over roughly seven weeks that took us from invisible to cited.
I say "roughly" because some weeks overlapped and some tasks took longer than planned. Real experiments are messy.
I ran 30 queries relevant to our product category across ChatGPT, Perplexity, Google AI Overviews, and Claude. Documented which brands got mentioned for each query. Documented who got cited instead of us. The pattern was clear: we showed up in zero comparison queries ("best X vs Y") but occasionally appeared in narrow technical queries about internal linking automation. Our brand footprint was too thin for the AI to consider us a real contender in the broader SEO tools category.
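If you want to run the same audit, the mechanics are simple enough to script. Below is a minimal sketch of the approach against a single engine. It assumes the official OpenAI Python SDK and an API key in the environment; the queries, brand list, and model name are illustrative placeholders, and note that API responses won't perfectly mirror what the consumer ChatGPT product says.

```python
# Minimal visibility-audit sketch: run target queries against one engine
# and record which brands are mentioned. Assumes the official OpenAI
# Python SDK (`pip install openai`) and an OPENAI_API_KEY in the env.
# The query list, brand list, and model name are illustrative.
import csv
from openai import OpenAI

client = OpenAI()

QUERIES = [
    "What are the best tools for automated SEO?",
    "Best internal linking tools",
    "SEO automation for small businesses",
]
BRANDS = ["SEOJuice", "Semrush", "Ahrefs", "Surfer SEO", "Moz", "SE Ranking"]

with open("visibility_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["query", "brand", "mentioned"])
    for query in QUERIES:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": query}],
        )
        answer = response.choices[0].message.content.lower()
        for brand in BRANDS:
            writer.writerow([query, brand, brand.lower() in answer])
```

Extending it to Perplexity or other engines is the same loop with a different client. The point isn't the tooling; it's having a documented baseline before you change anything.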
This was the highest-impact phase and the least glamorous. We claimed and optimized profiles on G2, Capterra, and Product Hunt. I started contributing genuinely useful answers on Reddit — not "use SEOJuice" comments, but actual technical explanations about internal linking mechanics and on-page optimization. When someone asked how automated linking works, I'd explain the trade-offs honestly and let the product come up naturally if relevant.
We reached out to authors of comparison articles and roundups with real data from our platform. We published original benchmarks from our monitoring data on a public data page and made it explicitly quotable. Original data that others cite is the best brand mention generator I've found.
We also asked our best customers to share their experience on G2 and Capterra. Authentic reviews, not scripted ones. Platforms penalize fake patterns, and AI models seem to discount suspicious review clusters. Even a small number of genuine reviews changed our visibility — domains with review platform profiles have 3x higher citation chances according to the SE Ranking study.
Reddit deserves special emphasis. It appeared in 38% of Perplexity's responses according to Profound's 2025 analysis of AI citation patterns, rising to 52% for product recommendation queries. For Perplexity specifically, authentic Reddit presence isn't optional — it's a primary source. But authenticity is key. Reddit's community detects promotional accounts instantly, and getting flagged as a spammer hurts your brand more than no presence at all.
We rewrote our top 15 pages with what I started calling "quotable blocks" — self-contained passages of 50-150 words that directly answer a specific question with data. The format: clear heading as question, direct answer in the first sentence, supporting evidence, source attribution.
Three specific changes that mattered:
This is where the schema markup finding came from. We A/B tested pages with and without FAQ and HowTo schema across our client sites. Pages with schema were roughly 3x more likely to appear in Google AI Overviews than matched pages without it.
I'm confident about schema markup's impact on Google AI Overviews. I'm much less confident about its impact on ChatGPT. We couldn't find a consistent pattern for what ChatGPT chose to cite — recency and domain authority seemed to matter most, but our sample was too small to be confident.
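Implementation, at least, is cheap. FAQ schema is plain JSON-LD in a script tag, and generating it from question/answer pairs takes a few lines. A minimal sketch follows; the Q&A content is placeholder text, not our actual copy.

```python
# Sketch: generate FAQPage JSON-LD (schema.org) from question/answer
# pairs. The output belongs inside a <script type="application/ld+json">
# tag in the page's <head>. The Q&A content here is placeholder text.
import json

faqs = [
    ("How does automated internal linking work?",
     "The tool scans your pages, identifies related content, and inserts "
     "contextual links based on semantic relevance."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print(f'<script type="application/ld+json">{json.dumps(schema, indent=2)}</script>')
```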
Beyond schema, we increased entity density across our pages. Content with specific people, companies, tools, studies, and locations gets cited more than generic content. Pages with 15+ recognized entities show 4.8x higher selection probability in Google AI Overviews. Instead of "researchers found that content optimization helps," we wrote "the Princeton GEO paper (Aggarwal et al., KDD 2024) found that adding statistics increased citation rates by up to 40%." More entities, more specificity, more citations.
We also optimized page speed. FCP under 0.4 seconds correlated with significantly more citations from real-time retrieval engines (Perplexity, ChatGPT with browsing). Though I should note: in our Google AI Overview tests specifically, page speed didn't show a measurable effect. AI Overviews may run on a different pipeline than real-time retrieval.
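To check FCP across a set of pages without opening DevTools on each one, Google's PageSpeed Insights API works. A sketch, with the caveat that the JSON path to the FCP audit reflects my reading of the v5 payload and should be verified against a live response:

```python
# Sketch: pull First Contentful Paint for a list of URLs via Google's
# PageSpeed Insights API (v5). No key is required for light usage, but
# an API key raises the quota. Verify the JSON path against a live
# response before relying on it.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def fcp_seconds(url: str) -> float:
    result = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": "mobile"}).json()
    audit = result["lighthouseResult"]["audits"]["first-contentful-paint"]
    return audit["numericValue"] / 1000  # numericValue is in milliseconds

for page in ["https://example.com/", "https://example.com/blog/"]:
    print(page, f"{fcp_seconds(page):.2f}s")
```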
In the seventh week, we started seeing mentions. Perplexity cited our data page and two blog posts. A ChatGPT browsing-mode query about internal linking tools included us. Google AI Overviews referenced one of our schema-optimized guides.
Small numbers. But after weeks of zero, any number felt significant.
Six months later, the picture was clearer: consistent Perplexity citations, growing ChatGPT presence in our category, and a noticeable flywheel effect where AI mentions drove brand searches which drove more third-party mentions which drove more AI citations.
(We monitor this weekly. Rankings shift constantly. What worked in January may not work in June. That flywheel can also spin backward if you stop feeding it.)
One of the biggest mistakes in GEO is treating all AI engines the same. They have different retrieval mechanisms, different source preferences, and different citation styles. Here's what we've observed.
| Dimension | ChatGPT / OpenAI | Perplexity | Google AI Overviews | Claude / Gemini |
|---|---|---|---|---|
| Data source | Web browsing + training data | Real-time web search | Google index + Knowledge Graph | Training data + web search (when enabled) |
| Top priority | Entity recognition, authority depth | Citation density, freshness, data specificity | E-E-A-T signals, schema markup | Primary sources, precision |
| Citation style | Inline links (browsing mode) | Numbered footnote citations | Expandable source cards | Inline citations (varies by mode) |
| Sources per response | 3-4 (down from 6-7 pre-October 2025) | 5-8 | 5-6 | Varies |
| Freshness weight | Medium | Very high | High | Medium |
| Reddit influence | Growing | Very high (38-52% of responses) | Present via Google index | Minimal |
ChatGPT drives 87.4% of all AI referral traffic according to Similarweb's 2025 AI search traffic report, which makes it the priority for most businesses. It relies heavily on entity recognition — whether your brand is a well-established "thing" in its training data and across the web. Sites mentioned across 8+ independent domains are cited 2.1x more than brands mentioned on only 2 high-authority sites. Breadth of presence matters more than depth on any single source.
The October 2025 update reduced brand mentions per response from 6-7 to 3-4, making each mention slot more competitive. Metehan Yesilyurt's 2025 analysis, published on LinkedIn, showed that ChatGPT uses a URL freshness scoring system — updating content improved one page's position by 95 places in citation priority. Keep your key pages current.
When ChatGPT uses browsing mode, it fetches your pages in real-time. When browsing isn't triggered, it relies on training data. Both paths matter: optimize your site for real-time retrieval (speed, clean HTML), and build enough brand presence to be included in training data through sheer repetition across independent sources.
The hardest thing about ChatGPT: it's the least predictable. Some of our highest-quality pages got ignored. Some pages we didn't expect to perform got cited. The signal-to-noise ratio in our ChatGPT data was the worst of any engine. If someone tells you they've cracked the ChatGPT citation algorithm, they're overfitting to a small sample.
Perplexity is the most transparent AI search engine. It shows its sources, it fetches in real-time, and its citation behavior is relatively consistent. Our experiments showed three clear patterns:
First, outbound citation density matters. Pages linking to 5+ authoritative sources were cited more than equally strong content with fewer references. Perplexity seems to treat your citation behavior as a trust proxy (a quick way to audit this on your own pages is sketched below).
Second, data specificity wins. "LCP under 2.5s improves Core Web Vitals" gets cited. "Speed matters for SEO" doesn't. Exact numbers, named studies, and specific comparisons outperform general advice every time.
Third, longer well-structured content outperforms shorter content. My hypothesis: a 3,000-word guide with 15 well-labeled sections gives Perplexity 15 potential extraction points. A 500-word post gives it one or two. Perplexity processes 10 million queries per day. It needs a deep well of extractable content. Give it that well.
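Here's the self-audit script mentioned above for the first pattern: a rough count of how many external domains a page actually references. It assumes requests and beautifulsoup4 are installed; the URL is a placeholder.

```python
# Sketch: count outbound links to external domains on a page, as a rough
# proxy for the "outbound citation density" pattern described above.
# Assumes `pip install requests beautifulsoup4`; the URL is a placeholder.
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

def outbound_domains(url: str) -> set[str]:
    own_domain = urlparse(url).netloc
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    domains = set()
    for a in soup.find_all("a", href=True):
        netloc = urlparse(a["href"]).netloc
        if netloc and netloc != own_domain:
            domains.add(netloc)
    return domains

links = outbound_domains("https://example.com/blog/post")
print(f"{len(links)} external domains referenced: {sorted(links)}")
```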
One more thing about Perplexity: it's the most measurable platform because it shows numbered citations. You can see exactly which pages get cited and for which queries. This makes it the best testing ground for GEO experiments. If something works on Perplexity, it likely transfers. If it doesn't work there, you have immediate feedback.
Schema markup was the single biggest factor in our Google AI Overview tests. Pages with FAQ and HowTo schema were roughly 3x more likely to be cited. Pages combining text, images, and structured data saw 156% higher selection rates. This is the one area where I'd say the data is strong enough to call it a clear recommendation.
E-E-A-T signals matter here more than on other platforms. Author bylines with credentials, outbound links to primary sources, clearly demonstrated first-hand experience. Google's quality rater guidelines filter into AI Overview source selection.
One nuance: the overlap between Google's top organic results and AI Overview sources has dropped from 70% to below 20%. Being #1 on Google doesn't guarantee you'll appear in its AI Overviews. They're increasingly different ranking systems. Pages combining multi-modal content — text plus images plus structured data — had the highest selection rates. Google AI Overviews favors content that looks like what Google's quality raters would consider authoritative. If you understand zero-click search optimization, you understand the foundation here.
Claude and Gemini are newer to this game. I genuinely don't know what they prioritize yet. Our monitoring data shows Claude tends to favor primary sources and well-sourced technical content. Gemini leans on Google's index similarly to AI Overviews but with different selection criteria that we haven't decoded.
I've seen claims about optimizing specifically for Claude or Gemini. I'd be skeptical. The sample sizes are too small, the engines update too frequently, and anyone claiming certainty about their citation algorithms in early 2026 is extrapolating beyond the data.
The safe bet: the fundamentals that work for ChatGPT and Perplexity — authoritative content, data density, clean structure, strong brand presence — transfer reasonably well to Claude and Gemini. Optimize broadly.
This section matters. Every other GEO guide is a highlight reel of successes. Here's what we tried that had zero impact — or may have actively hurt us.
Format without substance was a complete failure. We restructured three blog posts into perfect FAQ format — clean headings, concise first-paragraph answers, proper schema — without actually improving the underlying content quality. None of the three got cited by any AI engine. The models aren't scanning structure. They're reading content. Perfect formatting around mediocre answers is still mediocre.
Creating an llms.txt file did nothing. As of March 2026, ChatGPT, Perplexity, and Claude don't read llms.txt files. It's a proposal, not a standard. We spent half a day creating one. Total impact: zero. Focus on making your actual content excellent instead.
Aggressive "direct answer" formatting without context hurt readability. We tried stripping three posts down to ultra-short definition-style paragraphs, thinking AI engines would prefer the density. Instead, the posts lost their depth — and neither humans nor AI engines engaged with them more. Perplexity actually cited the original longer versions from the Wayback Machine in one case. The AI wanted substance, not bullet points.
Self-promotional content got ignored consistently. Any page where the primary purpose was selling rather than informing was invisible to AI engines. Our features page, our pricing page, our "why SEOJuice" page — none got cited for anything. AI engines cite informational content. They skip marketing pages. Every time.
Publishing volume without quality didn't help. We tested publishing more frequent, shorter posts versus fewer, deeper ones. The deeper posts won overwhelmingly. One comprehensive 3,000-word guide got more AI citations than eight 500-word posts covering the same ground from different angles. AI engines prefer depth over breadth, which aligns with the Princeton paper's findings on content quality signals.
Astroturfing on Reddit backfired. I want to be clear: we didn't do this. But I watched a competitor try it. They created accounts that were obviously promotional, posting "I just discovered [their tool] and it's amazing" across multiple subreddits. The accounts got flagged within days. The community reaction was hostile. And their brand was worse off than before — now Reddit threads about them led with "this company astroturfs." AI engines that crawl Reddit pick up that sentiment too.
Optimizing for ChatGPT specifically was a waste. We spent a week trying to reverse-engineer what ChatGPT specifically favored, separate from the general fundamentals. We adjusted entity density, experimented with answer formatting, even tried matching the style of pages that were getting cited. None of it produced a measurable difference beyond what the general best practices already achieved. The engine is too opaque and too variable for engine-specific optimization to be reliable. I'd recommend investing that time in brand building instead.
These failures were as instructive as the successes. (Our sample for most of these tests was small — maybe 15-20 pages total. Enough to see patterns, not enough to be statistically rigorous.) They narrowed our focus to what actually moves the needle: genuine expertise, real data, strong external brand presence, and patience. There's no shortcut. And if something sounds like a shortcut — a hack, a secret format, a magic file the AI reads — it almost certainly isn't one.
The biggest mistake I see is companies doing GEO work without measuring whether it's working. You wouldn't do SEO without rank tracking. Don't do GEO blind.
The manual approach (start here): Open ChatGPT and Perplexity once a week. Run your 20-30 target queries. Document which brands get mentioned, whether you appear, and in what position. Put it in a spreadsheet. It takes 30-45 minutes and gives you a clear picture of your trajectory. This is how I started.
What to track:
- Brand mentions by query (expanding or contracting?)
- Competitor mentions (who gets cited instead of you?)
- Mention position (first mention is worth far more than fifth)
- Sentiment (positive recommendation vs. neutral mention vs. unfavorable comparison)
- Which of your pages get cited (this tells you what content is working)
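If the spreadsheet is a CSV, the trend math is a few lines of stdlib Python. A sketch, assuming columns named date, query, mentioned, and position (match these to whatever your own sheet uses):

```python
# Sketch: compute a 30-day mention rate from a manual tracking log.
# Assumes a CSV with columns: date (YYYY-MM-DD), query, mentioned (0/1),
# position. Column names are illustrative; adjust to your own sheet.
import csv
from datetime import date, timedelta

cutoff = date.today() - timedelta(days=30)
recent_checks, total_mentions = 0, 0

with open("geo_tracking.csv", newline="") as f:
    for row in csv.DictReader(f):
        if date.fromisoformat(row["date"]) >= cutoff:
            recent_checks += 1
            total_mentions += int(row["mentioned"])

if recent_checks:
    rate = total_mentions / recent_checks
    print(f"30-day mention rate: {rate:.1%} over {recent_checks} checks")
```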
I should be upfront: our AISO monitoring feature tracks exactly this, so I'm not a neutral observer here. It runs target queries against ChatGPT, Perplexity, and Google AI Overviews on a schedule and tracks mentions, sentiment, and citation sources over time. It's what we built because we needed it ourselves, and it replaced the spreadsheet approach once our query list got past 50 prompts.
But the spreadsheet works. If you're just starting out and don't want to commit to a tool, the manual approach gives you 80% of the insight at zero cost. The tool matters less than the habit of checking regularly.
One thing to watch for: AI citation rankings shift more frequently than Google rankings. We've seen pages appear in ChatGPT results one week and disappear the next, only to return two weeks later. Don't panic over weekly fluctuations. Look at the 30-day and 90-day trend. That's where the signal lives.
GEO has attracted enough attention to generate its own mythology. Here's what I keep seeing repeated that doesn't hold up in 2026.
| Myth | Reality |
|---|---|
| "If I rank #1 on Google, AI will cite me" | The overlap between Google top results and AI-cited sources has dropped below 20%. Based on our analysis of pages tracked in SEOJuice's AISO monitoring, there's a 0.65 correlation between Google page-one rankings and ChatGPT mentions, but correlation isn't causation, and it's far from guaranteed. |
| "More content = more citations" | The opposite is closer to true. One comprehensive, data-rich article outperforms ten thin pages. AI engines reward depth and consolidation. |
| "Keyword stuffing works for AI like it used to for Google" | AI models use semantic understanding, not keyword matching. Stuffing reduces readability and citability. Write naturally. |
| "GEO replaces SEO" | Traditional search still accounts for ~96% of web traffic. That number is declining but slowly. Abandon SEO for GEO and you're optimizing for 4% at the expense of 96%. Do both. |
| "You can pay to get mentioned in ChatGPT" | As of March 2026, there's no paid placement in ChatGPT's organic responses. OpenAI has explored ad models, but brand mentions in conversational answers are earned, not bought. |
I'd add one more that's more subtle: the myth that GEO is a one-time optimization. It isn't. AI models update, citation patterns shift, and competitors are doing the same work. We update our top pages quarterly with fresh data and current references. Content freshness is a real signal, and "SEO trends 2024" gets deprioritized by 2026 even if the advice is still valid.
It depends on your starting point. If you have existing domain authority and some third-party brand presence, schema markup changes showed impact in 4-6 weeks and content restructuring in 8-12 weeks in our tests. Starting from near-zero brand presence — which is where most startups and small businesses are — takes 3-6 months of consistent work. Perplexity picks up changes fastest (3-5 business days for new content). ChatGPT was the most unpredictable with no clear timeline. The flywheel effect means it accelerates: the first citation is the hardest, and subsequent ones come faster as your brand presence compounds.
In our experience, no — it actually helps. The activities that boost AI visibility — earning reviews, building community presence, creating data-rich content, implementing schema markup — are also strong traditional SEO signals. Based on our analysis of pages tracked in SEOJuice's AISO monitoring, there's a 0.65 correlation between ranking on Google's page one and being mentioned in ChatGPT. In our experiments, every page that earned an AI citation was already ranking on page 1-2 for its target keyword. The one exception: if you strip pages down to ultra-short "direct answer" format (which we tried and it failed), you might hurt both human readability and traditional rankings. Don't sacrifice depth for density.
For Google AI Overviews, it was the single most impactful change in our tests — roughly 3x citation advantage. For Perplexity and ChatGPT, the effect was less clear. But implementing schema is low-effort (30 minutes for 10 pages using a plugin or tool) with no downside. It's one of those "always do this" recommendations. Our entity SEO guide covers the connection between structured data and answer engine optimization.
GEO (Generative Engine Optimization) specifically targets AI engines that generate original answers — ChatGPT, Perplexity, Claude. AEO (Answer Engine Optimization) is broader, including featured snippets, voice assistants, and any platform giving direct answers. "AI SEO" is an informal umbrella term. In practice, there's significant overlap. The tactics in this guide work across all three categories.
Optimize broadly first, then engine-specific. The fundamentals — data-rich content, strong brand presence, clean structure, schema markup — work across every platform. After those are solid, invest in engine-specific tactics: outbound citation density for Perplexity, entity depth for ChatGPT, E-E-A-T signals for Google AI Overviews. ChatGPT drives 87.4% of AI referral traffic (per Similarweb), so it gets priority if you have to choose. But the fundamentals are 80% of the work.
Related reading: Ask Engine Optimization: The Next Big Thing? • Optimizing for Zero-Click Searches • Entity SEO Explained • Free AI Visibility Checker Tool
The purpose of this article was to share our actual playbook, including what failed. AI search is moving fast — what I've written here reflects what we know as of March 2026. Some of it will age well. Some won't. We'll update this when the landscape shifts, which it will.
If you want to track whether AI engines are citing your business, you can start with the manual weekly audit I described above, or use our AI Visibility Checker for a snapshot. For ongoing monitoring, SEOJuice's AISO monitoring tracks mentions across ChatGPT, Perplexity, and Google AI Overviews automatically.