Gemini Traffic Doubles, ChatGPT Ads Favour Clarity & GEO Tactics That Damage SEO

Search Digest Issue #007, week of 30 Mar 2026, AI referral traffic by platform with Gemini growth trajectory and ChatGPT ad anatomy breakdown

The SE Ranking data this week is the one to pay attention to. Across 101,574 websites with Google Analytics connected, Gemini referral traffic more than doubled between November 2025 and January 2026, a 115% increase in two months. The gap between ChatGPT and Gemini has compressed from 22x in October to 8x in January. Gemini already surpassed Perplexity as a referral source. If your AI traffic reporting only shows ChatGPT, you are missing a fast-growing channel that sits inside the same Google ecosystem you are already optimising for.

The ChatGPT ads finding is worth reading even if you are not running paid campaigns there. Adthena analysed over 40,000 daily ad placements and found that the format rewarding precision, not persuasion: 30-character headlines, 116-character body copy, brand name first, concrete numbers, free trial offers, calm CTAs. No storytelling, no exclamation marks. The creative logic for what performs in AI-native environments turns out to be almost the inverse of traditional display advertising.

The rest of the issue covers three pieces that connect to each other. Lily Ray's documented cases of GEO tactics destroying organic rankings, the argument that you need machine-readable content infrastructure beyond llms.txt, and SparkToro's point that AI systems cite evidence that already exists in the public record before any search ever happens. Together they sketch out what a serious GEO programme actually requires: clean organic foundations, structured machine-readable data, and distributed proof points across the platforms where your audience already is.

Yulia Deda / SE Ranking

Sites are now getting 2x more AI traffic from Gemini

SE Ranking tracked referral traffic from seven major AI platforms across 101,574 websites with Google Analytics over 12 months. The headline number is Gemini's growth: between November 2025 and January 2026, Gemini referral traffic increased 115%. In the same period, ChatGPT declined roughly 8% per month. The gap between them compressed from 22x in October 2025 to 8x by January 2026. By January, Gemini also surpassed Perplexity, sending 29% more traffic globally and 41% more in the US. ChatGPT still dominates at around 80% of all AI-sourced traffic, but the direction of travel is clear.

The Gemini 3 rollout in November and December 2025 appears to have been the catalyst. The study notes this directly: traffic growth coincided precisely with the product update. That pattern matters because it suggests Gemini traffic growth is tied to Google's product decisions, not just broader AI adoption. If Google continues improving Gemini and integrating it more deeply into Search, the growth trajectory could accelerate further. The conservative modelling in the study projects a potential overtake of ChatGPT as early as October 2026; the aggressive scenario puts it at June 2026 if current growth rates hold.

Key points

  • Gemini referral traffic grew 115% in two months (Nov 2025–Jan 2026) across 101,574 sites
  • Gap between ChatGPT and Gemini compressed from 22x (Oct 2025) to 8x (Jan 2026)
  • Gemini now sends 29% more global traffic than Perplexity, 41% more in the US
  • ChatGPT still holds ~80% of all AI referral traffic but declined ~8% per month in the same period
  • Combined AI referral traffic reached 0.24% of global internet traffic, up from 0.15% in 2025
  • Growth correlated with Gemini 3 rollout, suggesting product improvements drive traffic, not just AI adoption trends

Key takeaway

Add Gemini as a separate referral source segment in your analytics reporting now, not when it becomes dominant. The growth trajectory means it will be a meaningful channel before most teams have set up the measurement. If your site ranks well in Google organic, there is a reasonable chance Gemini is already sending traffic you are not tracking separately.

Also worth considering

The Gemini growth story is also a reminder that AI traffic is still tiny in absolute terms: 0.24% of global internet traffic across all platforms combined. The conversation about AI search replacing organic traffic is real, but the replacement is not happening quickly. The more immediate story is that Gemini is becoming a referral channel worth tracking and optimising for, alongside ChatGPT, not instead of organic.

What I'm testing

Creating a dedicated AI referral traffic segment in GA4 that breaks out Gemini, ChatGPT, Perplexity, and other LLMs separately, rather than grouping them all under a single AI source. Looking at which pages attract Gemini traffic specifically and whether they differ from ChatGPT referral pages in topic or format.

Read the full article

Search Engine Land / Adthena

ChatGPT ads favor clarity over creativity, new data shows

Adthena analysed over 40,000 daily ad placements on ChatGPT and found a consistent structural pattern in top-performing creatives. Headlines average 30 characters and around 5 words, almost always leading with the brand name. Body copy averages 116 characters and around 19 words, structured as two tight sentences: a proof point followed by an offer or conversion nudge. Concrete numbers, dollar signs, and specific value claims consistently outperform vague or emotional language. Free trial and demo offers are the dominant conversion mechanism. The tone is calm and measured, with exclamation marks essentially absent from high-performing ads.

The implication is that ChatGPT's ad environment rewards utility over storytelling. Users are in a task-completion mindset, not a browsing one, and the ads that perform reflect that. This is essentially the inverse of what works in traditional display or social advertising, where creative differentiation and emotional hooks tend to drive performance. The report frames it as "precision over persuasion" and that framing is useful because it connects to the same logic that drives AI citation behaviour more broadly: specific, clear, credible content wins over creative or stylised alternatives.

Key points

  • 40,000+ daily ChatGPT ad placements analysed by Adthena to identify top-performing creative patterns
  • Winning headlines: ~30 characters, ~5 words, brand name first for immediate recall
  • Winning body copy: ~116 characters, ~19 words, two sentences (proof point + offer or nudge)
  • Concrete numbers and dollar signs outperform vague value claims consistently
  • Free trials and demos are the most effective conversion lever in this environment
  • Calm tone without exclamation marks aligns with ChatGPT's helpful-assistant positioning rather than traditional ad pressure

Key takeaway

If you are testing ChatGPT ads, start with the simplest possible structure: brand name, one specific benefit with a number, free trial or demo offer, direct CTA. Resist the temptation to apply the creative approach from social or display. The environment is different enough that standard ad instincts will likely underperform.

Also worth considering

The precision-over-persuasion logic extends beyond paid ads. The same characteristics that make a ChatGPT ad perform, clarity, specificity, credible proof points, a concrete offer, also describe what makes content more likely to be cited in AI Overviews and AI Mode. If you are optimising for both paid and organic AI visibility, the content rules are converging rather than diverging.

What I'm testing

Running two ChatGPT ad variants against each other: one following the Adthena structure exactly (short, brand-led, specific number, free trial) and one using our standard display creative adapted to the character limits. Expecting a significant gap in favour of the structured variant.

Read the full article

Lily Ray / Substack

Your GEO Strategy Might Be Destroying Your SEO

Lily Ray documents five GEO tactics that are causing organic traffic collapses: scaled AI content production, artificial date refreshing, self-promotional listicles (ranking your own product at number one in category comparisons), prompt-injection "summarize with AI" buttons, and excessive comparison pages. In each case, she has traffic data showing the pattern: rapid early growth followed by a hard drop during Google algorithm updates, with some sites losing visibility permanently. One company had to remove the case study articles that had celebrated its growth after subsequent traffic crashes made them embarrassing.

The structural argument is the most important part of the piece. Most major AI search products, including ChatGPT, use retrieval-augmented generation (RAG) that pulls from external indexes before generating responses. Britney Muller's cited research puts it plainly: "Every single URL you see in LLM output comes from search engine API." Ray also presents data showing a correlation between organic ranking drops and ChatGPT citation declines among affected sites. If you lose organic visibility, you are likely losing AI citation visibility at the same time. GEO is not an alternative to SEO, it depends on it.

Key points

  • Five documented GEO tactics that damage organic rankings: scaled AI content, artificial date refreshing, self-promotional listicles, prompt-injection buttons, excessive comparison pages
  • Traffic data shows a consistent pattern: growth, then collapse during Google algorithm updates, particularly June 2025, January 2026, and February 2026 core updates
  • ChatGPT and most AI search products use RAG, sourcing from search engine APIs, organic ranking loss translates directly to AI citation loss
  • Correlation data shows organic ranking drops and ChatGPT citation declines occur together in affected sites
  • Microsoft formally documented prompt-injection "summarize with AI" buttons as an AI recommendation poisoning security threat, affecting 31 companies across 14 industries
  • Recovery from these tactics is slow, some sites show permanent visibility loss months after removing the offending content

Key takeaway

Treat your organic search health as the foundation of your GEO programme, not a separate track. Before adding GEO-specific tactics, audit whether any current content or technical approaches fall into Lily Ray's five categories. The short-term AI visibility gains from these tactics are real, but the organic risk is also real, and losing organic rankings means losing AI citations too.

Also worth considering

The prompt-injection finding crosses from algorithmic risk into legal and brand territory. Microsoft has now formally categorised hidden instructions in summarisation buttons as a security threat. That changes the risk profile of that tactic significantly: it is not just a Google spam policy violation, it is something that can be cited in a security report with your company named. The 31 companies identified in Microsoft's February 2026 report will have difficult conversations with legal and compliance teams if they have not had them already.

What I'm testing

Running this audit against Lily Ray's five categories on a few sites, specifically checking whether any comparison or listicle pages feature the brand's own products at the top, and whether any on-site AI summarisation tools use instructions that could be classified as prompt injection. Both are easier to find than most teams expect.

Read the full article

Duane Forrester / Search Engine Journal

Llms.txt Was Step One. Here's The Architecture That Comes Next

Forrester's argument is that llms.txt solves a narrow problem but creates a maintenance burden and cannot express the relationship structures that AI systems actually need. Every product update, pricing change, or new case study requires updating both the live site and the file, and the format provides no way to signal hierarchy, deprecation, or authority relationships between entities. The piece proposes a four-layer architecture as the next step: enhanced JSON-LD structured data on commercial pages, entity relationship mapping (JSON-LD extensions or headless CMS endpoints that express how products connect to categories and solutions), versioned content API endpoints for FAQs and documentation, and provenance metadata attaching timestamps, authorship, and version information to every exposed fact.

The data point that anchors the practical case: pages with valid structured data are 2.3x more likely to appear in Google AI Overviews. Forrester also flags Anthropic's Model Context Protocol as the emerging standard for content API endpoints, suggesting that the infrastructure you build for AI-readable content now will need to be compatible with MCP as it becomes more widely adopted. The minimum viable approach he recommends for this quarter is a JSON-LD audit of commercial pages, a single structured content endpoint for frequently compared information, and provenance metadata on public-facing facts.

Key points

  • llms.txt is a starting point, not a complete solution: it cannot express entity relationships, hierarchies, or deprecations, and requires manual updates with every content change
  • Pages with valid structured data are 2.3x more likely to appear in Google AI Overviews
  • Four-layer architecture: enhanced JSON-LD, entity relationship mapping, versioned content API endpoints, provenance metadata
  • Anthropic's Model Context Protocol is emerging as the standard for content API endpoints; building toward MCP compatibility now avoids a future migration
  • Provenance metadata (timestamps, authorship, version info) enables AI systems to verify and cite information with confidence
  • Minimum viable action this quarter: JSON-LD audit on commercial pages, one structured content endpoint, provenance metadata on key facts

Key takeaway

Run a JSON-LD audit on your commercial pages before anything else. Most sites have patchy or outdated structured data on the pages where citation probability matters most, product pages, service pages, comparison pages. That is the quickest win available in the four-layer framework and it pays off in both AI Overviews visibility and conventional rich results.

Also worth considering

The Model Context Protocol point is worth tracking even if you are not building content API endpoints yet. If MCP becomes the dominant standard for how AI agents access brand information, sites that have invested in machine-readable, versioned content infrastructure will have a significant advantage over those still relying on scraping and inference. The window for building that infrastructure before it becomes a competitive necessity is probably 12 to 18 months.

What I'm testing

Running a structured data audit on commercial pages, specifically checking completeness of Product, Service, and FAQ schema and whether entity relationships between products and categories are expressed anywhere in the markup. The answer is usually that only surface-level schema is present and the relationship layer is entirely missing.

Read the full article

Amanda Natividad / SparkToro

If Search Captures Demand, Public Evidence Creates It

Natividad's central argument: search engines capture existing demand, they do not create it. The implication is that the upstream work of building distributed public evidence, across forums, reviews, community discussions, and third-party publications, is what determines whether a brand is visible in AI answers before anyone ever opens a search bar. The Seer Interactive case study makes this concrete. One negative review theme appeared 67 times in AI outputs before Seer published counter-evidence. After publishing retention data publicly, LLMs immediately stopped citing the negative claim. The evidence that existed in the public record shaped the AI answer; new evidence changed it.

The search behaviour data reinforces the point. In Q4 2025, Google accounted for 73.7% of all searches across 41 major US websites, but Reddit simultaneously outranked SaaS vendors on 50 to 66% of shared keywords across three of four verticals analysed. For queries of six words or more, Reddit's advantage reached 73 to 100%. User-generated content on third-party platforms is winning organic rankings and, by extension, the AI citations that follow from those rankings. Brands that only publish on their own domains are competing at a structural disadvantage on both fronts.

Key points

  • Search captures demand; the "public record" of distributed evidence across the web is what creates demand before search happens
  • Seer Interactive found one negative review theme appearing 67 times in AI outputs; publishing retention data immediately shifted the AI narrative
  • Reddit outranks SaaS vendors on 50–66% of shared keywords across most verticals; advantage rises to 73–100% for six-word-plus queries
  • Direct traffic accounts for ~45% of SparkToro's own visits; organic search only ~24%, showing the breadth of the pre-search journey
  • AI citation systems pull from whatever evidence exists in the public record, including reviews, forums, and third-party publications alongside owned content
  • Publishing credible proof points on multiple platforms changes what AI systems cite about your brand, not just what your website says

Key takeaway

Map the evidence that exists about your brand on third-party platforms before you build more owned content. What does Reddit say? What are the review platforms citing? What shows up in AI answers about you? The gap between what you say about yourself and what the public record says about you is precisely what AI systems are likely to reflect, and it is addressable if you know where it is.

Also worth considering

The Seer case study is a useful proof point for internal conversations about GEO investment. It demonstrates that the content you publish changes what AI systems say about you in a measurable way and on a short timescale. That is a much more concrete ROI story than most GEO arguments, which tend to be directional rather than causal. If you are trying to make the case internally for taking AI visibility seriously, this data point is more persuasive than most.

What I'm testing

Prompting ChatGPT, Gemini, and Perplexity with questions about specific brands and comparing what AI systems say against what those brands say about themselves. Looking for specific negative claims or gaps that can be addressed by publishing counter-evidence, rather than trying to rewrite positioning from scratch.

Read the full article

That is issue #007. The Gemini traffic data is the most actionable finding this week: if you are not tracking Gemini referral separately in GA4, set that up before anything else. It is a growing channel and it sits in the same ecosystem as the organic rankings you are already working on. The Lily Ray piece and the SparkToro argument connect to the same underlying point, your AI visibility depends on the health of your organic foundations and the breadth of your public evidence, not just on GEO-specific tactics.

The ChatGPT ad data is worth filing even if you are not running campaigns there yet. The creative logic, short, specific, brand-led, calm, maps onto what works in AI citations too. And the llms.txt architecture piece is the one to read slowly if you are planning a GEO content infrastructure project this quarter. If any of this week's picks changes how you are thinking about your measurement setup or content programme, I would like to know what you are doing with it.

Free Consultation

Let's Talk


Tell me what you're working on. I'll give you an honest assessment and we'll explore if working together makes sense — no hard sell, just a free, no-obligation call to explore what's possible for your business.