AI Clicks & Citation Gaps

Search Digest Issue #010, week of 04 May 2026: a field experiment showing AI Overviews cut clicks by 38%, plus brand citation ownership data from a 57 million AI citation study

The lead story this week is the one publishers and SEOs have been waiting for: actual experimental evidence of what AI Overviews do to organic clicks. A randomised field experiment with 1,065 US users found a 38% click reduction on queries that triggered AI Overviews, with zero measurable improvement in user satisfaction. That is not a correlation study or a traffic estimate. It is a controlled experiment with randomised assignment, pre-registered methodology, and a clear result. The click reduction from AI Overviews is real, structural, and not offset by any user benefit.

The Foundation Marketing and AirOps study of 57 million AI citations is the second story that deserves close attention. Brands own 10% of citations in AI responses overall, and that drops to 2.2% during unbranded discovery queries, the point when buyers are building shortlists and deciding who to consider. Reddit alone accounts for nearly 21% of all external citations, rising to almost 31% in those discovery queries. The implication is uncomfortable: the platforms that AI systems trust most at the discovery stage are the ones most brands treat as secondary channels.

The three remaining picks are all practical. Semrush's traffic channel study gives the macro picture: AI traffic is growing fast but organic search is declining in most industries, and the two trends are not cancelling each other out. The ChatGPT content study finds that commercial prompts trigger additional retrieval 25 times more often than informational ones, which changes which content brands should prioritise when building AI citation strategies. And SE Ranking's prompt tracking framework gives a methodology for measuring all of this that accounts for the instability of AI answer sets.

Search Engine Journal

Google's AI Overviews Cut Clicks Without Satisfaction Gain: Report

A randomised field experiment involving 1,065 US desktop Chrome users found that Google's AI Overviews reduce organic clicks by 38% on queries where they appear. Zero-click searches rose from 54% to 72% when AI Overviews were present. Researchers recruited participants via Prolific and deployed a Chrome extension that randomly assigned them to one of three groups: standard Google Search, AI Overviews hidden, or all searches redirected to AI Mode. The experiment ran for two weeks per participant between January and February 2026 and was pre-registered with the AEA RCT Registry, making it the first randomised field experiment on AI Overview impact run under real browsing conditions.

The finding that closes off the user-benefit argument: self-reported satisfaction, perceived quality, and ease of finding information were nearly identical whether AI Overviews were present or removed. The study authors concluded that AI Overviews "divert traffic away from publishers without delivering measurable improvements in user experience." The effect was strongest when AI Overviews appeared at the top of the page, which happened in 85% of cases. For publishers and SEOs, this is the clearest evidence yet that the click reduction from AI Overviews is structural, not a temporary side effect of a new feature finding its footing.

Key points

  • AI Overviews reduce organic clicks by 38% on triggered queries, based on a randomised field experiment
  • Zero-click searches rose from 54% to 72% when AI Overviews appeared
  • 1,065 US desktop Chrome users, randomised into three groups, January to February 2026
  • User satisfaction, perceived quality, and ease of finding information were nearly identical with and without AI Overviews
  • AI Overviews appeared at the top of the page in 85% of cases, where the click reduction effect was strongest
  • First pre-registered randomised field experiment measuring AI Overview impact in real browsing conditions

Key takeaway

The 38% click reduction is now backed by experimental evidence, not correlation. If a meaningful share of your organic traffic comes from queries that trigger AI Overviews, plan for that decline as a structural feature rather than a phase. The most actionable response is to identify which of your pages rank on queries that consistently trigger AI Overviews, and assess whether your content and conversion strategy reflects the reduced click opportunity on those queries.

Also worth considering

The combination of 38% fewer clicks and no measurable user benefit is the data point that regulators and publishers will cite. It removes the argument that AI Overviews serve users better at the expense of publishers. They reduce publisher traffic and produce no improvement in user experience. That is a different kind of finding, and it changes the political and legal context around how AI Overviews are framed.

What I'm testing

Segmenting top organic pages against AI Overview trigger data to estimate what proportion of that traffic is structurally at risk. The goal is to build a traffic model that treats AI Overviews as a stable feature, not an experiment, and to identify which content categories have the highest AI Overview trigger rates so I know where the click exposure is concentrated.
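
A minimal sketch of that exposure model, assuming you can export per-page clicks and an estimated AI Overview trigger share from a rank-tracking tool. The pages and trigger shares below are hypothetical; only the 38% reduction comes from the study.

```python
# Sketch: estimate structural click exposure from AI Overviews per page.
# The trigger shares and click figures are illustrative assumptions;
# real values would come from Search Console and rank-tracking exports.

AIO_CLICK_REDUCTION = 0.38  # from the randomised field experiment

pages = [
    # (page, monthly organic clicks, share of clicks from AIO-triggering queries)
    ("/guides/crm-comparison", 12_000, 0.70),
    ("/blog/what-is-a-crm", 8_500, 0.55),
    ("/pricing", 3_200, 0.10),
]

def clicks_at_risk(clicks: int, aio_share: float) -> float:
    """Expected monthly click loss if AIO-triggered queries shed 38% of clicks."""
    return clicks * aio_share * AIO_CLICK_REDUCTION

# Rank pages by exposure so remediation effort goes where the risk is concentrated
for page, clicks, share in sorted(pages, key=lambda p: -clicks_at_risk(p[1], p[2])):
    print(f"{page}: ~{clicks_at_risk(clicks, share):,.0f} clicks/month at risk")
```

Summing the per-page figures gives the site-level number to put in the traffic model.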

Read the full article

Semrush

We Analyzed Billions of Web Visits: How AI Is Reshaping Traffic Channels

Semrush analysed web visit data across more than 50,000 websites and 17 industries throughout 2025. The AI traffic headline is deceptively impressive: AI-driven visits grew 66% over the year, from 462 million to 767 million monthly visits. But the context matters. AI still accounts for just 0.14% of total web traffic. Google AI Mode grew from 1,600 to 38.2 million monthly visits over the period but represents only 0.01% of all traffic. The absolute numbers are growing fast; the share is still tiny compared with organic search, direct, and social.

The more significant finding is on the organic side. Organic search declined in 13 of the 17 industries analysed, with growth limited to visually driven sectors: beauty, apparel, food, and retail. Across all industries, organic search maintained a 16% share of total web traffic, but that share is under sustained pressure. The headline AI growth rate and the organic decline are happening at the same time, and the AI gains are not yet large enough to compensate for the organic losses in most verticals. Retail saw the largest AI traffic increase at 343%, but even there the absolute volumes remain a fraction of organic.

Key points

  • AI-driven web traffic grew 66% in 2025, from 462 million to 767 million monthly visits, across 50,000-plus websites
  • AI represents just 0.14% of total web traffic despite the headline growth rate
  • Google AI Mode grew from 1,600 to 38.2 million monthly visits but is still only 0.01% of total traffic
  • Organic search declined in 13 of 17 industries, with growth only in beauty, apparel, food, and retail
  • Retail (+343%), apparel (+319%), and food (+253%) saw the largest AI traffic increases by industry
  • Organic search holds 16% of total web traffic but faces structural pressure across most verticals

Key takeaway

AI traffic is growing faster than any other channel but starting from a low enough base that it does not compensate for organic search decline in most verticals. The practical response is to treat AI traffic as an emerging supplement, not a replacement, and to focus diversification efforts on direct traffic, email, and social as well as AI visibility. Industries seeing meaningful AI traffic gains (retail, apparel, food) should be investing in structured data and product schema now, while the channel is still early.

Also worth considering

The percentage growth figure for AI traffic is easy to misread as a compensation for organic decline. The maths tells a different story. If AI represents 0.14% of traffic and grows 66%, you gain roughly 0.09 percentage points. If organic holds 16% and falls 2%, you lose 0.32 percentage points. Reporting AI traffic growth rates without the absolute volume context overstates what the channel is contributing relative to what organic is losing.
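
The arithmetic in that paragraph, made explicit. The shares and rates are taken from the Semrush figures above.

```python
# Percentage-point contributions: headline growth rates vs. absolute impact.
ai_share = 0.0014        # AI traffic: 0.14% of total web traffic
ai_growth = 0.66         # 66% relative growth over 2025
organic_share = 0.16     # organic search: 16% of total web traffic
organic_decline = 0.02   # illustrative 2% relative decline

ai_gain_pp = ai_share * ai_growth * 100              # percentage points gained
organic_loss_pp = organic_share * organic_decline * 100  # percentage points lost

print(f"AI gain: +{ai_gain_pp:.2f}pp, organic loss: -{organic_loss_pp:.2f}pp")
# The ~0.09pp AI gain does not come close to offsetting the ~0.32pp organic loss
```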

What I'm testing

Looking at channel mix data across different content types to see whether the organic decline pattern in this study shows up at the same rate across categories. The Semrush industry breakdown suggests the answer varies significantly by vertical, and understanding that at the page or topic level is more actionable than looking at blended site-wide traffic figures.

Read the full article

Foundation Marketing / AirOps

We Analyzed 57 Million AI Citations. Brands Owned 10% of Them.

Foundation Marketing and AirOps tracked 5.1 million AI responses and 57.2 million individual citations across ChatGPT, Gemini, Perplexity, Google AI Mode, and Google AI Overviews over 60 days from December 2025 to February 2026. The sample covered 50 brands across seven B2B verticals using 100 prompts per brand, split 65% unbranded discovery and 35% branded validation. The headline finding: brands own just 10% of citations in AI responses about them. The other 90% point to third-party sources: Reddit, YouTube, review sites, comparison platforms, and industry forums.

The gap becomes more pronounced at the discovery stage. When looking only at category-level prompts, the kind buyers use when building a shortlist before they know which brand they want, brand-owned domains account for just 2.2% of citations. Third-party sources took 97.8%. Reddit alone made up 20.8% of all external citations overall and 30.9% in unbranded discovery queries. The vertical breakdown adds texture: fintech brands were most cited via comparison sites, DevOps brands via developer platforms, and healthcare brands via medical authorities. Each sector has a different citation ecosystem, and a GEO strategy that ignores the specific platforms AI relies on for a given vertical is working with the wrong map.

Key points

  • Brands own just 10% of AI citations across 57.2 million citations and 5.1 million AI responses
  • At unbranded discovery queries, brand-owned citations fall to 2.2%; third-party sources account for 97.8%
  • Reddit accounts for 20.8% of all external citations and 30.9% in unbranded discovery queries
  • Study covered 50 brands, 7 B2B verticals, 5 AI platforms over 60 days (December 2025 to February 2026)
  • Citation ecosystems vary by vertical: fintech relies on comparison sites, DevOps on developer platforms, healthcare on medical authorities
  • Branded queries show a different picture: 77.6% of branded AI citations include the brand's own domain

Key takeaway

If AI systems discover brands through third-party sources 90% of the time, the practical work is not just optimising your own site but building a presence on the platforms AI systems actually cite for your vertical. For most B2B brands, that means prioritising Reddit contributions, G2 and Capterra reviews, developer documentation on GitHub, and editorial coverage in industry publications, before worrying about schema markup on the homepage. A citation source audit (understanding which third-party platforms AI relies on for your category) is the analysis that should come first.

Also worth considering

The 2.2% brand-owned citation rate at discovery is a useful number for setting GEO expectations with stakeholders. If a prospect is in the shortlist-building phase and AI is generating their options, your owned content is almost never part of what gets cited. That session is won or lost on your Reddit presence, your review site scores, and your editorial mentions. That changes where the investment needs to go.

What I'm testing

Auditing third-party citation sources for specific categories to find which platforms AI systems actually cite in those spaces, then comparing my presence on those platforms against what competitors have built there. The question is: which platforms am I underrepresented on relative to the competition, and which of those does AI rely on most for this vertical? That gap analysis is more actionable than generic advice to "be on Reddit."
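
A sketch of that gap analysis. The platforms, citation shares, and presence scores below are all hypothetical; real inputs would come from an AI citation tracker for the vertical and a manual audit of each platform.

```python
# Rank platforms where we trail competitors, weighted by how heavily AI
# systems cite each platform for the category. All figures are invented.

# Share of AI citations each platform earns for the category (hypothetical)
platform_citation_share = {"reddit": 0.31, "g2": 0.12, "youtube": 0.09, "github": 0.05}

# Rough presence scores, 0-1, for us vs. an average competitor (hypothetical)
our_presence = {"reddit": 0.1, "g2": 0.6, "youtube": 0.3, "github": 0.7}
competitor_presence = {"reddit": 0.5, "g2": 0.7, "youtube": 0.2, "github": 0.4}

def priority(platform: str) -> float:
    """Weight the presence gap by how much AI actually cites the platform."""
    gap = competitor_presence[platform] - our_presence[platform]
    return max(gap, 0.0) * platform_citation_share[platform]

ranked = sorted(platform_citation_share, key=priority, reverse=True)
print(ranked)  # platforms where we trail competitors on a high-citation source come first
```

The weighting is the point: a large presence gap on a platform AI rarely cites matters less than a small gap on one it cites constantly.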

Read the full article

Ben Tannenbaum / Search Engine Land

What Blog Posts Should You Write to Be Mentioned in ChatGPT?

Ben Tannenbaum, CEO of Aiso, tested 90 prompts across the beauty, legaltech, and IT sectors to understand what type of content ChatGPT retrieves when answering commercial questions. The headline finding: commercial prompts triggered ChatGPT to generate additional downstream queries, known as fan-out, in 78.3% of cases. Informational prompts did so just 3.1% of the time. Of the 20 fan-out triggers recorded across the study, 18 were commercial in nature and 2 were informational. The fan-out behaviour matters because it determines what additional content ChatGPT searches for to complete an answer, and that content becomes part of the citation set.

The practical implication is that content designed to rank on informational keywords is poorly positioned for ChatGPT citations on commercial questions. What performs better, based on the study's directional findings, is content that answers the questions a buyer asks at the point of comparison: best-of lists, vendor comparison guides, feature-driven category explainers, evaluation FAQs, and alternative product pages. These formats sit at the intersection of information and decision, which is where ChatGPT fan-out most often lands. The study acknowledges its sample is small and directional rather than definitive, but the commercial-prompts-trigger-commercial-fan-out pattern is consistent across all three sectors tested.

Key points

  • Commercial prompts triggered ChatGPT fan-out in 78.3% of cases; informational prompts triggered it 3.1% of the time
  • 18 of 20 fan-out triggers were commercial queries; 2 were informational
  • Content formats that perform best: best-of lists, vendor comparison guides, feature explainers, evaluation FAQs, alternative product pages
  • Study covered 90 prompts across beauty, legaltech, and IT verticals
  • Fan-out determines what supplementary content ChatGPT retrieves to complete commercial answers
  • Findings are directional: based on observed ChatGPT behaviour rather than architectural proof of retrieval weighting

Key takeaway

If you are trying to appear in ChatGPT responses to commercial queries, the content to prioritise is not more top-of-funnel educational posts but decision-support formats: comparison pages, shortlist articles, feature guides, and evaluation content. These are the formats ChatGPT reaches for when a user is asking what to buy or which vendor to choose. The content strategy shift is from "what is X" to "what is the best X for Y situation."

Also worth considering

The 78% versus 3% gap between commercial and informational fan-out is consistent with what the Foundation 57M citation study found: AI systems are most active at the discovery and comparison stages, not the awareness stage. Educational content still has value for organic search and for building topical authority, but the AI citation opportunity is concentrated in formats that serve buyers at the point of choice. Both studies point to the same content investment.

What I'm testing

Auditing existing content to identify which pages are structured as decision-support formats (comparison guides, feature breakdowns, shortlist articles) versus pure educational posts. The question is whether the decision-support pages in a content mix attract more AI citation traffic than awareness-stage content, and whether that difference is large enough to justify shifting the production mix toward more commercial formats.
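
A crude first pass at that audit can be scripted by flagging titles that match decision-support patterns versus awareness-stage patterns. The regexes below are illustrative assumptions, not a validated taxonomy; manual review would still be needed for anything they miss.

```python
# First-pass content format classifier based on title patterns.
# Pattern lists are hypothetical starting points, not an exhaustive taxonomy.
import re

DECISION_PATTERNS = re.compile(
    r"\b(best|vs\.?|versus|alternatives?|comparison|top \d+|review)\b", re.I)
AWARENESS_PATTERNS = re.compile(r"\b(what is|how does|guide to|introduction)\b", re.I)

def classify(title: str) -> str:
    """Bucket a page title into a rough content-format category."""
    if DECISION_PATTERNS.search(title):
        return "decision-support"
    if AWARENESS_PATTERNS.search(title):
        return "awareness"
    return "other"

titles = [
    "Best CRM Software for Small Teams",
    "What Is a CRM and How Does It Work?",
    "HubSpot vs Salesforce: Feature Comparison",
]
print([classify(t) for t in titles])
```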

Read the full article

Yevheniia Khromova / SE Ranking

How to Find and Choose the Right Prompts to Track for AI Search Visibility

SE Ranking's Yevheniia Khromova sets out a five-step framework for deciding which prompts to track when monitoring AI search visibility. The core challenge the article addresses is that AI answer sets are unstable: only 35% of domains repeat consistently in AI responses, and two-thirds of domains that appear in one run are gone from the next. That makes prompt selection the most important decision in an AI tracking setup, because tracking the wrong prompts produces noise rather than signal. The five steps are: identify your prompt categories (informational, comparative, instructional, brand-specific, transactional), map them to buyer journey stages, source candidate prompts using seven methods, filter by competitive relevance and business intent, then set tracking volume and frequency.

The sourcing methods are the most practical section of the guide. Khromova recommends converting existing SEO keywords into conversational form, mining Google People Also Ask, asking LLMs directly what users in your category search for, analysing Reddit and forum threads for natural language questions, reviewing paid search data, reverse-engineering competitor websites, and using AI tracking tool suggestions as a starting point. The recommended starting range is 10 to 20 awareness prompts, 20 to 30 consideration prompts, and 5 to 10 brand evaluation prompts, tracked across two to three AI models for at least 30 days. The article's central framing is worth carrying forward: prompt tracking is a brand intelligence tool, not a performance marketing metric. The volume data is directional; the patterns over time are what matter.
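
The recommended volumes translate into a simple tracking-plan structure. The category-to-stage mapping and the model list below are assumptions based on the article's framework, not prescriptions from it.

```python
# SE Ranking's suggested prompt volumes as a tracking-plan data structure.
# The stage-to-category mapping is an assumption; adjust per vertical.

tracking_plan = {
    "awareness":        {"categories": ["informational", "instructional"], "prompts": (10, 20)},
    "consideration":    {"categories": ["comparative", "transactional"],   "prompts": (20, 30)},
    "brand_evaluation": {"categories": ["brand-specific"],                 "prompts": (5, 10)},
}
MODELS = ["chatgpt", "gemini", "perplexity"]  # two to three models recommended
MIN_TRACKING_DAYS = 30  # minimum window for a reliable signal

total_min = sum(stage["prompts"][0] for stage in tracking_plan.values())
total_max = sum(stage["prompts"][1] for stage in tracking_plan.values())
print(f"{total_min}-{total_max} prompts x {len(MODELS)} models x {MIN_TRACKING_DAYS}+ days")
```

That works out to roughly 35 to 60 prompts in total, which keeps the tracking run small enough to review by hand each cycle.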

Key points

  • Only 35% of domains repeat consistently in AI answers; two-thirds disappear between tracking runs
  • Five prompt categories to cover: informational, comparative, instructional, brand-specific, transactional
  • Seven sourcing methods: keyword conversion, People Also Ask, LLM query suggestions, Reddit and forums, paid search data, competitor reverse-engineering, tool recommendations
  • Starting range: 10-20 awareness prompts, 20-30 consideration prompts, 5-10 brand evaluation prompts
  • Track across two to three AI models for at least 30 days for reliable signal
  • Frame prompt tracking as brand intelligence, not a performance metric measured by citation volume

Key takeaway

Start your prompt tracking set with the consideration and comparison categories, not awareness. These prompts produce the most stable and commercially relevant citation data, and they align with where AI systems do the most active retrieval work. The 30-day minimum is also worth taking seriously: short tracking windows produce misleading data because AI answer sets fluctuate significantly from week to week.

Also worth considering

The 35% domain repeat rate is a useful instability benchmark. If two-thirds of domains that appear in an AI answer today are gone from the same query next week, treating a single citation as evidence of sustained visibility overstates what it represents. Track citation trends across months, not weeks, and treat repeated appearances rather than single appearances as the reliable signal worth reporting on.
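
Computing that repeat rate from two tracking runs is straightforward. The domain sets below are invented for illustration.

```python
# Domain repeat rate between two tracking runs for the same prompt set.
# A value near the 35% benchmark indicates typical AI answer-set churn.

run_1 = {"reddit.com", "g2.com", "vendor-a.com", "youtube.com", "capterra.com", "vendor-b.com"}
run_2 = {"reddit.com", "g2.com", "wikipedia.org", "vendor-c.com", "youtube.com", "forbes.com"}

def repeat_rate(first: set[str], second: set[str]) -> float:
    """Share of run-1 domains that also appear in run 2."""
    if not first:
        return 0.0
    return len(first & second) / len(first)

rate = repeat_rate(run_1, run_2)
print(f"{rate:.0%} of domains repeated between runs")
```

Tracked over months, the trend in this rate per prompt separates stable, winnable queries from ones where the answer set is effectively random.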

What I'm testing

Setting up a structured prompt tracking test across three AI models for a specific product category to compare how consistent citations are across models and over time. The goal is to identify which prompts produce stable citation sets, since those are the queries where appearing consistently is achievable, and which produce such high variance that tracking them adds little signal to a visibility report.

Read the full article

That is issue #010. The AI Overviews field experiment is the finding I keep coming back to. A 38% click reduction with no user benefit is not a grey area or a number that invites debate about methodology. It is a randomised experiment with pre-registered outcomes. Combined with the Foundation study's finding that brands own just 2.2% of AI citations at the discovery stage, the picture is clear: organic traffic is declining from AI Overviews, and the traffic that AI systems do generate mostly flows through third-party platforms that most brands are underinvested in.

The practical response is not to panic but to measure more precisely. The Semrush traffic channel study, the ChatGPT content research, and the SE Ranking prompt tracking framework all point to the same conclusion: you need to know which queries trigger AI Overviews for your pages, which third-party platforms AI relies on for your vertical, and whether your content mix includes the decision-support formats that AI systems actually retrieve. Those three inputs give you a realistic picture of where the risk is and where the opportunity is, which is a better starting point than either dismissing AI search or treating citation growth as the primary metric.

Free Consultation

Let's Talk


Tell me what you're working on. I'll give you an honest assessment and we'll explore if working together makes sense — no hard sell, just a free, no-obligation call to explore what's possible for your business.