AEO, AI Mode Self-Citation & ChatGPT's Invisible Queries

Search Digest Issue #008, week of 20 Apr 2026

The SE Ranking finding is the one that deserves the most attention this week. Across 68,313 keywords and 1.3 million AI Mode citations, Google now references its own properties in 17.42% of responses. That is up from 5.7% in June 2025, a threefold increase in under ten months. The shift is also structural: the mix has changed from mostly Google Business Profile links to mostly organic SERP links, meaning Google is now routing AI Mode users back into its own search results rather than to direct answers. That changes the citation landscape for everyone competing for AI Mode visibility.

The AEO piece from Google's own engineering director is the most practically useful article of the week. Addy Osmani sets out specific token limits for AI-agent-ready content: quick starts under 15,000 tokens, conceptual guides under 20,000, and API references under 25,000, with answers front-loaded within the first 500 tokens. It is the first time anyone at Google has put numbers to this. The March 2026 core update data rounds out the picture on the rankings side: 79.5% of top-3 positions shifted, nearly 1 in 4 top-10 pages fell out of the top 100, and official institutions and established brands were the clear winners.

Kevin Indig's finding about ChatGPT's invisible query space is worth sitting with. Between 65% and 85% of prompts sent to ChatGPT have no matching keyword in Semrush's database, meaning traditional keyword research is missing the majority of the conversational demand that AI systems are actually answering. Add Google's worldwide rollout of AI Mode restaurant booking and the picture becomes clear: the commercial layer of AI search is expanding fast, and the measurement tools have not caught up.

SE Ranking

Is AI Mode Now Dominated by Google's Self-Referencing Links?

SE Ranking analysed 68,313 keywords across 20 niches and collected 1,321,398 citations from Google's AI Mode. Google.com is the single most-cited domain at 17.42% of all responses, more than the next six domains combined. That share has tripled since June 2025, when it sat at 5.7%. The composition has also shifted: previously, 97.9% of Google's self-citations pointed to Google Business Profiles. Now only 36.1% do, while 59% point to organic search result pages. Google is using AI Mode to send users back into its own search funnel, not just to local listings.

The niche data makes the pattern even clearer. Google ranks as the top-cited source in 19 of 20 niches analysed. The highest self-citation rates appear in Travel at 53.18%, Entertainment at 48.74%, and Real Estate at 30.54%. For anyone doing GEO work in those verticals, Google's own properties are now a significant competitor for AI Mode citation slots. The study does not cover what Google will cite if its own properties are unavailable, but the direction of travel is clear: Google is building AI Mode into a closed loop that favours its own ecosystem.

Key points

  • Google.com accounts for 17.42% of all AI Mode citations, up from 5.7% in June 2025, a 3x increase in under 10 months
  • Google is cited more than the next six domains combined across the full dataset of 1.3 million citations
  • Citation type shifted from Google Business Profiles (97.9%) to organic SERPs (59%), routing users back into traditional search
  • Google is the top-cited source in 19 of 20 niches, with Travel (53.18%), Entertainment (48.74%), and Real Estate (30.54%) highest
  • 68,313 keywords analysed across 20 niches using SE Ranking's AI Mode Tracker

Key takeaway

If you are building a GEO strategy for Travel, Entertainment, or Real Estate, you are now competing directly with Google's own properties for AI Mode citation slots. That changes the calculus significantly: optimising for AI citations in those niches is not just about outperforming other publishers, it is about being cited in the slots Google does not fill with its own results. Knowing which niches have lower Google self-citation rates helps you prioritise where the GEO opportunity is actually open.

Also worth considering

The shift from Business Profile citations to organic SERP citations is worth watching closely. If Google is directing AI Mode users back to search result pages, it is preserving the search session and the ad inventory that comes with it. That is a commercial decision as much as a product one, and it suggests AI Mode is being designed to support Google's search business rather than replace it.

What I'm testing

Checking which niches in my own work have the lowest Google self-citation rates in AI Mode, since those are the verticals where external content has the best chance of appearing. Running queries in AI Mode for target topics and recording whether Google properties dominate the citations or whether third-party sources get meaningful slots.
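One lightweight way to run that check: log the citation URLs AI Mode returns for a set of target queries in a niche, then tally each domain's share of the total. A minimal sketch, where the sample URLs are invented and the collection step is whatever your tracking setup produces:

```python
from collections import Counter
from urllib.parse import urlparse

def citation_shares(citation_urls):
    """Share of citations per domain, sorted most-cited first."""
    domains = Counter(
        urlparse(u).netloc.lower().removeprefix("www.") for u in citation_urls
    )
    total = sum(domains.values())
    return {d: n / total for d, n in domains.most_common()}

# Toy sample standing in for citations logged from AI Mode responses.
sample = [
    "https://www.google.com/search?q=hotels+in+lisbon",
    "https://www.google.com/maps/place/some-hotel",
    "https://www.tripadvisor.com/Hotel_Review",
    "https://www.booking.com/hotel/pt/example",
]
shares = citation_shares(sample)
print(f"google.com self-citation share: {shares['google.com']:.0%}")
# prints "google.com self-citation share: 50%"
```

Run per niche, this gives a rough local read on the same metric SE Ranking reports at scale, and shows at a glance whether third-party sources get meaningful slots for your target topics.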

Read the full article

Danny Goodwin / Search Engine Land

Agentic engine optimization: Google AI director outlines new content playbook

Addy Osmani, Director of Engineering at Google Cloud AI, has published a framework for what he calls Agentic Engine Optimisation, or AEO: the practice of structuring content so AI agents can use it rather than just render it. The core constraint is the context window: AI agents operate within fixed token budgets and truncate or skip content that exceeds them. Osmani's specific guidance: quick starts under 15,000 tokens, conceptual guides under 20,000, and API references under 25,000. Answers should be front-loaded within the first 500 tokens, because agents have limited patience for preamble. Discovery files such as llms.txt, skill.md, and AGENTS.md help agents understand what your content covers before parsing it.

One important caveat from the article: Google's John Mueller has said Google Search does not use llms.txt and has discouraged creating separate markdown pages for SEO purposes. The AEO framework is aimed at AI agents that retrieve and act on content, not at improving Google search rankings directly. That distinction matters. The token limits and discovery files Osmani describes are relevant for content that AI coding tools, research agents, and API-browsing bots consume, which is a different audience from Google's web crawler. If your content is used by developers, researchers, or technical AI tools, AEO applies directly. For pure organic search, the framework is less relevant in the short term.
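For orientation, the llms.txt proposal that the discovery-file advice draws on is a plain markdown file at the site root: an H1 title, a one-line blockquote summary, then sections of annotated links. A minimal sketch, with all names and URLs invented:

```text
# Example Docs

> Developer documentation for the Example API: quick start, concepts, and reference.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): install and make a first request
- [Concepts](https://example.com/docs/concepts.md): how requests, auth, and rate limits fit together

## Reference

- [API reference](https://example.com/docs/api.md): endpoints and parameters
```

The point of the file is triage: an agent can read this short index and decide which documents to fetch, instead of parsing every page to find out what the site covers.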

Key points

  • AEO (Agentic Engine Optimisation) is defined as structuring content for AI agents to use, not just for humans to read
  • Token limits: quick starts under 15,000 tokens, conceptual guides under 20,000, API references under 25,000
  • Front-load answers within the first 500 tokens — agents skip content that opens with preamble
  • Discovery files (llms.txt, skill.md, AGENTS.md) tell agents what content is available before they parse it
  • Markdown format is preferred over HTML for AI agent consumption
  • Google's John Mueller has clarified AEO is unrelated to Google Search rankings — this is for AI agents, not web crawlers

Key takeaway

If you produce technical documentation, developer guides, or any content that AI tools or coding agents consume, the AEO token limits are worth applying now. For general SEO content, the immediate ranking impact is minimal, but the principle of front-loading answers within 500 tokens is good practice regardless, both for AI agents and for human readers who scan before they read.

Also worth considering

The fact that Google's own Cloud AI director is publishing an AEO framework while Google's Search team says it does not apply to web rankings shows how fragmented Google's guidance has become. Two parts of the same company are giving practitioners different signals. The safest interpretation is that AEO matters for AI agent retrieval today and may matter for search ranking in the future, but treating it as an SEO priority right now would be getting ahead of what Google Search actually does.

What I'm testing

Auditing a sample of long-form guides to check whether the most important answer or finding appears within the first 500 tokens, roughly the first 375 words. Most content buries the key claim in the body after an introduction. Restructuring to put the answer first and the supporting detail after is the change most likely to improve both AI agent use and human readability.
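That audit can be scripted roughly. The sketch below uses the common ~0.75 words-per-token heuristic rather than a real tokenizer, so treat the word budget as an approximation; the document and answer phrase are invented:

```python
TOKEN_BUDGET = 500
WORDS_PER_TOKEN = 0.75  # rough heuristic; actual token counts vary by model

def answer_within_budget(text: str, answer_phrase: str) -> bool:
    """Does the key answer appear within roughly the first 500 tokens?"""
    word_budget = int(TOKEN_BUDGET * WORDS_PER_TOKEN)  # ~375 words
    head = " ".join(text.split()[:word_budget]).lower()
    return answer_phrase.lower() in head

# Answer-first page: key claim in the opening sentence.
front_loaded = ("Quick answer: set max_tokens to 25000 for API references. "
                + "Background history follows. " * 400)
print(answer_within_budget(front_loaded, "set max_tokens to 25000"))  # True

# Buried-answer page: 800 words of preamble before the claim.
buried = ("Background history. " * 400) + "Quick answer: set max_tokens to 25000."
print(answer_within_budget(buried, "set max_tokens to 25000"))  # False
```

For a tighter count you could swap the heuristic for an actual tokenizer library, but for auditing "is the answer in the opening or the body?" the word-based approximation is usually enough.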

Read the full article

Search Engine Journal / SISTRIX

Google March Core Update Left 4 Losers For Every Winner In Germany

SISTRIX analysed 1,371 domains that showed significant visibility changes during the March 2026 core update, applying a strict filter: 52 weeks of Visibility Index history, 30 days of daily data, and visual confirmation of each domain's trend. Of the 166 domains that passed the filter, 134 lost visibility and 32 gained it, a ratio of roughly 4 losers to every 1 winner. The pattern is consistent with a broader data point from Search Engine Land: 79.5% of top-3 positions changed during the update, compared to 66.8% after December's core update, and approximately 24.1% of top-10 pages dropped out of the top 100 entirely, versus 14.7% after December.

The winners were almost uniformly official sources and established brands. Audible.de gained 172% visibility. Four German airport websites each grew between 8% and 22%. Government websites saw gains of 5% to 8%. The losers show a different profile: online shops (39 of 134 losers, spread across fashion, electronics, and gardening), language and education tools, recipe portals, and user-generated content platforms. The pattern matches what Aleyda Solis described in the US data: "visibility shifted from intermediary sites toward stronger destination sources." Sites that aggregate or facilitate are losing ground to sites that are the primary destination.

Key points

  • 134 losers and 32 winners in SISTRIX's filtered German domain set, a 4:1 ratio
  • 79.5% of top-3 positions changed in the March update, compared to 66.8% after December 2025
  • 24.1% of top-10 pages dropped out of the top 100, up from 14.7% after December
  • Winners: official institutions, established brands, primary destination sites (Audible.de +172%, airport websites +8–22%)
  • Losers: intermediary sites, aggregators, online shops, user-generated content platforms, recipe portals
  • Update ran March 27 to April 8, 2026, taking 12 days and 4 hours

Key takeaway

The March update continues the pattern from the December and February updates: Google is rewarding sites that are the final destination for a query and penalising sites that sit between the user and the primary source. If your site aggregates, compares, or facilitates rather than being the authoritative source on a topic, this update is a signal to look at your positioning. Being the destination, not the intermediary, is the consistent winner across the last three core updates.

Also worth considering

The 4:1 loser-to-winner ratio is a useful number for setting expectations. Core updates are not symmetric. Most sites that experience a meaningful change during a core update lose rather than gain, which means broad volatility is not a neutral event for a portfolio of sites. The practical implication is that recovery from a core update loss is the more common problem to plan for, not capitalising on gains.

What I'm testing

Looking at which pages on affected sites are classified as intermediary pages, ones that primarily link out, compare options, or aggregate content from elsewhere, versus pages that are themselves the primary source of information on a topic. The distinction between "destination page" and "intermediary page" is a useful lens for prioritising where to focus editorial effort after a core update.

Read the full article

Kevin Indig / Growth Memo

Growth Intelligence Brief #17

Kevin Indig's latest brief looks at a Semrush analysis of 17 months of ChatGPT usage data. The headline finding: between 65% and 85% of all ChatGPT prompts have no matching keyword in Semrush's database. They are entirely new discovery pathways, mostly long, conversational, context-rich queries that standard keyword research tools have never captured. In the same period, ChatGPT outbound referral traffic grew 206% year on year, from January 2025 to January 2026. Traffic that flows from ChatGPT back to Google has also grown, from 14% to over 21% of ChatGPT sessions, suggesting users treat ChatGPT as a research entry point before returning to search.

Indig's argument is that keyword optimisation alone gives an incomplete picture of how users are discovering content through AI. If the majority of AI-originated queries are invisible to the tools SEOs use to understand demand, then content strategy built entirely on keyword data is missing a substantial portion of what users are actually asking. The practical challenge is that there is no keyword research tool equivalent for ChatGPT prompts: the signal comes from AI citation tracking and referral analytics rather than keyword volume data. That requires a different set of tools and a different way of thinking about content coverage.

Key points

  • 65–85% of ChatGPT prompts have no matching keyword in Semrush's database, based on 17 months of usage data
  • ChatGPT outbound referral traffic grew 206% year on year (January 2025 to January 2026)
  • Traffic returning to Google from ChatGPT grew from 14% to over 21%, showing AI and search as complementary, not competing
  • The invisible majority of prompts tend to be long, conversational, and context-rich, the format that favours authoritative, in-depth content
  • Standard keyword research provides no coverage of this demand: it must be tracked through AI citation tools and referral analytics

Key takeaway

Set up AI citation monitoring alongside your keyword tracking. Tools like Ahrefs' AI Overviews tracker and Semrush's AI visibility reports give partial coverage of what AI systems are citing, and GA4 referral data from ChatGPT domains tells you which pages are already receiving that traffic. That data is more informative for understanding AI-era demand than keyword volume alone.
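A simple first step on the referral-analytics side is bucketing hits by referrer domain, whether from server logs or an analytics export. The domain lists below are assumptions to adapt to what actually shows up in your reports, and the sample hits are invented:

```python
from collections import Counter
from urllib.parse import urlparse

# Assumed referrer lists; extend from your own analytics data.
AI_REFERRERS = {"chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com"}
SEARCH_REFERRERS = {"google.com", "bing.com", "duckduckgo.com"}

def channel(referrer_url: str) -> str:
    """Classify a referrer URL as an AI, search, or other traffic source."""
    host = urlparse(referrer_url).netloc.lower().removeprefix("www.")
    if host in AI_REFERRERS:
        return "ai"
    if host in SEARCH_REFERRERS:
        return "search"
    return "other"

hits = [
    "https://chatgpt.com/",
    "https://www.google.com/",
    "https://www.perplexity.ai/search",
    "https://news.ycombinator.com/item",
]
print(Counter(channel(h) for h in hits))
# Counter({'ai': 2, 'search': 1, 'other': 1})
```

Tracking the two buckets side by side is what makes the ChatGPT-to-Google handoff pattern visible in your own data rather than only in aggregate studies.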

Also worth considering

The 206% growth in ChatGPT referral traffic looks impressive, but the absolute volume is still small for most sites. The more important signal is the 14% to 21% increase in ChatGPT-to-Google handoffs, because it confirms that AI and search are operating as a connected pipeline rather than as alternatives. If your site is being cited in ChatGPT, you are likely also benefiting indirectly from search sessions that start in AI and complete in Google. That is a reason to track both channels together, not separately.

What I'm testing

Pulling together sales call recordings and support conversations to identify the specific questions customers ask that are not represented in any keyword tool. Those questions are exactly the type of long, context-rich prompts that show up in ChatGPT but not in search data. Publishing direct answers to those questions, rather than optimising for related keywords, is the content move most likely to capture AI-originated demand that standard research misses.

Read the full article

Semrush

Google rolls out worldwide agentic restaurant booking via AI Mode

Google has expanded its agentic restaurant booking feature in AI Mode to a global audience, removing both the geographic restriction that previously limited it to US users and the Google AI Ultra subscription requirement. The feature, which launched in August 2025 for a small subset of US subscribers, now allows any user to complete a restaurant reservation directly through an AI Mode response via OpenTable and SevenRooms integrations, without visiting the restaurant's website or the booking platform directly. Rose Yao, Google's Vice President of Product Management, announced the global rollout on April 10, 2026.

The commercial implication is direct. Restaurants that have not integrated with Reserve with Google are now invisible to users who complete their booking through AI Mode. The session ends before those users ever visit an external site. This is AI Mode functioning as a conversion layer that sits on top of the open web, handling the transaction without a click-through. It is the first large-scale, publicly available example of AI Mode completing a commercial action on a user's behalf rather than just presenting information. The pattern is likely to expand to other verticals, with flights, hotels, and ticketing being the obvious next candidates.

Key points

  • Google's AI Mode agentic restaurant booking is now available globally, with no subscription required
  • Feature integrates with OpenTable and SevenRooms, completing reservations without the user leaving AI Mode
  • Previously limited to US subscribers with Google AI Ultra since August 2025
  • Restaurants without Reserve with Google integration are bypassed entirely in this booking flow
  • This is the first widely available example of AI Mode acting as a commercial transaction layer, not just an answer layer
  • Expansion to other booking verticals, such as flights, hotels, and events, is the likely next step

Key takeaway

If you work with businesses that take bookings, check whether they are integrated with Reserve with Google. The technical barrier is low, but the window for being present in this flow before it becomes standard user behaviour is closing. Businesses that integrate early will have the data and the placement; those that wait will be retrofitting after the habit is established.

Also worth considering

The broader signal here is that AI Mode is being used to absorb commercial intent, not just informational queries. If Google follows the restaurant booking model in other booking categories, the share of commercial queries that result in zero-click conversions will rise significantly. That changes how businesses should think about measuring AI Mode performance: it is not just a source of traffic or citations, it is becoming a source of direct transactions that are invisible to standard analytics unless you are tracking Reserve with Google data.

What I'm testing

Checking whether reservation and booking pages on client sites are eligible for Reserve with Google integration, and whether the structured data needed to surface in AI Mode booking flows is in place. The schema requirements for agentic booking eligibility are worth understanding now, before this pattern extends beyond restaurants.
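One piece of that check is scriptable: scanning a page's JSON-LD for restaurant markup that declares reservations. Note the caveat built into this sketch: acceptsReservations is a real schema.org property on FoodEstablishment types, but whether Google's booking flow reads it is an assumption to verify against the Reserve with Google documentation, and the sample page is invented.

```python
import json
import re

# Matches <script type="application/ld+json"> blocks and captures their contents.
JSONLD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def restaurant_reservation_markup(html: str) -> bool:
    """True if any JSON-LD block declares a Restaurant/FoodEstablishment
    with a non-false acceptsReservations value."""
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed JSON-LD rather than failing the scan
        items = data if isinstance(data, list) else [data]
        for item in items:
            if (item.get("@type") in ("Restaurant", "FoodEstablishment")
                    and item.get("acceptsReservations") not in (None, False, "False")):
                return True
    return False

page = '''<html><head><script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Restaurant",
 "name": "Example Bistro", "acceptsReservations": "True"}
</script></head></html>'''
print(restaurant_reservation_markup(page))  # True
```

This only confirms the markup exists; actual booking-flow eligibility runs through the Reserve with Google integration itself, which is the part worth auditing with the platform's own documentation.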

Read the full article

That is issue #008. The SE Ranking AI Mode self-citation data is the finding I keep coming back to. A 3x increase in under ten months, moving from 5.7% to 17.42%, with the citation type shifting from Business Profiles to organic SERPs, is a meaningful structural change in how AI Mode directs user attention. If you are doing GEO work, knowing which niches Google fills with its own citations and which it leaves open to third parties is now a prerequisite for setting realistic expectations about what is achievable.

The AEO framework and the ChatGPT invisible query finding both point in the same direction: content optimisation for AI consumption requires different inputs and different metrics than traditional SEO. Token limits, front-loaded answers, and AI citation tracking are the new additions to the toolkit. None of them replaces the fundamentals; the core update data confirms that being a primary destination source still matters as much as it ever did. The new tactics sit on top of a healthy organic foundation, not instead of one.
