GSC Filter, AI Mode Ads & What AI Content Really Does

Search Digest Issue #005 — week of 17 Mar 2026, GSC filter, AI Mode advertising and content performance data

Two genuinely useful tool updates this week alongside some data that changes how you should be thinking about AI content strategy. The GSC non-branded query filter is one of those features that SEOs have been asking for long enough that it almost did not feel real when it landed. Brand query inflation has been distorting performance reporting for years — now there is a native way to strip it out.

The AI Mode advertising story is the one with longer-term consequences. Andrew Goodman's piece lays out clearly what Google's structural advantage in monetising AI search looks like compared to OpenAI, which is still in the early stages of building the ad infrastructure Google has had for two decades, and Perplexity, which has declined advertising altogether. That competitive gap matters for understanding where the bulk of AI search traffic is likely to end up.

The 16-month AI content experiment from SE Ranking is the most honest assessment of what AI-generated content actually does in search over time. The early numbers look promising — then the rankings collapse. The Wix citation study rounds out the week with clear, format-level data on what types of content AI systems prefer to cite. Taken together, this week's reading is less about theory and more about how to measure what is actually working.

Chris Long / LinkedIn

Google Search Console Now Has a Non-Branded Queries Filter — Here's How to Use It

Chris Long flagged this one in his newsletter this week, and it is the kind of update that sounds minor until you think about what it actually unlocks. Google Search Console has quietly rolled out a "Branded queries" and "Non-branded queries" toggle in the Performance report filter — under Performance, Search Results, Add Filter, Query. You can now remove brand queries from your data natively, without building custom regex filters or maintaining a list of brand terms manually.

For most sites, brand traffic inflates click-through rate, average position, and impression data in ways that make non-brand performance genuinely difficult to assess from the standard view. Anyone who has tried to build a true non-brand performance baseline by excluding terms one by one knows how much overhead that process involves. The new toggle makes it a one-click operation. The one caveat worth noting: the filter data currently only goes back to late February 2026, so historical non-brand analysis still requires the workaround approach for older data.
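For the older data that predates the native filter, the workaround still means splitting queries yourself. A minimal sketch of that approach, assuming you have exported query rows from GSC (the brand terms, misspellings, and sample data below are hypothetical placeholders for your own list):

```python
import re

# Hypothetical brand terms and common misspellings — swap in your own.
BRAND_PATTERN = re.compile(r"\b(acme|ac me|akme)\b", re.IGNORECASE)

def split_brand(rows):
    """Split exported GSC query rows into brand and non-brand buckets."""
    brand, non_brand = [], []
    for row in rows:
        (brand if BRAND_PATTERN.search(row["query"]) else non_brand).append(row)
    return brand, non_brand

def ctr(rows):
    """Aggregate CTR (total clicks / total impressions) for a bucket."""
    clicks = sum(r["clicks"] for r in rows)
    impressions = sum(r["impressions"] for r in rows)
    return clicks / impressions if impressions else 0.0

# Hypothetical export rows — in practice, pull these from a GSC CSV export.
rows = [
    {"query": "acme shoes", "clicks": 120, "impressions": 400},
    {"query": "best running shoes", "clicks": 10, "impressions": 1000},
    {"query": "akme store hours", "clicks": 50, "impressions": 100},
    {"query": "how to clean trainers", "clicks": 5, "impressions": 500},
]

brand, non_brand = split_brand(rows)
print(f"brand CTR: {ctr(brand):.1%}, non-brand CTR: {ctr(non_brand):.1%}")
```

The gap between the two CTR figures is exactly the distortion the new native filter removes: brand queries convert at a rate that swamps the non-brand signal when the two are blended.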

Key points

  • New filter lives at: Performance, Search Results, Add Filter, Query — with "Branded queries" and "Non-branded queries" as toggle options
  • Now rolling out to Search Console properties — not universally live yet, but appearing for the majority of accounts
  • Data only goes back to late February 2026 — historical non-brand analysis before that date still requires manual filtering
  • Removes the overhead of maintaining a custom brand query exclusion list, which becomes unmanageable as brand variants and misspellings accumulate
  • Non-brand CTR, average position, and impression data can now be tracked cleanly in the native interface without a spreadsheet workaround

Key takeaway

Set a non-branded baseline report now and save it as a custom view. As the historical data window extends, this becomes increasingly useful for trend analysis. If your overall GSC performance looked stable but non-brand was actually declining — masked by brand query growth — you will see it clearly for the first time.

Also worth considering

Non-brand performance is the better leading indicator of SEO health for most sites. Brand queries reflect marketing investment and brand equity rather than SEO effectiveness. If you are reporting on organic performance to stakeholders, switching to a non-branded view for core performance metrics gives a more accurate picture of what your SEO work is actually contributing.

What I'm testing

Running the non-branded filter on several sites to establish a baseline from late February forward. Particularly interested in whether non-brand performance lines up with or differs from the total-traffic view on sites that have had significant brand activity in recent months.

Read the full post

Andrew Goodman / Search Engine Land

AI Mode Is Google's Next Ads Engine — and It Already Knows How to Monetise It

Goodman's piece does not make a sensational argument. It makes a structural one. Google's advantage in monetising AI search is not just that it has AI Mode — it is that it has 25 years of ad infrastructure, deep advertiser relationships, mature bidding systems, and a Quality Score framework that its competitors are years away from matching. OpenAI is currently partnering with Criteo and The Trade Desk to build out ad capabilities from scratch. Perplexity has rejected advertising entirely. Google is iterating on top of something that already works at scale.

The five areas Goodman flags for advertisers to watch — monetisation extent, format diversity, campaign control, reporting transparency, and funnel positioning — give you a practical lens for thinking about where AI Mode advertising is currently limited and where it is likely to evolve. Light ad density, limited format options, and opaque reporting are the current state. Those gaps will narrow in the same direction they always do: toward more options, more control, and more data.

Key points

  • Google enters AI search monetisation with mature ad systems, deep advertiser adoption, and two decades of bidding optimisation that rivals are building from zero
  • OpenAI is partnering with Criteo and The Trade Desk for ad infrastructure; Perplexity has declined advertising entirely — leaving Google structurally dominant
  • Current AI Mode ad formats: text and Shopping ads; more format diversity expected as the surface matures
  • Limited campaign-level control and incomplete reporting on AI Mode performance are the two near-term friction points for advertisers
  • Conversational AI may shift customer journeys earlier in the funnel — meaning AI Mode impressions carry awareness value even before the click stage

Key takeaway

Do not wait for perfect reporting before running ads in AI Mode. Early participation builds data and familiarity with the surface before the majority of advertisers arrive. The pattern from every new Google ad format holds here: the teams that learn the mechanics early have an advantage that takes time to lose even after the format becomes standard.

Also worth considering

The Yahoo CEO's comments this week are the useful counterpoint to this piece. Jim Lanzone called AI Mode the biggest threat to publisher web traffic — specifically because it keeps users within Google's ecosystem and reduces outbound referrals to original sources. Organic and paid visibility in AI Mode may increasingly be about brand impression in a zero-click environment rather than direct traffic acquisition.

Read the full article

Danny Goodwin / Search Engine Land

Yahoo CEO: Google AI Mode Is the Biggest Threat to Web Traffic

Jim Lanzone is not a disinterested observer here — Yahoo has a competing search product — but the argument he makes is worth engaging with directly rather than discounting on that basis. His core claim: AI Mode is not just changing how users find content, it is breaking the traffic model that funds the creation of original content in the first place. "Those publishers deserve traffic, and we are not going to have the content if publishers are not healthy."

What I find interesting about Lanzone's position is the comparison he draws to Yahoo's historical dependence on Google. His warning is that embedding your product within a large platform — whether as a publisher hoping for AI citations or as an advertiser relying on AI Mode traffic — repeats the same mistake of ceding control to a system you cannot influence. Yahoo's Scout is designed specifically to send users back to original sources rather than retaining them. Whether that distinction matters to users at scale is the open question.

Key points

  • Yahoo CEO Jim Lanzone identifies Google AI Mode as "the biggest challenge" to publisher traffic — not LLMs generally, but AI Mode specifically
  • His argument: answer engines reduce publisher traffic by providing direct answers, threatening the economic model that pays for original content creation
  • Yahoo's Scout is designed to explicitly route users to original sources — a deliberate differentiation from Google AI Mode's ecosystem-retention approach
  • Lanzone's caution against embedding products within large platforms echoes Yahoo's own history with Google dependency
  • The broader tension: AI search needs quality content to function, but is reducing the traffic that funds quality content production

Key takeaway

The publisher traffic model under AI Mode is genuinely different from traditional search, and planning around it as if clicks will eventually recover is probably the wrong frame. The more productive question is what value you provide at the point of AI citation — even if the click does not follow — and how you build a business model that captures that value rather than depending solely on traffic.

Also worth considering

Lanzone's structural point about platform dependency applies broadly. Whether you are a publisher relying on Google for traffic or an SEO relying on ranking signals you cannot control, the risk profile of deep platform dependency has not changed — only the platform in question has. Diversifying traffic sources remains the most durable response to any individual platform shift.

Read the full article

Bogdan Babiak / SE Ranking

How AI-Generated Content Actually Performs in Google Search: A 16-Month Experiment

This is the most honest long-running assessment of AI content performance I have seen published. SE Ranking tracked 2,000 AI-generated articles across 20 new domains with zero domain authority over 16 months — no human intervention, no backlinks, no author credentials, no editorial input. The results follow a pattern that should make anyone who has sold or bought into "publish AI content at scale and watch the traffic grow" rethink that pitch.

The early numbers are genuinely encouraging. 71% of pages indexed within 36 days. A business-focused site went from 458 to 7,750 impressions in a single month — a 17x increase. A law-focused site went from 19 to 356 impressions. Fresh content appeared to signal site activity to Google in ways that boosted impressions on older pages, not just the new ones. Then month three arrives: only 3% of those pages remained in the top 100 search results. Long-term visibility collapsed without E-E-A-T signals, unique insight, or anything that distinguishes the content from a thousand similar AI-generated pieces.

Key points

  • 71% of AI-generated pages indexed within 36 days — early indexation is not the bottleneck for AI content
  • Initial impressions growth was strong: one site went from 458 to 7,750 impressions in a single month (17x increase)
  • By month three, only 3% of pages remained in the top 100 search results — rankings collapsed without authority signals
  • Publishing fresh AI content on established sites boosted impressions on older pages by up to 19x — site activity signalling may explain early wins
  • Without human editorial input, unique perspectives, or E-E-A-T signals, AI content cannot sustain rankings past the initial indexation phase

Key takeaway

AI content gets you indexed quickly and generates early impressions — but it does not hold rankings without editorial substance behind it. Use AI for research, structuring, and drafting. The human layer — genuine perspective, original data, editorial standards, author credentials — is what converts initial impressions into sustained organic presence.

Also worth considering

The finding that publishing new AI content boosted impressions on older existing pages is genuinely interesting and probably underreported. If your older content library is stagnating, a careful programme of new content — even with significant AI involvement in production — may signal activity to Google in ways that benefit your existing pages. That is a different use case from "publish AI content and expect it to rank independently."

Read the full article

Danny Goodwin / Search Engine Land (Wix Studio Research)

AI Citations Favour Listicles, Articles and Product Pages: 75,000-Answer Study

Wix Studio's AI Search Lab published a study covering 75,000 AI answers and over one million citations across ChatGPT, Google AI Mode, and Perplexity. The format distribution is the most actionable finding: listicles account for 21.9% of all AI citations, articles 16.7%, and product pages 13.7%. Those three formats together account for 52% of all AI citations across the dataset.

What makes this more than a simple "write listicles" conclusion is the query intent layer. Intent — not format, not industry, not AI platform — is the strongest predictor of which content gets cited. Informational queries cite articles 2.7 times more often than other formats. Commercial queries lean heavily toward listicles, which capture 40% of citations on commercial intent searches. Transactional and navigational queries go to product and category pages. The platform variation is also worth noting: Perplexity pulls from Reddit and forum discussions at 17% — significantly higher than ChatGPT or AI Mode.

Key points

  • Three formats dominate AI citations across all platforms: listicles (21.9%), articles (16.7%), and product pages (13.7%) — together accounting for 52% of citations
  • Query intent is the strongest predictor of citation format — stronger than industry vertical or AI platform choice
  • Informational queries: articles cited 2.7x more than other formats; commercial queries: listicles capture 40% of citations
  • Perplexity cites Reddit and forums at 17% of its mentions — substantially higher than ChatGPT and Google AI Mode
  • Third-party listicles (editorial comparisons) account for 80.9% of professional services citations — AI systems prefer neutral comparisons over self-promotional brand content

Key takeaway

Match your content format to the intent of the queries you want to appear in. If your commercial intent pages are built as long-form articles, you are competing in the wrong format. If your informational content is structured as a listicle when it should be a deep article, the same applies. Intent-format alignment is a more direct lever on AI citation probability than most other content changes.

Also worth considering

The third-party listicle finding is significant for brands in competitive categories. If 80.9% of professional services AI citations come from neutral comparison listicles rather than brand-owned content, building a presence in those lists — through genuine product quality, review generation, and PR — matters more than optimising your own pages for those queries. You cannot always own the citation; sometimes you need to be in the content that earns it.

What I'm testing

Auditing the format distribution of pages that currently rank but are absent from AI citations, mapped against the query intent for those pages. Looking for intent-format mismatches as the first explanation before moving to content quality or technical factors.

Read the full article

That is issue #005. Five issues in and the picture is getting clearer: AI search is not going to replace organic traffic so much as redistribute it, concentrate it in fewer formats, and change what earns a citation versus what earns a click. The GSC filter means we can now measure non-brand performance cleanly for the first time. The content experiment gives us honest data on what AI content does over time. The citation format study gives us a format map to work from.

If any of this changed how you are approaching your measurement setup or content strategy, I would like to hear what you are doing with it.

Free Consultation

Let's Talk


Tell me what you're working on. I'll give you an honest assessment and we'll explore if working together makes sense — no hard sell, just a free, no-obligation call.