ChatGPT Ads & FAQ Sunset

Search Digest Issue #011, week of 11 May 2026: Reddit's AI-translated pages by market, and AI visibility tracking consistency data from SparkToro

The ChatGPT self-serve ads launch and the Semrush attribution gap piece this week fit together in an uncomfortable way. AI is now a performance marketing channel with self-serve access, CPC bidding, and conversion tracking, yet the buying journeys of the 58% of marketplace consumers who research with AI tools are largely invisible to traditional analytics. You can now spend on ChatGPT and see the clicks. You cannot yet see most of the journeys those sessions start. The measurement infrastructure for agentic search is about two years behind where the channel itself has moved.

The SparkToro AI tracking research from Rand Fishkin is the most important methodological correction of the week. Running the same prompt 124 times produced the same pair of brands mentioned together only twice. That is not a tracking gap. That is a signal that single-query AI "ranking" checks produce noise. The usable approach, tracking visibility percentage across 100-plus varied prompts, requires more work but gives you data worth acting on. The 2.9% web visit share context matters too: AI is worth measuring, but proportionality should govern how much resource you put into tracking it versus other channels.

The Google FAQ rich results deprecation is clean and unambiguous. If you have FAQ schema purely for Google click-through uplift, that use case closed on 7 May. And Glenn Gabe's Reddit AI translations follow-up is a reminder that Google's quality standard for AI content is enforced through authority signals, not just policy statements. Reddit's 4-5 million AI-translated pages per market are ranking because of the authority underneath the content, not because AI translation alone is sufficient.

Brooke Osmundson / Search Engine Journal

OpenAI Launches Self-Serve Ads Manager for ChatGPT

OpenAI opened ChatGPT's advertising platform to all US advertisers in May 2026, moving it beyond invite-only agency access. The self-serve Ads Manager lets brands create campaigns, set budgets, and track performance directly. Alongside the existing CPM model, the platform now offers cost-per-click bidding, so spend is tied to engagement rather than impressions. Conversions API support and pixel-based tracking are live, enabling measurement of purchases, sign-ups, and leads from ChatGPT-referred sessions, while keeping user conversations private.

The launch marks a meaningful shift for performance marketers. Earlier data from the invite-only phase showed tens of thousands of daily ad placements on ChatGPT, but access was restricted to invited agencies. Self-serve changes the dynamic: any US advertiser can now test direct response in an AI-first context where queries are often commercial in intent. The platform is still in beta and the scale is modest compared to Google Ads or Meta, but the architecture matters. CPC bidding plus conversion tracking plus a self-serve interface is what turns an experiment into a performance channel. Early data on AI-referred traffic conversion rates had been strong, and this gives brands the tools to measure it directly rather than infer it from analytics.

Key points

  • US advertisers can now register, build campaigns, and track performance through OpenAI's self-serve Ads Manager without agency intermediaries
  • CPC bidding is available alongside CPM, allowing spend to be tied to actual engagement
  • Conversions API and pixel tracking support measurement of purchases, sign-ups, and leads
  • Advertisers receive aggregated insights only; user conversations remain private
  • The rollout is in beta with gradual expansion planned beyond the US
  • The platform provides standard ad infrastructure: self-serve access, performance tracking, and intent-matched placement

Key takeaway

Self-serve access makes ChatGPT a testable performance channel for the first time. If your audience uses ChatGPT for commercial research, testing a CPC campaign now, before competition drives costs up, is the lowest-friction opportunity to gather first-party conversion data from AI-referred traffic. Start with a tightly scoped campaign on your highest-intent queries and measure session quality rather than volume.

Also worth considering

ChatGPT's advertising launch answers one of the bigger structural questions about AI search: how does it monetise at scale? The answer appears to be a hybrid of subscriptions and a Google-like ads model. That has implications for how AI platforms will treat organic citations versus paid placements over time, and whether the current openness of AI answers to unpaid content changes as the ad model scales.

What I'm testing

Running a small CPC test on ChatGPT for specific commercial queries to understand the quality of AI-referred sessions compared to branded search. Looking at session depth and goal completions rather than volume, since the traffic numbers will be small at this stage. The baseline is more useful than the result for now.
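
A baseline like that can be sketched in a few lines. The Python below is illustrative only, not tied to any particular analytics API; the `pages` and `converted` fields and the sample numbers are hypothetical stand-ins for whatever session export you actually have.

```python
from statistics import mean

def session_quality(sessions):
    """Summarise depth and conversion for a list of session dicts.

    Each session is assumed to carry 'pages' (int) and 'converted' (bool);
    both field names are illustrative, not from any specific analytics tool.
    """
    return {
        "sessions": len(sessions),
        "avg_depth": mean(s["pages"] for s in sessions),
        "conv_rate": sum(s["converted"] for s in sessions) / len(sessions),
    }

# Hypothetical samples: ChatGPT-referred sessions vs a branded-search baseline
ai = [{"pages": 4, "converted": True}, {"pages": 6, "converted": False}]
branded = [{"pages": 3, "converted": False}, {"pages": 2, "converted": False}]

print(session_quality(ai))
print(session_quality(branded))
```

With real exports, the point is the comparison of `avg_depth` and `conv_rate` between channels, not the absolute numbers, since AI-referred volume will be small at this stage.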

Read the full article

Barry Schwartz / Search Engine Land

Google to No Longer Support FAQ Rich Results

Google confirmed on 8 May that FAQ rich results are gone. From 7 May 2026, FAQ-structured content no longer generates the expanded dropdown display in Google Search. Google Search Console will stop reporting on FAQ structured data. The Rich Results Test drops FAQ support in June 2026, and API support ends in August 2026. The practical implication: any click-through rate uplift from the expanded FAQ display is no longer available from Google. Removing the structured data from your site is optional, since other search engines may continue processing it, but the Google advantage is closed.

This is the final step in a slow retreat. FAQ rich results were first restricted in 2023 to government and health sites only, then visibility dropped further as Google quietly reduced how often they appeared. By the time of this announcement, most sites had already lost the benefit. The structured data still held some value as a potential Bing or secondary engine signal, but the headline use case is over. The broader pattern is worth noting: Google is continuing to narrow which structured data types produce visible SERP features. HowTo rich results went the same way. The toolset for schema that drives Google clicks is getting smaller.

Key points

  • FAQ rich results no longer appear in Google Search as of 7 May 2026
  • Google Search Console stops reporting FAQ structured data; the Rich Results Test drops FAQ support in June 2026
  • API support for FAQ testing ends in August 2026
  • Removing FAQ structured data from your site is optional, not required by Google
  • Other search engines, including Bing, may continue to process FAQ markup
  • The removal follows the same deprecation path as HowTo rich results

Key takeaway

If you have FAQ schema on pages purely for Google click-through uplift, that use case is over. Audit your FAQ structured data and decide whether it serves any purpose beyond Google, whether for Bing, voice search, or other tools that process structured data. If not, removing it reduces schema maintenance overhead without any downside.
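
One quick way to run that audit is to scan each page's JSON-LD blocks and list the schema types they declare. The sketch below is a minimal Python pass using regex and `json`; a production audit would crawl with a proper HTML parser, and the sample page string is invented for illustration.

```python
import json
import re

# Find <script type="application/ld+json"> blocks in raw HTML
LD_JSON = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def schema_types(html):
    """Return every @type declared in a page's JSON-LD blocks."""
    found = []
    for block in LD_JSON.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than fail the audit
        items = data if isinstance(data, list) else [data]
        for item in items:
            if not isinstance(item, dict):
                continue
            t = item.get("@type", [])
            found.extend(t if isinstance(t, list) else [t])
    return found

# Invented sample page carrying the now-deprecated FAQ markup
page = '<script type="application/ld+json">{"@type": "FAQPage"}</script>'
print("FAQPage" in schema_types(page))
```

Running this across a crawl of your templates flags every page still carrying `FAQPage` (or `HowTo`) markup, which is the list you then weigh against any remaining Bing or voice-search value.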

Also worth considering

The structured data types that still produce visible Google SERP features are worth prioritising: Product, Review, Recipe, Event, and a few others. Each deprecation concentrates the value on what remains. If you have limited development resource for schema work, this is a good moment to reprioritise away from FAQ and toward the types Google is still actively supporting.

What I'm testing

Auditing pages with FAQ structured data to check whether any are also carrying deprecated HowTo schema, and pulling Search Console CTR data for those pages across the period when FAQ visibility was being quietly reduced. If there is a measurable CTR trail, it is useful baseline data for understanding what the feature was worth before it disappeared entirely.

Read the full article

Amanda Natividad / SparkToro

Office Hours: Can You Actually Track AI Visibility?

Rand Fishkin's team at SparkToro ran a series of experiments to test whether AI visibility can be tracked in any meaningful way. The headline finding is that AI "rankings" cannot. In one test, researchers ran the same question through Google AI Mode 124 times and found the same pair of brands mentioned together only twice. Real users phrase similar queries very differently: a sample of 144 prompts with similar intent had an average semantic similarity of just 0.08, roughly comparable to the relationship between two unrelated phrases. Single-query "ranking" positions in AI are not stable enough to track, and the data those checks produce is noise.

What is trackable, the research suggests, is visibility percentage across many varied prompts. When researchers ran 100-plus diverse, authentic prompts through AI systems, which brands appeared and how often remained consistent over time, even though any individual response was not. That gives a usable signal: if your brand appears in 40% of relevant varied prompts this month and 38% next month, that is a stable trend. If it drops to 20%, something has changed. Fishkin also noted that AI tools account for just 2.9% of web visits globally, compared to 34% for organic search. The marketing conversation about AI visibility sometimes outpaces its actual traffic share, and proportionality matters when deciding where to put resource.

Key points

  • Running the same prompt 124 times produced the same pair of brands mentioned together only twice, showing AI answer sets are not stable
  • 144 prompts with similar intent had average semantic similarity of 0.08, comparable to unrelated phrases, showing real user variation
  • Single-query "rankings" in AI cannot be tracked reliably; visibility percentage across 100-plus varied prompts can
  • Brand appearance frequency across diverse prompt sets is stable over time, even when individual responses are not
  • AI tools represent 2.9% of global web visits; organic search accounts for 34%
  • Prompt pools should use varied, authentic phrasing rather than repeated identical queries

Key takeaway

Replace single-prompt AI ranking checks with visibility percentage tracked across a pool of 100-plus diverse, realistic prompts. Run prompts multiple times on different days, use varied phrasing as real users would, and track the percentage of prompts where your brand appears. That methodology produces a signal worth acting on. A single-query check does not.
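
The visibility-percentage metric itself is simple arithmetic. A minimal Python sketch, assuming you have already collected one answer string per prompt run; the prompt pool, brand name, and answer texts below are invented examples:

```python
def visibility_pct(responses, brand):
    """Share of prompt runs whose answer text mentions the brand at all.

    `responses` holds one AI answer string per prompt; a real pool should
    span 100-plus varied prompts, each run on several different days.
    """
    hits = sum(brand.lower() in r.lower() for r in responses)
    return 100 * hits / len(responses)

# Toy pool of four answers; 'Acme' is a hypothetical brand name
pool = [
    "Acme and Beta are the tools most people recommend.",
    "Beta is the usual pick for this use case.",
    "Many teams start with Acme's free tier.",
    "There is no single standard tool here.",
]
print(visibility_pct(pool, "Acme"))  # 50.0
```

Tracked monthly, that single percentage is the stable signal the research describes: 40% to 38% is noise-level drift, 40% to 20% is a change worth investigating.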

Also worth considering

The 2.9% AI web visit share is a useful proportionality check for budget decisions. The case for AI visibility investment is about commercial intent and conversion quality at that 2.9%, not traffic volume. Making that distinction clearly, especially with stakeholders who may have seen AI growth headlines, prevents disappointment when visibility improvements produce small absolute traffic numbers.

What I'm testing

Building a prompt pool of 50-plus varied queries across a few topic areas to track visibility percentage over 30 days. The specific thing I want to find out is whether the pool size needed for a stable signal differs between broad informational queries, where the answer set is probably more variable, and commercial comparison queries, where AI systems may be more consistent in what they retrieve.

Read the full article

Chris Hanna / Semrush

Attribution Gap in Agentic Search: How to Close It

58% of marketplace consumers use AI tools to research products before buying, according to a ChannelEngine report. Most of those sessions leave no trace in GA4 or traditional attribution models. The article identifies two distinct problems. The first is invisible influence: AI mentions your brand, the user buys later through a different channel, and the attribution goes to direct traffic or branded search with no indication that AI started the journey. The second is agentic commerce: a purchase completed entirely inside an AI platform, such as ChatGPT or Google AI Mode, with no click-through to a retailer website at all. That session is completely invisible to any analytics tool.

The proposed measurement framework has three tiers. The first ensures AI crawlers can access your content. The second tracks brand mentions, citations, sentiment, and share of voice across AI platforms, treating AI visibility as a brand awareness channel. The third tier is where the commercial measurement happens: monitoring branded search volume, direct traffic, and self-reported attribution to infer AI influence on conversion. The 90-day implementation plan starts with baselines, adds tracking in month two, and connects visibility to business outcomes in month three. It is not a perfect solution, but it is the most structured approach available given that current analytics infrastructure was not built for agentic interactions.

Key points

  • 58% of marketplace consumers use AI tools for product research, per ChannelEngine
  • Two attribution problems: invisible influence (AI-started journeys attributed elsewhere) and agentic commerce (purchases inside AI with no click-through)
  • Three-tier measurement: AI crawler access, brand mention and citation tracking, business outcome inference
  • Self-reported attribution, asking customers how they first found you, is one of the most direct signals available
  • Branded search and direct traffic trends serve as proxy indicators for AI-influenced conversions
  • 90-day implementation plan: baselines in month one, tracking in month two, outcome connection in month three

Key takeaway

Start with the self-reported attribution question if you have any post-purchase or post-signup survey. It costs nothing to add and will produce the most direct evidence of AI influence on conversions. The other tiers are worth building toward, but this single question gives you signal immediately without any additional tooling.
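
Tallying those free-text survey answers can be automated with a simple keyword pass. This is a rough sketch rather than the article's method; the tool list in the pattern is illustrative and would need tuning for your audience, and the sample answers are invented.

```python
import re

# Word-boundary match avoids false hits inside words like 'email' or 'said';
# the tool list here is illustrative, not exhaustive
AI_TOOLS = re.compile(r"\b(chatgpt|perplexity|gemini|copilot|claude|ai)\b", re.I)

def ai_attribution_share(answers):
    """Count 'how did you first find us?' answers that mention an AI tool."""
    hits = sum(bool(AI_TOOLS.search(a)) for a in answers)
    return hits, hits / len(answers) if answers else 0.0

# Invented survey responses
answers = [
    "Asked ChatGPT for comparisons and you came up",
    "A colleague mentioned you",
    "Google search",
]
count, share = ai_attribution_share(answers)
print(count)  # 1
```

Even at small survey volumes, the count and share give you a first-party number where GA4 currently shows nothing.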

Also worth considering

The agentic commerce problem is going to grow. This week's expansion of Google's Universal Commerce Protocol checkout to the main SERP, covered in several outlets, is exactly the kind of feature that creates purchases without click-throughs. The brands that start measuring AI influence now, imperfectly, will have baselines when agentic commerce becomes significant enough to demand a proper attribution solution.

What I'm testing

Looking at ways to add a self-reported attribution data point to post-conversion touchpoints to find out whether any respondents mention AI tools as part of their research journey. Even a handful of responses would be more useful than the zero-data baseline most analytics setups are working from on AI influence right now.

Read the full article

Glenn Gabe / GSQI

Global Growth Gone Wild: Reddit's AI Translations Continue to Expand and Boom (One Year Later)

A year on from Glenn Gabe's first analysis of Reddit's AI translation project, the scale has roughly doubled. AI-translated pages in Google now run between 4 and 5 million per market. Spain has 5.2 million AI-translated pages ranking across 9.5 million queries. Germany has 4.7 million pages and 10.6 million queries. France sits at 4.5 million pages and 10.9 million queries. Brazil has 4.2 million pages across 10 million queries. Hreflang coverage expanded from 22 to 31 language and country targets, meaning Reddit is pushing AI translations into more markets. Google is ranking this content at scale and, based on the query volumes, ranking it well.

The reason Reddit is doing this is visible in its user growth figures. Logged-in user growth slowed to 17% year on year. Reddit needs logged-out search traffic to sustain advertising revenue, and AI translations multiply the content inventory available to index. Google has stated AI translations are acceptable "if the content is high-quality," which is the same standard it applies to all AI content. Gabe's warning is clear: do not read this as a general greenlight. Reddit has the authority, community validation, and content scale to make high-quality AI translations work. Most sites do not. Google's quality threshold applies regardless of who is trying to use the same playbook.

Key points

  • Reddit's AI-translated pages in Google grew from 2-3M to 4-5M per major market over the past year
  • Spain: 5.2M pages across 9.5M queries; Germany: 4.7M across 10.6M; France: 4.5M across 10.9M; Brazil: 4.2M across 10M
  • Hreflang coverage expanded from 22 to 31 language and country targets
  • Reddit's logged-in user growth slowed to 17% year on year, making search traffic from logged-out users strategically critical
  • Google's stated policy: AI translations are acceptable when content is high-quality
  • Reddit's success depends on underlying authority and community content signals, not AI translation alone

Key takeaway

The Reddit AI translations story is useful as a quality benchmark, not a template. What Google is rewarding is high-authority, community-validated content that happens to be translated by AI, not AI content created at scale from scratch. The signals Google trusts are the quality and authority underneath the content, not the translation mechanism. Reading it as a scaling strategy for sites without Reddit's authority base would be the wrong conclusion.

Also worth considering

Reddit's scale raises a question about search result diversity. If 5 million AI-translated Reddit pages are ranking across 9 to 10 million queries per European market, a significant share of those SERPs are showing community content produced in one language and translated for audiences who did not create it. Whether that is crowding out local publishers who do create original content in those languages is worth watching in the next Gabe follow-up.

What I'm testing

Checking whether AI-translated Reddit content shows up on queries in non-English markets where authoritative local publishers already rank. The question is whether Google's quality signal is protecting well-established local content or whether Reddit translations are appearing regardless of the local competition. Query-level SERP analysis in one European market will give a clearer picture than aggregate data.

Read the full article

That is issue #011. The ChatGPT self-serve ads launch and the Semrush attribution gap piece together form one of the more awkward situations in digital marketing: a new performance channel with proper measurement tools, sitting inside an ecosystem where most of the buying influence it generates cannot be tracked. You can now buy CPC clicks from ChatGPT. You still cannot see most of what ChatGPT does to your conversion funnel before those clicks happen. Building the attribution infrastructure, even imperfectly, starts with self-reported data and branded search trends. Those are available now.

The SparkToro methodology finding deserves to sit alongside whatever AI tracking you are already doing. If your team is reporting AI "rankings" from single-prompt checks, the research suggests those numbers are not stable enough to act on. Visibility percentage across a large, varied prompt pool is the standard the data supports. That is a higher bar but a much more honest measurement. The FAQ deprecation and the Reddit AI translations piece are the two reminders that the fundamentals still matter: structured data that drives clicks is narrowing to a smaller set of types, and AI-content quality is real and enforced, not a marketing claim.

Free Consultation

Let's Talk


Tell me what you're working on. I'll give you an honest assessment and we'll explore if working together makes sense — no hard sell, just a free, no-obligation call to explore what's possible for your business.