LLM Conversions, Discover Update & Authority Signals

Search Digest Issue #002 — week of 24 Feb 2026

The data started arriving properly this week. Two analyses published within 24 hours of each other gave us the clearest picture yet of what AI-referred traffic actually looks like in practice: small volume, fast growth, and a conversion rate that would make most paid search managers pause. The 18% conversion rate from LLM referrals in Jason Tabeling's study is not a typo. Neither is the 31% conversion advantage ChatGPT traffic holds over non-branded organic in e-commerce.

The Google Discover update completing its rollout is the other story worth sitting with. It is the first update Google has ever announced as Discover-specific, which tells you something about how they see the surface strategically. The 22-day rollout hit international publishers in ways that are worth understanding before you assume it does not apply to you.

The remaining two pieces are about where this all leads if you get the foundations right. The authority framework piece is one of the cleaner articulations I have read of how AI evaluates brand trust. And the n8n workflow article is the most practical demonstration I have seen of what "putting agents to work on SEO" actually looks like without needing a developer in the room.

Search Engine Land / Visibility Labs

ChatGPT E-Commerce Traffic Converts 31% Higher Than Non-Branded Organic Search

This is the study I have been waiting for someone to publish properly. Visibility Labs analysed 94 e-commerce sites across 12 months of 2025, and the headline figure — ChatGPT traffic converting at 1.81% versus 1.39% for non-branded organic — is backed by enough data to be taken seriously. Danny Goodwin's write-up of the findings is clear and does not oversell the conclusion.

The mechanism behind it, which the researchers call "intent compression," is the part worth understanding. By the time a user has asked ChatGPT what to buy, worked through the response, and clicked through to your site, they have already done most of the decision-making. They are not browsing. The visit arrives with a purpose that most organic traffic has not yet formed.

The attribution caveat is important and underreported. A user who gets a recommendation from ChatGPT, then searches the brand on Google before buying, lands in your branded search bucket rather than your LLM referral bucket. The actual influence of AI on purchase decisions is probably higher than any data currently shows.

Key points

  • ChatGPT e-commerce traffic converted at 1.81% versus 1.39% for non-branded organic — a 31% improvement across 94 sites
  • ChatGPT visits grew 1,079% from January to December 2025 across the analysed sites
  • Average order value was 14.3% lower but revenue per session was 10.3% higher, reflecting a different purchase intent profile
  • ChatGPT accounted for just 1.48% of non-branded organic revenue — still supplementary, not a primary channel
  • Many AI-influenced conversions are almost certainly attributed to branded search, understating ChatGPT's true commercial impact

Key takeaway

If you run e-commerce and you are not tracking LLM referral traffic as a separate segment in GA4, you are missing one of the fastest-growing acquisition channels and one with above-average conversion quality. Set it up now while the volumes are still small enough to see the signal clearly.
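
The GA4 mechanics are straightforward: a custom channel group rule, or an exploration segment, that matches session source against the known LLM referrer domains. A minimal sketch of the matching logic in Python, with the caveat that the hostname list is my own starting point and should be verified against what actually appears in your referrer reports:

    import re

    # Referrer hostnames commonly reported for LLM traffic. Treat this
    # list as a starting point and verify it against your own reports.
    LLM_REFERRERS = re.compile(
        r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
        r"gemini\.google\.com|claude\.ai|copilot\.microsoft\.com"
    )

    def channel_for(referrer: str) -> str:
        """Bucket a session referrer into a channel label."""
        if not referrer:
            return "direct"
        if LLM_REFERRERS.search(referrer):
            return "llm_referral"
        return "other_referral"

    print(channel_for("https://chatgpt.com/"))  # llm_referral

The same alternation can be pasted into a GA4 channel group condition that matches session source by regex.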

Also worth considering

The attribution gap is the real story. Teams that understand how ChatGPT-influenced purchases flow into branded search will allocate budget more accurately than those that do not. That is a meaningful competitive edge as AI-assisted shopping becomes a larger share of total purchases.

What I'm testing

Tracking LLM referral sessions separately in GA4 and comparing time-on-site, pages per session, and checkout completion rate against other non-branded channels to see whether the conversion quality advantage holds in more detail.
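
Once the segment exists, the comparison itself is a one-liner per metric. A sketch of what I will be running, assuming a session-level GA4 export; the column names here (channel, engagement_seconds, pages_per_session, completed_checkout) are mine, not anything GA4 emits by default:

    import pandas as pd

    # Session-level GA4 export. Column names are assumptions; map them
    # to whatever your export actually produces.
    sessions = pd.read_csv("ga4_sessions.csv")

    comparison = sessions.groupby("channel").agg(
        sessions=("channel", "size"),
        avg_time_on_site=("engagement_seconds", "mean"),
        pages_per_session=("pages_per_session", "mean"),
        checkout_rate=("completed_checkout", "mean"),
    )
    print(comparison.sort_values("checkout_rate", ascending=False))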

Read the full article

Search Engine Land

What 13 Months of Data Reveals About LLM Traffic, Growth and Conversions

Published the day before the e-commerce study, this piece by Jason Tabeling covers a broader set of businesses and arrives at a similar conclusion through different data. LLM referral traffic converts at around 18%, well above most other acquisition channels, and the growth rate from the first half of 2025 to the second half averaged 80% across the brands tracked.

The citation shift data near the end of the piece is what I found most useful. YouTube and Reddit citations within LLM responses increased significantly in the final 30 days of the study period. That matches what I am seeing qualitatively. LLMs are increasingly pulling from conversational, community-driven sources rather than institutional ones. That changes the content strategy question considerably.

Key points

  • LLM referral traffic represents less than 2% of total referral traffic across all major models (ChatGPT, Perplexity, Gemini, Claude)
  • 18% conversion rate from LLM referrals — outperforming paid shopping, SEO, and PPC in the analysed set
  • 80% average growth from H1 to H2 2025, with some companies reporting 300% increases over the same period
  • YouTube and Reddit citations within LLM responses increased significantly toward the end of the 13-month window
  • The recommendation from the study: set up monitoring now, while trends are still visible and volume is manageable

Key takeaway

The combination of high conversion rate and fast growth makes LLM referral traffic worth treating as a primary measurement focus even while total volume is still small. The teams setting up clean monitoring infrastructure now will have months of trend data available when volumes reach decision-making scale.

Also worth considering

The shift toward YouTube and Reddit as LLM citation sources matters for content strategy. If those platforms are increasingly feeding AI answers, a meaningful presence there — even a basic, consistent one — gives AI systems more surface area to draw from when forming recommendations about your brand or topic area.

Read the full article

Search Engine Land

Google February 2026 Discover Core Update Is Now Complete

This is the first time Google has ever announced a core update as being specific to Discover rather than general web search. Barry Schwartz's coverage of the completion is worth reading alongside the original announcement because the impact data from the 22-day rollout tells a different story depending on who you are.

For US-based publishers with strong topical focus, the data suggests Discover rewarded depth and local relevance. For international publishers targeting American audiences, the picture is significantly worse. The Guardian, Reuters, The Independent, and The Sun all took substantial traffic hits during the rollout period, with The Independent down 57% and The Sun down 67% at one point. The stated reason is geographic prioritisation — Google now factors in the location of the publishing site when deciding what to surface to users. That effect should moderate once the update expands globally, but it will take time.

Key points

  • First-ever Discover-specific core update — signals Google now manages Discover as a distinct surface with its own quality standards
  • Rollout ran 22 days from February 5 to February 27, eight days longer than Google's original estimate
  • Three stated goals: locally relevant content, less clickbait, and deeper content from sites with demonstrated topical expertise
  • Google explicitly said it evaluates expertise topic-by-topic — a multi-topic site can rank in Discover for areas where it has genuine depth
  • International publishers targeting US audiences saw significant traffic drops due to the geographic prioritisation change

Key takeaway

If Discover is part of your traffic mix, run a comparison from before February 5 against the period after February 27. Anything that shifted in that window is likely update-related. If you publish across many topics, focus Discover-optimisation efforts on the areas where you have genuine depth rather than trying to cover everything at moderate quality.
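
If you want that comparison at page level rather than as an eyeballed total, it is a few lines of pandas. A sketch under assumptions: a Discover performance export with date, page, and clicks columns (the names are mine), and equal-length windows either side of the rollout so the sums are comparable:

    import pandas as pd

    # Discover performance export; column names are assumptions.
    df = pd.read_csv("discover_performance.csv", parse_dates=["date"])

    # Equal 21-day windows either side of the February 5-27 rollout.
    pre = df[df["date"].between("2026-01-15", "2026-02-04")]
    post = df[df["date"].between("2026-02-28", "2026-03-20")]

    compare = pd.DataFrame({
        "before": pre.groupby("page")["clicks"].sum(),
        "after": post.groupby("page")["clicks"].sum(),
    }).fillna(0)
    compare["change_pct"] = (
        (compare["after"] - compare["before"])
        / compare["before"].replace(0, pd.NA) * 100
    )
    print(compare.sort_values("change_pct").head(20))  # biggest drops first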

Also worth considering

The separation of Discover updates from core web search updates is telling. Google is treating these as distinct surfaces that can be tuned independently. Content built specifically to drive Discover opens — high-curiosity headlines, broad emotional hooks — is going to keep losing ground to content that earns repeat visits through genuine topical authority.

Read the full article

Search Engine Land

The Authority Era: How AI Is Reshaping What Ranks in Search

This piece does not have a single data point that changes everything. It is a framework article, and the framework is genuinely useful for explaining to people why their current SEO approach is not translating into AI visibility the way they expected.

The shift it describes: in traditional search, authority was largely a proxy measured in links and domain trust scores. In AI search, authority is validated externally through what real people actually say about you in places you do not control. Reddit threads, Quora answers, LinkedIn discussions, G2 reviews, YouTube commentary. AI systems pattern-match across all of those signals before forming a view on whether a brand deserves to be cited. That is a fundamentally different problem from building backlinks.

Key points

  • Authority in AI search comes from what people say about your brand in communities and platforms you do not own or control
  • Reddit, LinkedIn, YouTube, and review platforms are among the most heavily weighted sources in AI-generated recommendations
  • The gap between what a brand claims about itself and what users say about it is now a visibility problem, not just a reputation one
  • Brands that appear consistently in AI answers tend to have a track record of being mentioned in genuine discussions, not just linked to from other sites
  • The traditional SEO skill set builds one kind of authority; the AI search skill set requires building a different kind — and most teams are not doing both

Key takeaway

Run a brand audit across the platforms that AI systems draw from — Reddit, LinkedIn, YouTube, G2 or Trustpilot — before you start optimising page content for AI citations. If the external conversation about your brand is thin or mixed, content optimisation alone will not fix the underlying authority gap.

Also worth considering

A lot of the work here happens off your website entirely. Building a presence in the places AI systems mine for context — encouraging customers to write detailed reviews, having staff share expertise on LinkedIn, being present in relevant community discussions — is now part of the SEO brief whether it appears in the strategy document or not.

Read the full article

Search Engine Land

AI Agents in SEO: A Practical Workflow Walkthrough

There is a lot of "AI agents will transform SEO" content circulating right now. Most of it is theoretical. This is not. The piece walks through an actual working setup using n8n — an automation platform — to execute multi-step SEO tasks across connected systems, with one documented case generating a 28% click increase in seven days after identifying and acting on a striking-distance keyword opportunity.

The framing I found most useful: n8n is like a more capable Zapier with a language model in the middle that interprets data rather than just passing it between steps. Traditional automation moves information. Agent-based automation makes decisions about what to do with that information. That distinction matters a lot for how you scope what you build.
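
The distinction is easier to see in code than in prose. A toy sketch of my own, not anything from the article: the first function is the connector pattern applying a fixed rule, the second hands the same row to a model and lets it decide. ask_llm is a hypothetical stand-in for whatever model call your platform wires in.

    def connector_step(row: dict) -> dict:
        # Traditional automation: move the data, apply a fixed rule.
        row["flagged"] = row["clicks_change_pct"] < -20
        return row

    def agent_step(row: dict, ask_llm) -> dict:
        # Agent-based automation: let the model interpret the context.
        prompt = (
            f"Page {row['page']} changed {row['clicks_change_pct']:.0f}% in clicks "
            f"while impressions changed {row['impressions_change_pct']:.0f}%. "
            "Is this worth a human's attention? Answer yes or no, with one reason."
        )
        row["flagged"] = ask_llm(prompt).lower().startswith("yes")
        return row

The connector treats a 19% drop and a 21% drop as categorically different. The agent can notice that clicks fell while impressions held steady, which often points to a CTR problem rather than demand drying up.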

Key points

  • Agent platforms combine workflow orchestration with LLMs that make contextual decisions at each step, rather than simply moving data
  • n8n is positioned as the most flexible tool for SEO automation — closer to a decision-making layer than a traditional connector
  • One documented case: a striking-distance keyword was identified and optimised through an automated workflow, resulting in a 28% click increase in seven days
  • For pattern-based SEO tasks — missing meta descriptions, technical flag detection — accuracy exceeds 85%; for strategic decisions it drops significantly
  • Projects that try to build comprehensive end-to-end automation systems tend to stall — start with one narrow, high-frequency task

Key takeaway

Pick one repetitive SEO task that takes a predictable amount of time each week — rank anomaly alerts, GSC striking-distance monitoring, missing-meta sweeps — and build a focused agent workflow around that one task. A narrow win builds understanding faster than an ambitious system that never ships.
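
For the striking-distance option, the core pull is a single Search Analytics query against the GSC API. A sketch assuming you already have OAuth credentials with the Search Console scope; treating positions 11 to 20 as striking distance is my convention, not the article's:

    from googleapiclient.discovery import build  # google-api-python-client

    def striking_distance(creds, site_url: str, start: str, end: str):
        """Queries sitting on page two, ordered by impression volume."""
        service = build("searchconsole", "v1", credentials=creds)
        response = service.searchanalytics().query(
            siteUrl=site_url,
            body={
                "startDate": start,      # e.g. "2026-02-01"
                "endDate": end,          # e.g. "2026-02-28"
                "dimensions": ["query"],
                "rowLimit": 5000,
            },
        ).execute()
        rows = response.get("rows", [])
        return sorted(
            (r for r in rows if 11 <= r["position"] <= 20),
            key=lambda r: r["impressions"],
            reverse=True,
        )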

Also worth considering

The 30/70 framing in the piece is worth taking seriously. About 30% of SEO work — the data-heavy, pattern-matching, repetitive work — is genuinely automatable right now. Building skills in the 30% frees up time and attention for the 70% that requires human judgment and actually moves the needle strategically. The value is not replacing the thinking, it is protecting it.

What I'm testing

Running a simple n8n workflow that checks GSC weekly for pages that have dropped more than 20% in clicks week-on-week and flags them with a summary of likely causes based on the surrounding data. Will report back once there is enough data to judge whether the flags are useful.
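
For anyone who wants the same check without n8n, the core comparison is small enough to sketch directly. This is my version of the logic, not the workflow from the article; the inputs are page-to-clicks maps from two consecutive weekly GSC pulls:

    def flag_drops(last_week: dict, this_week: dict, threshold: float = -20.0):
        """Flag pages whose clicks fell past the threshold week-on-week."""
        flagged = []
        for page, previous in last_week.items():
            if previous == 0:
                continue  # skip pages with no prior clicks to divide by
            change = (this_week.get(page, 0) - previous) / previous * 100
            if change <= threshold:
                flagged.append((page, round(change, 1)))
        return sorted(flagged, key=lambda item: item[1])  # worst first

    print(flag_drops({"/guide": 120, "/blog": 40}, {"/guide": 80, "/blog": 41}))
    # [('/guide', -33.3)]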

Read the full article

That is issue #002. The theme this week is that we are past the point of debating whether AI search matters commercially. The conversion data is there. The growth rate is there. What is not there yet, for most teams, is the measurement infrastructure to capture it properly and the authority foundation to earn citations consistently. Those are the two things worth prioritising heading into March.

If any of these changed how you are thinking about something, or you are already tracking LLM traffic and have data worth sharing, I would like to hear about it.

Free Consultation

Let's Talk

Tell me what you're working on. I'll give you an honest assessment and we'll explore whether working together makes sense — no hard sell, just a free, no-obligation call.