LinkedIn Tops AI Citations, Google Rewrites Headlines & the March 2026 Spam Update

Search Digest Issue #006 — week of 23 Mar 2026: LinkedIn citation rates by AI platform and Google's AI headline rewrite progression from Discover to Search

The LinkedIn finding this week is the one that deserves proper attention. SEMrush's analysis of 325,000 prompts across ChatGPT Search, Google AI Mode, and Perplexity put a number on something a lot of practitioners have suspected: LinkedIn is now the most-cited professional domain across every major AI platform. ChatGPT Search cites it in 14.3% of responses, AI Mode in 13.5%. That puts LinkedIn ahead of Wikipedia, YouTube, and every major news publisher for professional queries. If GEO is part of your programme and LinkedIn is not, that is an execution gap worth closing.

The AI headline rewrite story is worth tracking closely. Google confirmed to The Verge that it is testing AI-generated headline rewrites in traditional search results — not pulling from on-page elements but generating new titles entirely. The pattern is identical to its Discover rollout: small experiment in December 2025, reclassified as a feature in January 2026 after positive user signals, now showing up in Search. If that timeline repeats, we are a month or two away from AI headline rewrites becoming standard in the main results. There is no opt-out.

The rest of the week covers ground that affects audit queues and planning: Google's March 2026 spam update (general spam policy violations, not link spam), AI Mode expanding to all US free users with personalised Gmail and Photos access, and a clear-headed look at why most SEO programmes fail not because of algorithm changes but because of internal organisational problems that nobody wants to name directly.

Barry Schwartz / Search Engine Roundtable

Google March 2026 Spam Update Rolls Out

Google released the March 2026 spam update on March 24 at around 3:20pm ET. It applies globally across all languages and is expected to complete within days — not weeks — based on the rollout language Google used. This targets sites violating Google's general search spam policies: thin content, hidden text, cloaking, structured data misuse, and similar violations. Google described it as "a normal spam update."

The more useful detail is what this update explicitly does not target: link spam and site reputation abuse are both excluded. Those have their own dedicated updates. That distinction matters for triage — if you see movement from this update, you are looking at general spam compliance issues, not your link profile. Google declined to say what share of queries are affected, and recovery from confirmed spam actions typically takes months with periodic refreshes rather than a single clean pass.

Key points

  • Launched March 24, 2026 — rollout expected in days; previous spam update rollouts have ranged from 24 hours to 29 days
  • Targets general spam policy violations across all languages and regions — thin content, cloaking, hidden text, structured data abuse
  • Does NOT target link spam or site reputation abuse — both have separate dedicated updates
  • Google declined to disclose impact on query volume
  • Recovery from a spam action takes months and applies through periodic refreshes, not a single reversal
  • Most recent prior spam update: August 2025

Key takeaway

If you have not audited your site against Google's spam policies recently, this is a prompt. The absence of a link spam component means general compliance — structured data accuracy, content quality, no hidden elements — is the area to review. Sites that have been building AI content at scale without editorial oversight are in the highest-risk category for this type of update.

Also worth considering

The pattern of running separate spam updates for different violation categories — general spam, link spam, site reputation — suggests Google is getting more precise about targeting specific policy areas rather than running broad sweeps. That is useful information for triage: if a spam update moves you, knowing which category it targets helps you find the cause faster than a site-wide audit.

Read the full article

Matt G. Southern / Search Engine Journal

Google Tested AI Headlines in Discover. Now It's Testing Them in Search

Google confirmed to The Verge that it is running a "small and narrow" test of AI-generated headline rewrites in traditional search results. This is not the existing system that pulls from H1 tags or meta descriptions — the AI is generating entirely new titles from scratch. One documented example: an article titled "I used the 'cheat on everything' AI tool and it didn't help me cheat on anything" was rewritten to "'Cheat on everything' AI tool" — a phrase that never appeared in the original content. Google's stated goal is to "identify content on a page that would be a useful and relevant title to users' query."

The Discover precedent is instructive. Google described AI headlines in Discover as a "small UI experiment" in December 2025. By January 2026, after showing good user satisfaction, it was reclassified as a feature. The same framing — "small and narrow" — is now being applied to Search. Publishers have no disclosure when their headlines are rewritten and no opt-out mechanism. Discover already accounts for roughly 68% of Google-sourced traffic for major publishers, meaning AI headline rewrites are already live on the channel that sends most of their referrals.

Key points

  • Confirmed by Google to The Verge — not speculation; the "small and narrow" framing matches the Discover test language exactly
  • AI generates entirely new headlines — not pulling from H1 tags, meta titles, or any other text that exists on the page
  • Discover path: test (Dec 2025) → feature (Jan 2026); Search test follows the same pattern one month later
  • Discover accounts for ~68% of Google-sourced traffic for major publishers — AI headline rewrites are already affecting that channel
  • No disclosure when a headline is rewritten; no opt-out mechanism available to publishers

Key takeaway

Check your top pages in Google Search manually to see whether your headlines are appearing as written. If AI headline rewrites follow the Discover timeline into a Search feature, you will have little notice and no mechanism to prevent it. The best position is knowing what your pages currently show before the change happens at scale.

Also worth considering

If Google decides what headline best serves a user's query rather than what the publisher wrote, it is a further shift toward Google as publisher rather than as index. Content creators supply raw material; Google increasingly controls the presentation layer. The implications for brand messaging, legal copy accuracy, and editorial control are worth thinking through before this moves from test to feature.

What I'm testing

Spot-checking high-traffic informational pages against their SERP appearance to document any headline changes before they become widespread. Worth doing now to establish a baseline, even if the test is described as narrow.
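That baseline is easy to script. The sketch below parses the <title> and first <h1> out of a page's HTML so you can record what you currently publish and diff it against what the SERP shows later. It assumes you supply the HTML yourself; fetching, the page list, and the diffing step are left out, and this is an illustrative helper, not a Google tool.

```python
# Sketch: snapshot the <title> and first <h1> of a page as a baseline,
# so later SERP headline changes can be spotted against what was written.
from html.parser import HTMLParser

class TitleGrabber(HTMLParser):
    """Collects the first <title> and first <h1> text seen in a document."""
    def __init__(self):
        super().__init__()
        self._stack = []
        self.title = None
        self.h1 = None

    def handle_starttag(self, tag, attrs):
        self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        if not self._stack:
            return
        current = self._stack[-1]
        if current == "title" and self.title is None:
            self.title = data.strip()
        elif current == "h1" and self.h1 is None:
            self.h1 = data.strip()

def baseline(html: str) -> dict:
    """Return the as-published title and H1 for comparison against the SERP."""
    parser = TitleGrabber()
    parser.feed(html)
    return {"title": parser.title, "h1": parser.h1}

page = "<html><head><title>As written</title></head><body><h1>On-page H1</h1></body></html>"
print(baseline(page))  # both fields recovered as published
```

Storing these snapshots per URL with a date gives you the "before" side of the comparison; the "after" side still has to come from manually checking the live SERP.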

Read the full article

Search Engine Journal / SEO Pulse

Google AI Mode Goes Personal, Crawl Limits Clarified

Two practical technical updates from this week's SEO Pulse are worth noting separately. First: Google expanded Personal Intelligence from paid AI Pro and Ultra subscribers to all free US users on personal Google accounts as of March 20. AI Mode can now access Gmail and Google Photos to personalise responses — referencing email confirmations, travel bookings, photo context. This is limited to US personal accounts for now; Workspace accounts are not included.

The crawl limits clarification is the more durable of the two. Technical SEOs have long worked with Google's stated 15 megabyte crawl limit. Gary Illyes and Martin Splitt confirmed this week that the practical threshold in Google Search is closer to 2 megabytes — and that internal teams can override the 15MB figure. The 15MB limit is a default, not a hard ceiling. For most pages this changes nothing in practice, but for sites with heavy JavaScript payloads, large inline scripts, or dense HTML structures, 2MB is the number to work against.
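Working against that number is straightforward to automate. The sketch below measures a page's raw HTML payload against a 2 MB working threshold; the threshold constant and the synthetic example page are illustrative assumptions, and a real audit would feed in the fetched HTML bytes for each URL.

```python
# Sketch: flag pages whose raw HTML payload exceeds the ~2 MB practical
# crawl threshold (the 15 MB figure is an overridable default, per Google).

TWO_MB = 2 * 1024 * 1024  # working limit for audits, in bytes

def payload_status(html: bytes, threshold: int = TWO_MB) -> dict:
    """Return the payload size and whether it exceeds the working limit."""
    size = len(html)
    return {
        "bytes": size,
        "megabytes": round(size / (1024 * 1024), 2),
        "over_limit": size > threshold,
    }

# Synthetic example: 3 MB of inline script pushes the page over the line.
page = b"<html><head><script>" + b"x" * (3 * 1024 * 1024) + b"</script></head></html>"
print(payload_status(page))  # reports over_limit=True for this page
```

Note this measures the HTML document itself, which is what the crawl fetch limit applies to; externally referenced scripts and images are fetched separately and do not count against the same budget.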

Key points

  • Personal Intelligence available to all free US Google account users from March 20, 2026 — not Workspace accounts
  • AI Mode can now reference Gmail, Google Photos for personalised answers — travel, appointments, bookings
  • Personalised AI Mode results mean your test environment may not match what real users see — relevant for GEO monitoring
  • Practical crawl threshold confirmed at ~2MB by Gary Illyes and Martin Splitt — the 15MB figure is an overridable default
  • Heavy JavaScript payloads and large inline scripts are the primary risk area for the 2MB threshold

Key takeaway

Use 2MB as your working page size limit for technical audits, not 15MB. For AI Mode research and GEO testing, account for the fact that personalised results will now vary between signed-in users — your test results may not reflect what your target audience actually sees.

Also worth considering

Personal Intelligence expands Google's data advantage in AI search significantly. The more context AI Mode has access to, the better it resolves ambiguous queries without the user needing to provide background. That shifts where AI Mode answers are most useful — earlier in the research phase, when users still have open questions rather than specific intent. For brands, being present in those early-research conversations matters more as personal context makes the answers more accurate.

Read the full article

Maria Georgieva / Search Engine Land

SEO's Biggest Threat in 2026? Your Own Organization

Georgieva's argument here is uncomfortable because it is accurate. While the industry spends considerable energy debating GEO tactics and AI Mode strategy, most SEO programmes in 2026 will fail for reasons that have nothing to do with algorithms: unclear ownership, fragmented data, misaligned KPIs, and the gap between knowing what to do and getting it done. She documents a specific case where cross-functional teams "hadn't even read the strategy document" — which produced a GEO initiative that stalled at the planning stage with no implementation.

The AI over-reliance point is worth pulling out separately. When every team uses the same tools to answer similar questions, content outputs start to converge. That generic output is exactly what large language models are least likely to cite — AI citation systems favour content with specific claims, original data, and clear authorial perspective. The practical recommendations are not glamorous: tie visibility metrics to business outcomes, assign ownership explicitly, run smaller experiments rather than lengthy strategy documents, and train non-SEO teams on why any of this matters to their work.

Key points

  • Internal failures — data fragmentation, unclear ownership, misaligned KPIs — are the most common reason SEO strategies do not execute
  • AI over-reliance risks content homogenisation: similar prompts produce similar outputs, removing the differentiation that AI citation systems favour
  • User journeys now extend into AI tools before reaching websites — most web analytics miss this invisible decision-making phase
  • Cross-functional KPI alignment is consistently absent from GEO and AI visibility programmes
  • Faster experimentation outperforms comprehensive strategy documents — the strategy-to-execution gap is where most programmes stall

Key takeaway

Before adding more tactics to your GEO programme, map the internal execution blockers: who owns AI visibility strategy, how success is measured, and whether the teams responsible for implementation are aligned on why it matters. A strategy document nobody reads is not strategy — it is documentation.

Also worth considering

The invisible decision-making point is underappreciated. If users are forming opinions inside AI tools before they reach your website, your web analytics are showing you a fraction of the customer journey. Measuring the full shape of that journey — including the AI-mediated phases — requires tools and approaches that most teams do not yet have in place. That gap between what is happening and what is being measured is itself an organisational problem.

What I'm testing

Running a quick ownership audit across two client accounts: who is responsible for AI visibility strategy, what success metrics they use, and whether those metrics connect to business outcomes rather than just ranking or impression data. The answer is usually more complicated than it should be.

Read the full article

SEMrush

We Analyzed 89K LinkedIn URLs Cited in AI Search: Here's What Drives Visibility

This is the most data-rich GEO study published this year. SEMrush analysed 89,000 unique LinkedIn URLs cited across 325,000 prompts in ChatGPT Search, Google AI Mode, and Perplexity. The headline finding: LinkedIn appears in 14.3% of ChatGPT Search responses, 13.5% of Google AI Mode responses, and 5.3% of Perplexity responses for professional queries — ahead of Wikipedia, YouTube, and every major news publisher. Corroborating data from Profound's separate 1.4 million citation analysis shows LinkedIn moving from outside the top 20 on ChatGPT in November 2025 to around position five by February 2026.

The content type findings are where this becomes actionable. 95% of LinkedIn citations go to original posts — reshares are nearly invisible to AI citation systems. Among long-form content, articles between 500 and 2,000 words dominate citation rates; feed posts perform best between 50 and 299 words. Cited authors post 5 or more times per month in 75% of cases. Follower count does not reliably predict citation frequency — smaller accounts with high content quality do get cited. The content type shift is also significant: profile page citations dropped from 33.9% to 14.5% between November 2025 and February 2026, while posts and articles combined rose from 26.9% to 34.9%. AI systems are moving from static identity signals to dynamic content signals.
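For auditing drafts against those findings, the length bands are simple to encode. The sketch below checks a draft's word count against the study's cited-length ranges; only the band boundaries come from the study, while the function, the word-count heuristic, and the example draft are my own illustration.

```python
# Sketch: check draft LinkedIn content against the length bands the SEMrush
# study associates with AI citation (feed posts 50-299 words, articles
# 500-2,000 words). Illustrative audit helper, not a SEMrush tool.

FEED_POST_RANGE = (50, 299)   # words, per the study's feed post finding
ARTICLE_RANGE = (500, 2000)   # words, per the study's long-form finding

def in_citation_band(text: str, content_type: str) -> bool:
    """True if the draft's word count falls in the cited-length band."""
    words = len(text.split())
    low, high = FEED_POST_RANGE if content_type == "post" else ARTICLE_RANGE
    return low <= words <= high

draft = " ".join(["word"] * 120)          # a 120-word feed post draft
print(in_citation_band(draft, "post"))    # True: inside the 50-299 band
```

Word count is only one of the study's signals; original content (not reshares) and posting frequency of five or more times a month matter at least as much, and those need human judgement rather than a length check.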

Key points

  • LinkedIn appears in 14.3% of ChatGPT Search, 13.5% of Google AI Mode, and 5.3% of Perplexity professional query responses — ahead of all major news publishers
  • 95% of citations go to original posts; reshares account for only 5% — creating original content is a hard requirement for AI citation
  • Optimal length: articles 500–2,000 words; feed posts 50–299 words — both ranges suggest AI systems favour focused, substantive content
  • 75% of cited authors post 5+ times per month — publishing consistency matters more than individual post virality
  • Cited posts typically receive just 15–25 reactions — engagement count does not predict citation frequency
  • Profile page citations fell from 33.9% to 14.5% (Nov 2025–Feb 2026); posts and articles rose from 26.9% to 34.9% — dynamic content now outweighs static identity

Key takeaway

LinkedIn is a GEO channel now, not just a networking platform. If your clients have a professional audience, consistent original content on LinkedIn — at the right length and published with genuine frequency — is directly connected to AI citation probability. That is worth treating as a content investment with the same rigour as on-site SEO, not a social media afterthought.

Also worth considering

The drop in profile page citations alongside the rise in post and article citations is a useful signal about how AI systems evaluate presence on any platform. Static identity pages are table stakes — what gets cited is what you actively publish. The same logic applies to your own website: a well-structured about page and a clear entity profile help with comprehension, but the content you produce is what drives citation probability over time.

What I'm testing

Auditing LinkedIn content production for two clients with professional audiences — comparing post frequency, article length, and content type against the citation benchmarks in this study. Looking for quick wins in post length and original article creation before recommending a fuller content programme.

Read the full article

That is issue #006. The LinkedIn finding is the one to act on — not as a trend piece but as a GEO data point with 325,000 prompts behind it. If you are not publishing original content on LinkedIn consistently and your audience is professional, that is a gap worth closing before it becomes a competitive disadvantage. The headline rewrite story is worth monitoring carefully: the Discover-to-Search pattern is now confirmed, and the timeline from test to feature has been under two months.

The spam update, AI Mode personalisation, and the internal org piece are all useful friction for this week's audit and planning queue. If any of this changes how you are thinking about your GEO programme or your measurement setup, I would like to hear what you are doing with it.

Free Consultation

Let's Talk


Tell me what you're working on. I'll give you an honest assessment and we'll explore if working together makes sense — no hard sell, just a free, no-obligation call.