AI Overviews SEO: What Changes for Organic Traffic
AI Overviews SEO is the practice of optimising content to surface inside Google’s AI-generated answer blocks, which now appear at the top of results pages for a significant share of queries. Unlike traditional blue-link optimisation, the goal is not to rank first but to be cited as a source inside an answer the user may never scroll past.
That distinction matters commercially. If Google is summarising your content and answering the question on your behalf, the traffic model you have been operating under for the last decade has changed. The question is not whether to adapt, but how quickly you can get clear-eyed about what the change actually means for your business.
Key Takeaways
- Appearing in AI Overviews requires structured, authoritative content, not just high rankings. A page can rank on page one and still be ignored by the AI layer.
- Zero-click behaviour is accelerating. Optimising for citation inside AI answers is now as important as optimising for click-through from organic listings.
- Content that answers specific, well-scoped questions with clear structure is consistently more likely to be pulled into AI-generated summaries.
- E-E-A-T signals, particularly first-hand experience and demonstrable expertise, are becoming more critical as AI systems prioritise sourcing credible, attributable content.
- The brands most exposed are those whose organic strategy is built entirely on informational traffic. Transactional and brand-driven content is more insulated, for now.
In This Article
- What AI Overviews Actually Do to Organic Traffic
- How Google Decides What Goes Into an AI Overview
- The Citation Problem Most Brands Are Ignoring
- What Good AI Overviews Optimisation Looks Like in Practice
- The E-E-A-T Angle Is Not Soft, It Is Structural
- How AI Overviews Are Changing Keyword Strategy
- Measuring the Right Things When AI Overviews Change the Metrics
- The Bigger Picture for Marketing Leaders
What AI Overviews Actually Do to Organic Traffic
When I was running iProspect, we would talk about position one as the gold standard. Get there and the traffic followed, more or less predictably. That model held for a long time. AI Overviews are not the first thing to complicate it; featured snippets and knowledge panels did that years ago. But they are the most structurally significant shift yet.
The mechanism is straightforward. Google’s AI synthesises information from multiple sources and presents a direct answer at the top of the page. The sources cited appear as small reference links beneath the summary. Users who get a satisfactory answer from the AI block have no reason to scroll further, and many do not. This is zero-click behaviour at scale, and it is particularly pronounced on mobile, where the AI block can occupy most of the visible screen.
For publishers and brands who built traffic strategies around informational content, the exposure is real. The pages most at risk are those answering simple, definitive questions: definitions, how-to basics, comparison summaries. These are exactly the queries where AI Overviews appear most frequently and where the AI block is most likely to satisfy the user without a click.
The pages least at risk are those where intent is transactional, where the user needs to do something (buy, book, act), or where the content is genuinely proprietary: based on original data, first-hand experience, or a perspective that cannot be synthesised from existing sources.
If you want to go deeper on how AI is reshaping search and content strategy together, the AI Marketing hub covers the broader picture beyond just the search layer.
How Google Decides What Goes Into an AI Overview
This is where a lot of the current advice gets speculative. Nobody outside Google knows the exact weighting. What we can observe from the results, and from the pattern of which sources get cited, is that a few factors appear consistently.
First, structured clarity. Content that states its answer directly, early, and without burying it in preamble is more likely to be pulled. This is not new advice. It is the same logic that governed featured snippet optimisation. If your content takes three paragraphs to get to the point, the AI is likely to find a source that does not.
Second, topical authority. Google’s AI appears to favour sources that have demonstrated consistent, credible coverage of a subject area. A single well-written article on a topic you rarely cover is less likely to be cited than a body of work that signals genuine expertise. This is why thin content farms, even those that rank reasonably well, are being displaced by sources with stronger topical depth.
Third, E-E-A-T signals. The addition of the first E, for Experience, is significant here. Content that reflects first-hand knowledge, that cites specific situations, real decisions, and actual outcomes, registers differently with AI systems than content assembled from secondary sources. The Moz research on AI content quality points in this direction, with experience-driven content holding up better in AI-influenced search environments than purely synthetic outputs.
Fourth, technical hygiene. Schema markup, clean HTML structure, and logical heading hierarchies all make it easier for crawlers and AI systems to parse and attribute your content correctly. This has always mattered. It matters more now.
The Citation Problem Most Brands Are Ignoring
There is a version of AI Overviews success that looks like failure on a traffic dashboard. Your content gets cited. Users read the summary. Nobody clicks through. Impressions go up. Sessions go flat or decline.
I have seen this pattern with featured snippets for years. A client celebrates the snippet, then notices the click-through rate on that page has dropped. The visibility metric and the business metric diverge. AI Overviews will amplify this tension considerably.
The honest answer is that for some content types, citation without a click is still commercially valuable. Brand attribution, credibility signalling, and appearing as a trusted source in AI-generated answers all have value, particularly in categories where trust drives downstream purchase decisions. A financial services brand cited repeatedly in AI answers about investment basics is building something real, even if the immediate traffic conversion is low.
For other content types, particularly mid-funnel educational content that was previously converting readers into leads, the model needs rethinking. If the AI is answering the question that used to pull people into your funnel, you need to move the funnel entry point to content the AI cannot fully replicate. That means original research, proprietary data, tools, calculators, and content that requires interaction rather than just reading.
The Ahrefs webinar on AI SEO covers some of this territory well, including the distinction between content that AI will summarise and content that AI cannot replace.
What Good AI Overviews Optimisation Looks Like in Practice
I want to be specific here rather than abstract, because most of what is written about this topic stays at the level of principle without getting to execution.
Start with a content audit framed around query intent. Go through your highest-traffic informational pages and ask a simple question: if Google’s AI answered this query completely, would the user still need to visit my page? If the answer is no, that page is exposed. It may still rank, it may still get cited, but the traffic case for it is weakening.
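The audit question above can be made operational. A minimal sketch, assuming a Search Console export with `page`, `impressions`, and `clicks` columns (the column names, thresholds, and the CTR cut-off are all illustrative, not a standard): pages earning plenty of impressions but very few clicks are a common symptom of the query being answered before the click.

```python
from collections import defaultdict

def flag_exposed_pages(rows, ctr_threshold=0.02, min_impressions=1000):
    """Flag pages whose organic CTR suggests the query is being
    answered before the click. Thresholds are illustrative and
    should be tuned against your own baseline CTR by position."""
    totals = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    for row in rows:
        totals[row["page"]]["impressions"] += int(row["impressions"])
        totals[row["page"]]["clicks"] += int(row["clicks"])

    exposed = []
    for page, t in totals.items():
        if t["impressions"] < min_impressions:
            continue  # too little data to judge this page
        ctr = t["clicks"] / t["impressions"]
        if ctr < ctr_threshold:
            exposed.append((page, round(ctr, 4)))
    # lowest CTR first: the pages most likely being intercepted
    return sorted(exposed, key=lambda x: x[1])
```

This does not prove AI interception on its own; it produces a shortlist worth checking manually against the actual results pages.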
For those pages, you have three options. First, accept the traffic decline and focus your effort elsewhere. Second, add a layer of value the AI cannot replicate: a downloadable asset, an interactive element, a deeper proprietary angle. Third, restructure the content to serve as a citation magnet rather than a traffic driver, optimising for brand attribution inside the AI block rather than click-through.
For new content, build with the AI layer in mind from the start. That means opening with a direct answer to the primary question, using clear H2 and H3 structure to separate sub-questions, writing in plain English with short declarative sentences, and including specific examples and data points that give the content credibility. The Moz guidance on AI-assisted content writing has useful practical framing here, particularly around how structure affects AI readability.
Schema markup is not optional anymore. FAQ schema, HowTo schema, and Article schema all give Google’s systems cleaner signals about what your content is and what question it answers. I have seen clients dismiss schema as a developer task with marginal returns. That view needs updating. The returns are not marginal when AI systems are using structured data to decide what to cite.
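To make the schema point concrete, here is a minimal sketch that generates Article and FAQPage JSON-LD for embedding in a `<script type="application/ld+json">` tag. The field values are placeholders, and which schema types earn rich results changes over time, so treat this as the shape of the markup rather than a guarantee of how Google will use it.

```python
import json

def build_jsonld(headline, author, url, questions):
    """Build Article and FAQPage JSON-LD blocks (schema.org
    vocabulary). All values here are illustrative placeholders."""
    article = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "mainEntityOfPage": url,
    }
    faq = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in questions
        ],
    }
    # each block is embedded separately in its own script tag
    return json.dumps(article, indent=2), json.dumps(faq, indent=2)
```

Validating the output with Google’s Rich Results Test before deploying is the sensible final step.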
The Semrush overview of AI optimisation tools covers the technical side of this well if you want a tooling perspective alongside the strategic one.
The E-E-A-T Angle Is Not Soft, It Is Structural
When Google updated its quality guidelines to add Experience alongside Expertise, Authoritativeness, and Trustworthiness, a lot of content marketers treated it as a vague editorial note. It is not. It is a structural signal about what kind of content Google’s systems are being trained to prefer.
I think about this through a lens I have used for years in agency pitches. There are two types of marketing content: content that tells people what to think, and content that shows them what you know. The second type is harder to produce and harder to replicate. It is also the type that holds up better in an AI-mediated search environment, because it contains information that cannot be assembled from other sources.
When I launched a paid search campaign for a music festival at lastminute.com and watched six figures of revenue come in within roughly a day, that was a specific experience with a specific outcome. That kind of first-hand knowledge, translated into content, is exactly what E-E-A-T is rewarding. Not because it is impressive, but because it is attributable, specific, and cannot be fabricated by an AI scraping secondary sources.
For brands, this means getting the people with actual operational knowledge to contribute to content, not just the content team. It means citing internal data, even when it is imperfect. It means writing about what you have done, not just what people should do. The gap between those two things is where E-E-A-T lives.
How AI Overviews Are Changing Keyword Strategy
The conventional keyword strategy model prioritises search volume and competition. High volume, manageable competition, write the best page on the topic, build links, rank. That model is not broken, but it is incomplete for a world where AI Overviews intercept a significant share of high-volume informational queries.
The adjustment I would make is to layer in a third dimension: AI intercept likelihood. Queries that are short, factual, and have a clear single answer are high intercept. Queries that are longer, more nuanced, or require a decision rather than a fact are lower intercept. Allocating content investment accordingly is not giving up on informational content; it is being honest about where the traffic will actually land.
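The intercept dimension can be roughed out with a scoring heuristic. This is a deliberately crude sketch: the word lists, weights, and band cut-offs are invented for illustration, and a real version would be calibrated against observed AI Overview appearances for your own query set.

```python
def intercept_likelihood(query):
    """Rough heuristic: short, fact-shaped, single-answer queries
    score high (likely intercepted by an AI Overview); longer,
    decision-oriented queries score low. All thresholds and word
    lists are illustrative assumptions, not observed Google logic."""
    q = query.lower()
    words = q.split()
    score = 0
    if len(words) <= 4:
        score += 2  # short queries tend to have one clean answer
    if q.startswith(("what", "who", "when", "define", "how many")):
        score += 2  # fact-shaped question
    if any(m in q for m in ("best", "vs", "should", "review", "for my")):
        score -= 2  # judgement call; user likely wants depth, not a summary
    return "high" if score >= 3 else "medium" if score >= 1 else "low"
```

Running this over a keyword export gives a first-pass triage of where AI Overviews are most likely to sit between you and the click.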
Long-tail queries with genuine specificity are holding up better. Not because AI cannot answer them, but because the specificity often means there are fewer high-quality sources to synthesise from, and the user is more likely to want to go deeper than a summary allows. This is where the Semrush AI content strategy framework has useful thinking, particularly around matching content depth to query complexity.
There is also a category of queries where AI Overviews create a genuine opportunity rather than a threat. If your competitors are not producing structured, authoritative content on a topic, and you are, the AI block may consistently cite you while ignoring them. That is a competitive advantage that compounds over time, particularly in industries where content quality is still low.
Early in my career, when I taught myself to code to build a website because the budget was not there, the underlying logic was the same: the constraint forces you to find the angle that others have not bothered with. Most brands are still producing content the way they did in 2019. That gap is exploitable.
Measuring the Right Things When AI Overviews Change the Metrics
One of my consistent frustrations in agency leadership was clients measuring the wrong things with great precision. Traffic as a proxy for performance. Rankings as a proxy for visibility. Both are useful. Neither is the point.
AI Overviews make this problem worse before it gets better. If your content is being cited in AI blocks but not generating clicks, your Google Search Console data will show impressions without sessions. Your organic traffic report will look flat or declining. Your ranking report may show strong positions. None of these individually tells you whether your AI Overviews presence is working commercially.
The measurement framework I would build starts with three questions. First, are we being cited? Track this manually or with tools that monitor AI Overview appearances for your target queries. Second, when we are cited, what is the click-through rate compared to pages where we are not cited in the AI block? Third, for the traffic that does arrive from cited pages, what is the downstream conversion behaviour compared to organic traffic from non-AI-intercept queries?
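The three questions above reduce to a small calculation once the data is collected. A minimal sketch, assuming you have per-query rows with clicks, impressions, and a manually tracked (or tool-supplied) `cited` flag for whether your page appeared in the AI block; the data shape is an assumption, not a Search Console export format.

```python
def citation_ctr_gap(query_stats):
    """Compare organic CTR for queries where the page is cited in
    the AI Overview vs queries where it is not, plus the overall
    citation rate across the tracked query set."""
    def ctr(rows):
        clicks = sum(r["clicks"] for r in rows)
        imps = sum(r["impressions"] for r in rows)
        return clicks / imps if imps else 0.0

    cited = [r for r in query_stats if r["cited"]]
    uncited = [r for r in query_stats if not r["cited"]]
    return {
        "cited_ctr": round(ctr(cited), 4),
        "uncited_ctr": round(ctr(uncited), 4),
        "citation_rate": round(len(cited) / len(query_stats), 4) if query_stats else 0.0,
    }
```

A high citation rate paired with a depressed cited CTR is exactly the awareness-without-conversion pattern described below it; the number makes the trade-off explicit rather than anecdotal.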
This gives you a clearer picture than aggregate traffic trends. It also gives you something actionable: if citation is high but click-through is low, the content is working for awareness but not for conversion, and you can make a deliberate choice about whether that is acceptable for that content type.
The Ahrefs AI tools webinar covers some of the practical measurement approaches available right now, including how to use existing tooling to track AI visibility alongside traditional rank data.
The Bigger Picture for Marketing Leaders
I have spent time as an Effie judge, which means I have seen behind the curtain of what marketing effectiveness actually looks like when it is measured rigorously. One of the consistent patterns is that the brands that hold up best through channel disruption are those that have built genuine audience relationships, not just traffic pipelines.
AI Overviews are a channel disruption. The brands most exposed are those whose relationship with their audience exists only through Google’s intermediation. If your content strategy is essentially “rank for queries, intercept intent, convert traffic,” you are operating a pipeline that Google is now partially rerouting.
The brands least exposed have direct relationships: email lists, communities, returning visitors, branded search volume that reflects genuine preference rather than just availability. These are not new ideas. They are ideas that AI Overviews make newly urgent.
That does not mean abandoning organic search strategy. It means building it as one layer in a broader audience strategy, not as the whole strategy. The technical work of optimising for AI citation is worth doing. The structural work of building an audience that does not depend entirely on Google’s goodwill is more worth doing.
There is more on how AI is reshaping the broader marketing stack, not just search, across the AI Marketing section of The Marketing Juice. If AI Overviews are your entry point into this topic, the wider picture is worth understanding alongside the search-specific mechanics.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
