Gen AI SEO: How Search Is Changing and What to Do Now
Gen AI SEO is the practice of optimising content so it surfaces in AI-generated answers, not just traditional search results. As tools like ChatGPT, Gemini, and Perplexity pull answers directly from the web, the question of how your content gets selected, cited, and trusted has become as commercially important as ranking on page one.
The mechanics are shifting. The fundamentals are not. What changes is where the answer appears and how it gets assembled. What stays constant is whether your content is genuinely useful, clearly structured, and credible enough to be cited.
Key Takeaways
- AI-generated answers are pulling traffic away from traditional organic clicks, but sites with strong topical authority are holding up better than those chasing volume alone.
- Structured, specific, and citable content performs better in AI overviews than thin content optimised purely for keyword density.
- E-E-A-T signals, schema markup, and clean site architecture matter more now, not less, because AI systems use them as credibility signals.
- Optimising for AI search and optimising for traditional search are not separate strategies. The same quality standards underpin both.
- The biggest risk right now is reacting to AI search with volume plays. Publishing faster and thinner is the wrong response to a system that rewards depth and trust.
In This Article
- What Is Gen AI SEO and Why Does It Matter Now?
- How AI Search Systems Select and Cite Content
- E-E-A-T: The Credibility Framework That AI Systems Use
- Content Structure for AI Search: What Actually Works
- Technical SEO Still Matters: Site Architecture and Crawlability
- The Topical Authority Model and Why It Matters More Now
- What to Stop Doing: The Gen AI SEO Mistakes Worth Avoiding
- A Practical Gen AI SEO Audit: Where to Start
What Is Gen AI SEO and Why Does It Matter Now?
For most of the last decade, SEO was a fairly legible game. You identified what people searched for, built content around it, earned links, and tracked rankings. The feedback loop was slow but comprehensible. Then generative AI arrived at the top of the search results page, and a meaningful portion of that feedback loop got disrupted.
Google’s AI Overviews now appear for a wide range of informational queries. Perplexity is growing as a research tool. ChatGPT has a browsing mode. Gemini is embedded into Google’s core interface. These systems do not just rank pages. They read them, synthesise them, and produce a single answer, sometimes with citations, sometimes without. The user gets what they need without clicking through.
The commercial implication is real. Semrush’s research into AI search and SEO traffic found measurable click-through reductions for queries where AI Overviews appear. That is not a reason to panic. It is a reason to understand what kind of content still gets clicked, still gets cited, and still builds the kind of authority that AI systems rely on when they construct their answers.
I have spent a lot of time over the past year looking at this from a commercial lens rather than a technical one. The marketers I see struggling with AI search are mostly the ones who built their SEO programmes around volume: produce more, publish faster, target more keywords. That approach was already showing diminishing returns before AI search arrived. Now it is actively counterproductive.
How AI Search Systems Select and Cite Content
Understanding how AI systems choose what to include in a generated answer is the foundation of any sensible gen AI SEO strategy. These systems are not random. They are trained to prioritise content that is authoritative, well-structured, and directly responsive to the query.
A few patterns are emerging from observation and from what Google has published about how AI Overviews work. First, pages that already rank well in traditional search tend to appear in AI Overviews more often. This is not a coincidence. The signals that make a page rank (credibility, relevance, clear structure) are the same signals AI systems use to assess whether a page is worth citing. Semrush’s analysis of SEO traffic generation reinforces the point: authority and relevance remain the core variables.
Second, content that answers a specific question directly, in plain language, near the top of the page, is more likely to be pulled into an AI-generated answer. This is the same logic behind featured snippets. If you structure your content so a human reader can find the answer in ten seconds, an AI system can too.
Third, schema markup helps. It does not guarantee inclusion, but it gives AI systems and search engines structured signals about what your content is about, who wrote it, and what it covers. For a deeper look at how this fits into a broader SEO framework, the Complete SEO Strategy hub covers the foundational elements that support both traditional and AI search performance.
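To make the schema point concrete, here is a minimal sketch in Python that builds a schema.org Article JSON-LD payload and wraps it in the script tag you would embed in a page’s HTML. The field names follow the schema.org Article type; the headline, author, date, and URL values are placeholders, not a prescription.

```python
import json

def article_schema(headline, author_name, date_published, url):
    """Build a minimal schema.org Article JSON-LD payload.

    Field names follow the schema.org Article type; the values
    passed in are placeholders to swap for your own content.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author_name},
        "datePublished": date_published,  # ISO 8601 date string
        "url": url,
    }

schema = article_schema(
    headline="Gen AI SEO: How Search Is Changing",
    author_name="Keith Lacy",
    date_published="2024-01-01",  # placeholder date
    url="https://example.com/gen-ai-seo",
)

# Embed the payload in a <script type="application/ld+json"> tag
# inside the page's <head>.
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(schema)
    + "</script>"
)
print(snippet)
```

Generating the markup programmatically rather than hand-writing it per page keeps the structured data consistent across a large content inventory, which is exactly the kind of reliability signal this section describes.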
E-E-A-T: The Credibility Framework That AI Systems Use
Google’s E-E-A-T framework (Experience, Expertise, Authoritativeness, and Trustworthiness) was designed to help human quality raters assess content. It has become more commercially relevant as AI systems have been trained on similar quality signals.
Experience is the newest addition to the framework and the one most often underused. It asks whether the person writing the content has direct, first-hand experience with the topic. This is not about credentials. It is about demonstrating that you have actually done the thing you are writing about. An article on managing agency growth written by someone who has managed agency growth reads differently from one assembled from secondary sources. AI systems are increasingly good at distinguishing between the two.
When I was running iProspect and we were scaling from 20 to over 100 people, I was not thinking about E-E-A-T. But the principle was already true: clients trusted our advice because we had done the work across hundreds of campaigns and dozens of industries, not because we had read about it. That direct experience is what makes content worth citing, whether the reader is a human or a language model.
Expertise and authoritativeness are built over time through consistent, accurate, specific content on a defined topic set. Trustworthiness is supported by technical signals: HTTPS, accurate author information, clear sourcing, and schema markup that confirms who you are and what you publish. The Ahrefs webinar on AI and SEO covers how these signals interact with AI search systems in practical terms.
Content Structure for AI Search: What Actually Works
There is a version of gen AI SEO advice that tells you to write for AI systems as though they are a different audience from your human readers. That framing is wrong. The content structures that perform well in AI search are the same ones that perform well with informed human readers: clear hierarchy, direct answers, specific examples, and minimal padding.
A few structural principles are worth building into your content process.
Answer the question early. Whatever your article covers, the first two or three paragraphs should contain a direct, complete answer to the primary question. This serves featured snippets, AI Overviews, and readers who are time-poor. Everything that follows can add depth, nuance, and context.
Use H2 and H3 headers as questions. AI systems use headers to understand the structure of a page and the scope of what it covers. Headers phrased as questions map directly to the kinds of queries people type or speak. They also make your content more scannable for human readers.
Be specific rather than comprehensive. One of the patterns I noticed when judging the Effie Awards was that the most effective campaigns were almost always built around a single, sharp insight rather than a broad strategic framework. The same principle applies to content. A piece that answers one question with genuine depth and specificity outperforms a piece that covers ten questions at surface level, in both traditional search and AI search.
Use structured data. FAQ schema, Article schema, and HowTo schema all give AI systems and search engines machine-readable signals about what your content contains. Search Engine Land’s breakdown of common SEO errors includes the failure to use structured data as a recurring theme in underperforming sites.
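As a worked example of the structured data point, the sketch below builds a schema.org FAQPage payload from question-and-answer pairs. The @type names (FAQPage, Question, Answer) come from the schema.org vocabulary; the example pair is illustrative only.

```python
import json

def faq_schema(pairs):
    """Build a schema.org FAQPage JSON-LD payload from
    (question, answer) string pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

schema = faq_schema([
    ("What is gen AI SEO?",
     "Optimising content so it surfaces in AI-generated answers, "
     "not just traditional search results."),
])
print(json.dumps(schema, indent=2))
```

Note how the question text in the markup mirrors a question a user might actually type, which is the same logic as phrasing H2 and H3 headers as questions.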
Cite sources clearly. AI systems are trained to prefer content that references credible external sources. This is not about link building. It is about demonstrating that your content is grounded in something beyond your own opinion. Where you have data, name it. Where you have a source, link to it.
Technical SEO Still Matters: Site Architecture and Crawlability
There is a tendency in AI search discussions to focus entirely on content quality and ignore the technical layer. That is a mistake. AI systems rely on search engine indices to find and evaluate content. If your site has crawlability problems, duplicate content issues, or a confusing architecture, the content quality question becomes moot.
I have seen this play out in practice. During a turnaround engagement with a mid-size retail business, we found that a significant portion of their best content was effectively invisible to search engines because of a poorly structured subdomain setup. The content was good. The architecture undermined it completely. Moz’s analysis of subfolder versus subdomain decisions is a useful reference for anyone working through similar architecture questions.
For businesses managing multiple sites or considering domain consolidation, the domain structure decision has downstream effects on how AI systems assess authority. A single, well-organised domain with clear topical depth tends to perform better than authority spread across multiple properties. Moz’s Whiteboard Friday on the one domain versus many decision covers this in detail.
Core technical requirements for AI search are not different from traditional SEO requirements. Fast load times, clean crawl paths, canonical tags used correctly, and XML sitemaps that accurately reflect your content inventory. What has changed is the margin for error. When AI systems are assessing whether to cite your content, a site with technical problems is competing against sites without them. The technical baseline is a filter, not a differentiator.
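One of those baseline requirements, a sitemap that accurately reflects your content inventory, can be generated rather than maintained by hand. The sketch below follows the sitemaps.org protocol; the URL and date are hypothetical placeholders.

```python
from xml.etree import ElementTree as ET

def build_sitemap(pages):
    """Build an XML sitemap (sitemaps.org protocol) from a list of
    (url, last_modified) tuples. last_modified is an ISO 8601 date."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([
    ("https://example.com/gen-ai-seo", "2024-01-15"),  # placeholder entry
])
print(xml)
```

Regenerating the sitemap from your actual page inventory on each publish is what keeps it accurate; a stale, hand-edited sitemap is one of the quiet technical errors this section warns about.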
The Topical Authority Model and Why It Matters More Now
Topical authority is the idea that a site covering a specific subject area with depth and consistency is treated as more credible than a site covering many subjects superficially. This is not a new concept. It has been part of how Google evaluates sites for years. AI search has made it more important.
When an AI system constructs an answer, it is drawing from sources it has assessed as credible on that specific topic. A site that has published fifty well-researched articles on a defined subject area is a more reliable source than a site that has published five hundred articles across thirty unrelated topics. The depth signal matters.
Building topical authority requires a content strategy built around a defined set of subjects, not a keyword list. The difference is meaningful. A keyword list tells you what people search for. A topical map tells you what you need to cover to be considered authoritative on a subject. For a site in the insurance space, for example, Ahrefs’ SEO framework for insurance agents illustrates how topical depth works in a competitive, high-trust category.
The practical implication is that most content programmes need to be narrowed, not expanded. I have worked with marketing teams that were producing content at high volume across a wide range of topics, none of them covered with sufficient depth to establish authority. The instinct when traffic drops is to publish more. The right response is almost always to publish better, on fewer subjects, with more depth.
What to Stop Doing: The Gen AI SEO Mistakes Worth Avoiding
There are several responses to AI search that are understandable but counterproductive. It is worth naming them directly.
Publishing AI-generated content at scale. The irony of using generative AI to produce content for AI search is not lost on anyone paying attention. AI-generated content that has not been substantively edited, enriched with genuine expertise, or grounded in first-hand experience is exactly the kind of content that performs poorly in AI search. It lacks the specificity, the credibility signals, and the direct experience that AI systems are trained to prefer. The volume play is a race to the bottom.
Treating AI search as a separate channel. I have seen agencies pitch “AI search optimisation” as a distinct service, separate from SEO. In most cases, this is a repackaging exercise. The foundations are the same. What changes is the emphasis: more structured data, more direct answers, more topical depth. If your SEO programme is already well-run, you are most of the way there.
Ignoring the traffic measurement problem. AI Overviews and AI-generated answers do not always pass referral traffic in ways that are easy to track. Zero-click outcomes are real. If you are measuring the success of your content programme purely by organic click volume, you are missing part of the picture. Brand search, direct traffic, and assisted conversions all need to be part of the measurement framework. Analytics tools are a perspective on reality, not reality itself, and that has never been more true than in the current search environment.
Waiting for the dust to settle. AI search is not a temporary disruption. The direction of travel is clear. Waiting for a stable set of rules before acting is a way of ceding ground to competitors who are building topical authority and content quality now.
The broader SEO strategy context for all of this is covered in the Complete SEO Strategy hub, which addresses how these individual components fit into a coherent, commercially grounded programme.
A Practical Gen AI SEO Audit: Where to Start
If you are looking at your current SEO programme and trying to assess how well it is positioned for AI search, a few diagnostic questions will tell you most of what you need to know.
First, does your content answer specific questions directly? Take your ten most important pages and check whether the primary question is answered in the first two paragraphs. If it is buried under context and background, restructure it.
Second, is your schema markup complete and accurate? Article schema, FAQ schema, and author markup should be present across your key content. If they are missing or incomplete, this is a quick technical fix with meaningful upside.
Third, is your author information credible and verifiable? AI systems and Google’s quality assessment both weight author credibility. Named authors with verifiable credentials, linked author pages, and consistent publishing histories perform better than anonymous or generic bylines.
Fourth, is your site architecture clean? Crawl your site with a tool like Screaming Frog and look for orphaned pages, duplicate content, and broken internal links. These are not AI-specific problems, but they become more costly as AI systems use site quality as a credibility filter.
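The orphaned-page and broken-link checks above can be sketched as a small audit over a crawled link graph. The data shape here is hypothetical: a real crawl export from a tool like Screaming Frog would need mapping into this page-to-outlinks form first.

```python
def audit_links(link_graph, entry="/"):
    """Find orphaned pages and broken internal links in a crawl.

    link_graph maps each known page to the internal URLs it links to.
    A page is orphaned if no link path from the entry page reaches it;
    a link is broken if its target is not a known page.
    """
    known = set(link_graph)

    # Walk the graph from the entry page to find reachable pages.
    reachable, stack = set(), [entry]
    while stack:
        page = stack.pop()
        if page in reachable or page not in known:
            continue
        reachable.add(page)
        stack.extend(link_graph[page])

    orphaned = known - reachable
    broken = {(src, dst)
              for src, links in link_graph.items()
              for dst in links if dst not in known}
    return orphaned, broken

# Hypothetical crawl of a small site.
graph = {
    "/": ["/blog", "/about"],
    "/blog": ["/blog/post-1", "/missing"],  # /missing does not exist
    "/blog/post-1": [],
    "/about": [],
    "/old-page": [],  # nothing links here
}
orphaned, broken = audit_links(graph)
print(orphaned)  # {'/old-page'}
print(broken)    # {('/blog', '/missing')}
```

Running a check like this on every release catches architecture drift early, before it accumulates into the kind of invisibility problem described in the retail turnaround example above.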
Fifth, is your content programme focused on a defined topic set? If your content covers everything, it is authoritative on nothing. Map your existing content against a defined topical model and identify the gaps and the noise.
None of this is complex. Most of it is a return to fundamentals that were always true but are now more consequential. The marketers who will perform well in AI search are the ones who build content programmes around genuine expertise, clear structure, and commercial purpose, not the ones chasing the latest technical hack.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
