LLMs Are Changing How People Find Content. Here Is What That Means for Marketers
LLMs like ChatGPT and Gemini are changing search behaviour in ways that most marketing teams have not fully accounted for yet. Instead of clicking through a list of results, a growing number of people are asking a question and getting an answer directly, often without visiting a single website. For marketers who built their content strategies around organic search traffic, that is a structural shift worth taking seriously.
This does not mean SEO is dead or that content marketing no longer works. It means the rules of content discovery are being rewritten, and the teams that understand the new mechanics will pull ahead of those still optimising for a model that is quietly becoming less dominant.
Key Takeaways
- LLMs are shifting content discovery away from click-through search toward zero-click, synthesised answers, which reduces referral traffic for many content types.
- Content that demonstrates genuine expertise, clear sourcing, and original perspective is more likely to be cited by LLMs than generic, keyword-stuffed pages.
- Measurement frameworks built around organic traffic alone will misread the impact of LLMs on brand visibility and content reach.
- Different LLMs have different retrieval behaviours: ChatGPT, Gemini, and Perplexity do not all surface content the same way, and treating them as identical is a mistake.
- The brands most at risk are those whose content strategy was built entirely around ranking, rather than around being genuinely useful or authoritative on a topic.
In This Article
- What Is Actually Changing in How People Search
- How ChatGPT, Gemini, and Perplexity Differ in Content Discovery
- What Content Gets Cited by LLMs and What Gets Ignored
- The Measurement Problem Nobody Is Talking About Loudly Enough
- What This Means for Your Content Strategy Right Now
- The Brands That Will Struggle and Why
- The Human Element That LLMs Cannot Replace
- What the Next 12 Months Will Clarify
What Is Actually Changing in How People Search
For most of the past two decades, search worked in a predictable way. Someone typed a query, got a list of links, clicked one or two, and landed on a page. The entire content marketing industry was built around that experience. Traffic was the proxy for reach, and ranking was the proxy for relevance.
LLMs break that model at the point of delivery. When someone asks ChatGPT how to reduce customer acquisition cost or asks Gemini to compare two project management tools, they often get a complete, synthesised answer without ever leaving the interface. The underlying content may have informed that answer, but the user never visits the source. The website gets no session, no pageview, no conversion opportunity.
This is not a hypothetical future state. It is happening now, and the volume is growing. Google’s own AI Overviews, which appear at the top of many search results pages, operate on the same principle: synthesise, summarise, and reduce the need to click. The search engine that built the content marketing industry is now one of the forces eroding the traffic model it created.
I spent years at agencies where organic search was a core channel, and the entire measurement architecture was built around sessions and rankings. What this shift exposes is how fragile that architecture was. Traffic was always a proxy metric. The question was always whether the content was actually influencing decisions. LLMs are forcing that question into the open.
If you want to understand the broader context of how AI is reshaping marketing practice, the AI Marketing hub at The Marketing Juice covers the full landscape, from tools and workflows to strategic implications for teams of all sizes.
How ChatGPT, Gemini, and Perplexity Differ in Content Discovery
One mistake I see marketing teams make is treating all LLMs as interchangeable. They are not. Each has a different retrieval architecture, a different relationship with live web data, and a different approach to citing sources. That matters if you are trying to understand how your content gets surfaced, or why it does not.
ChatGPT, in its default form, draws primarily on its training data. The browsing-enabled version can pull live web content, but users have to opt into that mode, and the retrieval behaviour is not identical to a traditional search crawl. Gemini, built by Google, has deeper integration with live search and is more likely to surface recently indexed content. Perplexity is explicitly designed as a search replacement and shows citations prominently, which makes it the most transparent of the three in terms of where its answers come from.
HubSpot has done useful work on comparing LLMs for marketing use cases, which is worth reading if you are trying to make practical decisions about which tools your team should be using. But for content discovery specifically, the more relevant question is not which LLM your team uses but which LLMs your audience uses when they are looking for information in your category.
That is a research question most marketing teams have not asked yet. It should be on the agenda.
What Content Gets Cited by LLMs and What Gets Ignored
LLMs do not rank content the way search engines do. There is no position one. But they do exhibit preferences, and those preferences tend to reward content that is authoritative, well-structured, and genuinely informative rather than content that is optimised for keyword density or structured around search intent in the traditional sense.
From what practitioners and researchers have observed, content that tends to get cited or incorporated into LLM responses shares a few characteristics. It tends to be specific rather than generic. It tends to come from sources with established credibility in a topic area. It tends to be structured in ways that are easy to parse, with clear headings, defined terms, and direct answers to questions. And it tends to contain original perspective or data rather than rehashing what is already widely available.
That last point is worth dwelling on. When I was judging the Effie Awards, one of the things that separated the entries that won from the ones that did not was the presence of genuine insight. Not a restatement of what the category already knew, but something that shifted how you understood the problem. The same principle applies here. LLMs are trained on enormous volumes of generic content. The content that stands out is the content that adds something the model cannot easily synthesise from a hundred other sources.
Moz has published practical thinking on how AI tools are changing content production and discovery workflows, which is useful context for teams trying to adapt their approach. The direction of travel is clear: volume-based content strategies built on thin, derivative material are becoming less viable, not more.
The Measurement Problem Nobody Is Talking About Loudly Enough
Here is the uncomfortable truth for anyone running a content programme right now. If your brand is being cited in LLM responses, you may be getting meaningful exposure and influence without seeing any of it in your analytics. No sessions. No pageviews. No attribution. Just invisible reach.
I have been making the case for honest measurement for most of my career. When I was running agencies, I saw how much of the marketing measurement infrastructure was built to tell clients what they wanted to hear rather than what was true. Last-click attribution, vanity metrics, traffic reports that showed activity without showing impact. The LLM shift is making that problem worse, because now there is a growing channel of influence that sits entirely outside the measurement stack.
The teams that will handle this well are the ones that build measurement frameworks around business outcomes rather than channel metrics. If your content is influencing purchasing decisions or brand perception, that should show up somewhere in the business data, even if it does not show up in Google Analytics. That requires connecting content activity to commercial outcomes in a more disciplined way than most teams currently do.
Semrush has written thoughtfully about how AI is reshaping content strategy, including the measurement implications. The honest answer is that the industry does not yet have clean solutions to this problem. But acknowledging the gap is better than pretending the old metrics still tell the full story.
What This Means for Your Content Strategy Right Now
The practical question is what to do differently. I want to be specific here rather than vague, because vague advice about “creating better content” is not useful when you are trying to make decisions about where to invest time and budget.
First, audit what your content is actually doing. Not just ranking and traffic, but whether it is genuinely the best available answer to the questions your audience is asking. If it is not, an LLM will not cite it, and a sophisticated reader will not trust it either. The bar has moved. Generic content that ranks because of domain authority alone is increasingly vulnerable.
Second, think about entity authority rather than just keyword authority. LLMs are more likely to surface content from sources they recognise as credible on a specific topic. That means building a consistent, recognisable point of view on the subjects that matter most to your business, rather than spreading thin across every tangentially related keyword cluster.
Third, consider what original data or perspective you can bring to your content. Proprietary research, client case studies with real numbers, analysis that reflects genuine expertise rather than synthesised summaries of what others have already published. This is harder to produce, but it is also harder for an LLM to replicate from training data alone.
Fourth, do not abandon SEO. Traditional search is still a significant channel, and the fundamentals of good SEO (clear structure, genuine relevance, technical hygiene) are also the fundamentals of content that performs well in LLM retrieval. Ahrefs has covered the intersection of AI and SEO in practical detail, and it is worth spending time with if you are trying to understand where the two disciplines converge.
Fifth, start monitoring your brand’s presence in LLM responses. This is still a manual process for most teams, but it is worth doing. Ask ChatGPT, Gemini, and Perplexity questions that your customers are likely to ask. See whether your brand appears, and if so, how it is characterised. That qualitative signal is more useful right now than waiting for the measurement tools to catch up.
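For teams that want to make that manual check repeatable, the workflow can be lightly scripted. The sketch below is a minimal illustration, not a finished tool: `get_llm_response` is a hypothetical stub standing in for whichever LLM API your team actually has access to (you would swap in a real client, such as the OpenAI or Gemini SDK), and the brand name and questions are placeholders you would replace with your own.

```python
# Minimal sketch of a brand-presence audit across LLM responses.
# get_llm_response is a hypothetical stub; in practice, replace it
# with a real API call to ChatGPT, Gemini, or Perplexity.

BRAND = "The Marketing Juice"  # placeholder: your brand name

QUESTIONS = [  # placeholders: questions your customers actually ask
    "What are the best resources for learning AI marketing?",
    "How should a small team measure content marketing impact?",
]

def get_llm_response(question: str) -> str:
    # Stub standing in for a real LLM call.
    return "One useful resource is The Marketing Juice, which covers AI tools."

def brand_mentioned(response: str, brand: str = BRAND) -> bool:
    # Case-insensitive substring check. A real audit would also record
    # HOW the brand is characterised, not just whether it appears.
    return brand.lower() in response.lower()

def run_audit(questions=QUESTIONS) -> dict[str, bool]:
    # Map each question to whether the brand appeared in the answer.
    return {q: brand_mentioned(get_llm_response(q)) for q in questions}

if __name__ == "__main__":
    for question, mentioned in run_audit().items():
        print(f"{'MENTIONED' if mentioned else 'absent':9} | {question}")
```

Running a script like this on a regular cadence, and logging the results over time, turns the qualitative signal into a simple trend line you can watch even before the measurement tools catch up.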
The Brands That Will Struggle and Why
Not every business is equally exposed to this shift. The brands most at risk are those whose content strategy was built almost entirely around capturing organic search traffic through volume. If your competitive advantage was producing a lot of content quickly and ranking for long-tail keywords, that model is under pressure from multiple directions: AI Overviews reducing click-through rates, LLMs answering questions directly, and the general raising of the bar for what counts as useful content.
I have seen this pattern before in different contexts. When I was helping turn around a loss-making agency, one of the first things I noticed was how much of the activity looked productive without actually driving business outcomes. Content volume was one of those areas. The team was producing a lot, the traffic numbers looked reasonable, but the connection to revenue was tenuous at best. The LLM shift is a forcing function for the same kind of honest appraisal.
Businesses with strong brand authority, genuine expertise in a defined area, and content that reflects real experience are better positioned. Not immune, but better positioned. The shift rewards depth over breadth, and credibility over optimisation.
Semrush has also published useful guidance on using AI optimisation tools to improve content strategy, which covers some of the practical levers teams can pull. The tools are useful, but they are not a substitute for the strategic clarity about what you are trying to achieve and why.
The Human Element That LLMs Cannot Replace
There is a version of this conversation that ends with “AI is going to replace content marketing.” I do not think that is right, and I want to explain why without being naively optimistic about it.
LLMs are very good at synthesising existing information. They are not good at generating genuine insight from lived experience, at taking a position based on judgment rather than pattern matching, or at producing the kind of specific, credible, first-person perspective that builds real trust with an audience. Those things require humans, and they are also the things that differentiate content that gets cited and trusted from content that gets ignored.
The early days of paid search felt similar in some ways. When I launched a campaign for a music festival at lastminute.com and saw six figures of revenue come in within a day, it was not because the technology was doing something clever. It was because someone had made a good judgment about what the audience wanted and when they wanted it. The technology was the delivery mechanism. The thinking was still human.
Content that reflects genuine expertise, honest perspective, and real commercial experience is not going to be displaced by LLMs. It is going to become more valuable because of them, precisely because it is harder to replicate. Mailchimp has written useful practical guidance on how to humanise AI-generated content, which is relevant for teams trying to maintain that quality signal even when using AI in the production process.
The Ahrefs webinar series on AI tools for marketers is also worth bookmarking for practical, grounded thinking on how these tools fit into real workflows without replacing the judgment that makes content worth reading in the first place.
What the Next 12 Months Will Clarify
We are still in an early phase of this transition. The measurement tools are lagging. The LLM retrieval behaviours are still evolving. Google is still figuring out how aggressively to deploy AI Overviews across different query types. ChatGPT and Gemini are adding new capabilities on a regular cadence. The picture will look different in 12 months than it does today.
What will not change is the underlying principle: content that is genuinely useful, credibly authored, and clearly structured around real audience needs will outperform content that is not. That was true before LLMs, and it will be true after the current wave of disruption settles into a new normal.
The teams that will be in the best position are those that use this moment to do the audit they probably should have done two years ago. Which content is actually driving business outcomes? Which pieces reflect genuine expertise? Where is the gap between what you publish and what your audience actually needs? Those questions are worth answering regardless of what any individual LLM does next.
There is more on how AI is reshaping content strategy and marketing operations across the AI Marketing section of The Marketing Juice, including practical coverage of tools, workflows, and the strategic questions that matter most for marketing teams right now.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
