Mad SEO: When Complexity Becomes the Strategy
Mad SEO is what happens when SEO stops being a means to an end and becomes an end in itself. Teams accumulate tactics, tools, dashboards, and frameworks until the original objective of getting qualified traffic that converts gets buried under the weight of the programme. It is one of the more common ways a well-intentioned SEO investment quietly stops working.
The antidote is not a simpler checklist. It is a harder question: what is this SEO programme actually for, and is the current complexity serving that goal or obscuring it?
Key Takeaways
- SEO complexity tends to grow faster than SEO results, and the gap between the two is where budget quietly disappears.
- Most SEO programmes carry at least three layers of activity that made sense at launch but have never been reviewed since.
- The highest-leverage SEO work is rarely the most technically sophisticated. It is usually the most commercially coherent.
- Measuring SEO by rankings and traffic volume, rather than by pipeline contribution, is how programmes survive without delivering.
- Simplifying an SEO programme is harder than building a complicated one, and it is almost always more valuable.
In This Article
- How SEO Programmes Get Mad in the First Place
- The Complexity Trap: More Signals, Fewer Results
- What Mad SEO Looks Like in Practice
- The Commercial Reset: What SEO Is Actually For
- How AI Has Made Mad SEO More Accessible
- The Simplification Playbook
- Reporting That Keeps Programmes Honest
- When to Call It and Start Again
How SEO Programmes Get Mad in the First Place
I have sat in enough agency reviews and client strategy sessions to recognise the pattern. An SEO programme starts with a clear brief: rank for these terms, drive this type of traffic, support these commercial goals. The first six months are disciplined. Then something shifts.
A competitor appears in a new keyword cluster. A Google update triggers a reactive content sprint. Someone reads a Whiteboard Friday and adds four new deliverables to the retainer. The tool stack expands. The reporting deck grows from eight slides to thirty. A new team member joins and brings their own methodology. Within eighteen months, the programme has doubled in scope and the original commercial objective has been replaced by a set of proxy metrics that everyone tracks but nobody can connect to revenue.
This is not a failure of ambition. It is a failure of governance. SEO is particularly vulnerable to it because the discipline is genuinely broad, the goalposts move constantly, and there is always a legitimate-sounding reason to add something new. The industry does not help. There is a near-infinite supply of tactical content encouraging teams to do more, layer in more signals, build more content, pursue more links. Very little of it asks whether the existing programme is working before recommending the next addition.
When I was running iProspect and growing the team from around twenty people to over a hundred, one of the disciplines I tried to hold onto was the distinction between activity and output. Activity is easy to generate and easy to report. Output, meaning commercial results that a client CFO would recognise as valuable, is harder to produce and harder to measure honestly. The programmes that stayed sharp were the ones where someone in the room kept asking what all this activity was actually delivering. The ones that went mad were the ones where nobody did.
If you are building or reviewing an SEO strategy, the full framework is covered in the Complete SEO Strategy hub, which pulls together the components that tend to matter most for commercially grounded programmes.
The Complexity Trap: More Signals, Fewer Results
There is a point in any SEO programme where adding complexity stops improving results and starts degrading them. Not because the individual tactics are wrong, but because the team’s attention is now spread across too many things to execute any of them well.
I have seen this most clearly in content programmes. A team decides to build topical authority across a broad keyword universe. They map hundreds of topics, assign writers, and begin producing. Six months in, they have a large volume of content that ranks for almost nothing, because no single topic area has enough depth or internal coherence to signal genuine authority to Google. The strategy was sound on paper. The execution was diluted by scale.
The same dynamic plays out in technical SEO. Teams accumulate audit findings over time, each one technically valid, and end up with a backlog of two hundred items that nobody has the development resource to address. The programme becomes defined by its backlog rather than its impact. Engineers deprioritise SEO requests because the list never gets shorter. The team reports on issues found rather than issues fixed.
Link building programmes do it too. Outreach lists grow, domain targets multiply, and the team spends more time managing a complex prospecting operation than actually securing links that move the needle. The relationship between usability and SEO performance is one area where teams often overcomplicate the picture, building elaborate link strategies while ignoring on-page factors that a development sprint could fix in a week.
Complexity in marketing has a way of delivering diminishing returns long before anyone notices. The warning sign is usually a reporting deck that takes longer to prepare than it takes to read, covering metrics that are interesting but not actionable. When that becomes normal, the programme has probably already gone mad.
What Mad SEO Looks Like in Practice
It is worth being specific about the symptoms, because they are easy to rationalise individually even when they are collectively damaging.
The first symptom is metric proliferation. The programme tracks rankings, organic sessions, click-through rates, impressions, crawl coverage, Core Web Vitals, domain authority, referring domains, content scores from a third-party tool, and a proprietary visibility index the agency invented. Each metric has a legitimate reason to exist. Together, they create a reporting environment where it is impossible to say clearly whether the programme is working. When I judged the Effie Awards, the entries that stood out were the ones that could articulate their impact in three numbers or fewer. SEO programmes that need thirty metrics to tell their story usually cannot tell it at all.
The second symptom is strategy by reaction. Every Google update, competitor move, or industry trend triggers a new workstream. The programme never has time to execute a coherent plan because it is always pivoting to the latest signal. This is how teams end up with half-built content clusters, abandoned link campaigns, and technical fixes that were started but never completed.
The third symptom is tool dependency without tool literacy. The programme runs on five or six platforms, each producing its own data, and nobody in the team can explain why the numbers sometimes contradict each other. Decisions get made based on whichever tool’s output is most convenient rather than which is most accurate. Analytics tools are a perspective on reality, not reality itself, and an SEO programme that has forgotten that distinction is flying blind with expensive instruments.
The fourth symptom is the absence of a kill list. Healthy programmes add things and remove things. Mad programmes only add. Old content that drives no traffic and serves no commercial purpose stays live because nobody has time to audit it. Link building tactics that stopped working two years ago remain on the delivery schedule because they were always on the delivery schedule. The programme grows heavier every quarter without growing more effective.
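The kill-list discipline can be made mechanical rather than aspirational. As a rough sketch only, with invented page names, fields, and a threshold chosen for illustration, a retirement candidate is any page with negligible organic traffic, no conversions, and no declared commercial purpose:

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    monthly_sessions: int   # organic sessions over the review period
    conversions: int        # outcomes attributed to the page
    is_evergreen: bool      # manually flagged as serving a commercial purpose

def kill_list(pages, session_floor=10):
    """Return pages that drive no meaningful traffic or conversions
    and serve no declared purpose: candidates to retire or redirect."""
    return [
        p for p in pages
        if p.monthly_sessions < session_floor
        and p.conversions == 0
        and not p.is_evergreen
    ]

pages = [
    Page("/old-campaign-2019", 2, 0, False),
    Page("/pricing", 800, 45, True),
    Page("/blog/seo-trends-2021", 5, 0, False),
]
for page in kill_list(pages):
    print(page.url)  # prints the two stale pages, not /pricing
```

The point of the sketch is the review cadence, not the code: running something like this quarterly forces the remove-things conversation that mad programmes never have.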
The Commercial Reset: What SEO Is Actually For
The most useful thing you can do with an SEO programme that has gone mad is stop adding to it and start asking what it is for.
This sounds obvious. It rarely happens in practice because the question requires honesty about what the programme is currently delivering, and that honesty is uncomfortable when the answer is less than the investment warrants. I have been in client meetings where the SEO team could tell you the domain authority to one decimal place but could not say how many leads or sales the channel had contributed in the last quarter. That is a programme that has confused its inputs for its outputs.
A commercial reset starts with three questions. First, what business outcomes is this programme supposed to support? Not traffic targets or ranking goals, but actual business outcomes: leads, revenue, customer acquisition, brand visibility in specific markets. Second, which parts of the current programme have a demonstrable connection to those outcomes? Not a theoretical connection, a demonstrable one. Third, what would you cut if the budget were halved tomorrow?
The third question is the most useful. It forces prioritisation that polite quarterly reviews never do. In my experience, most SEO programmes could cut 30 to 40 percent of their activity without reducing their commercial impact, because a significant portion of what they do is either legacy activity that made sense once, or complexity that was added to justify a retainer rather than to drive results.
Good copywriting and content strategy share this discipline. Forrester’s primer on writing great copy makes the point that clarity of purpose precedes quality of execution. The same applies to SEO. You cannot write good content for a programme that does not know what it is trying to achieve, and you cannot build a coherent link strategy if the target keyword set changes every time someone reads a new piece of industry commentary.
How AI Has Made Mad SEO More Accessible
Generative AI has done something interesting to SEO complexity. It has made the expensive parts cheap. Content production at scale, which used to require significant resource and therefore acted as a natural constraint, can now be done at volume with minimal friction. That sounds like a good thing. In practice, it has accelerated the mad SEO dynamic considerably.
Teams that previously published twenty articles a month are now publishing two hundred. The keyword coverage looks impressive in a spreadsheet. The commercial impact is often no better, and sometimes worse, because the content is thinner, less differentiated, and increasingly indistinguishable from the AI-generated content that every other site in the category is also producing.
The Moz analysis of generative AI for SEO content is worth reading on this point. The argument is not that AI content is bad. It is that AI content without editorial judgment and strategic coherence tends to produce volume without value. That is a precise description of mad SEO applied to content: more of something that was not working well enough to begin with.
The constraint that AI removes is not the constraint that was limiting SEO performance. Most programmes are not limited by content volume. They are limited by content quality, topical coherence, and the strength of the commercial proposition they are built around. AI does not fix any of those things. It just makes it faster and cheaper to produce content that does not fix them either.
I am not anti-AI in SEO. I use it and I think it has genuine utility in research, drafting, and scaling certain types of content production. But it needs to be applied within a programme that knows what it is for. Giving AI tools to a programme that has already gone mad is like giving a faster car to someone who is driving in the wrong direction.
The Simplification Playbook
Simplifying an SEO programme is genuinely difficult. Not because the work is technically complex, but because it requires saying no to things that have internal advocates, cutting activity that someone spent time building, and accepting that a smaller, more focused programme might outperform a larger, more complicated one. That is a hard sell in most organisations.
The approach I have found most useful is to start with the output layer and work backwards. Begin with the commercial outcomes the programme is supposed to drive. Then identify which keyword clusters have a credible path to those outcomes, meaning real search volume, achievable competitive position, and genuine relevance to what the business sells. Everything outside those clusters is either a future phase or a cut.
Content infrastructure matters here more than most teams acknowledge. Getting content management infrastructure right is a prerequisite for executing a focused content strategy at any scale. If the CMS makes it hard to update, redirect, or retire content efficiently, the programme will accumulate dead weight faster than any simplification effort can remove it.
On the technical side, the simplification principle is to fix what blocks crawling and indexing before optimising anything else. A clean technical foundation with modest content often outperforms a technically broken site with extensive content. I have seen this repeatedly when taking over programmes that had prioritised content volume over technical health. The first three months of fixing fundamentals typically delivers more ranking improvement than the previous year of content production.
For links, the simplification principle is quality over portfolio size. Ten links from genuinely relevant, authoritative sites in your category will do more than a hundred links from a diverse but undifferentiated outreach programme. The latter is what mad SEO produces. The former requires editorial judgment about where your brand actually belongs in the web’s link graph.
Community and earned visibility are worth considering as part of a simplified programme too. The SEO benefits of community engagement are real and tend to compound over time in ways that transactional link building does not. A brand that is genuinely present in its category’s online conversations attracts links, mentions, and search demand more naturally than one that is executing a link acquisition programme in isolation.
Reporting That Keeps Programmes Honest
One of the most effective structural changes you can make to an SEO programme is to change what you report on and who you report to.
Most SEO reporting is designed to demonstrate activity. It shows what the team did, what changed in the rankings, what the traffic numbers look like. This is fine as an operational dashboard. It is a poor basis for strategic decisions because it does not connect programme activity to business outcomes.
Reporting that keeps programmes honest connects organic traffic to pipeline. It shows which keyword clusters are driving qualified visitors, what those visitors do on the site, and how many of them convert into the outcomes the business cares about. This requires integration between SEO data and CRM or conversion data, which is technically straightforward but organisationally uncommon because it requires SEO to be accountable in a way that pure traffic reporting does not.
The audience for this reporting also matters. SEO reports that go only to the marketing team stay in a loop where the metrics that matter are the ones the marketing team controls. SEO reports that go to commercial leadership, even in summary form, create accountability for the outcomes that the business actually cares about. In my experience, programmes that report to commercial leadership tend to stay more focused and more honest than those that report only within the marketing function.
The broader context for all of this sits in the Complete SEO Strategy framework, which covers how to structure an SEO programme that stays commercially coherent as it scales, rather than accumulating complexity for its own sake.
When to Call It and Start Again
Sometimes the right answer is not simplification but reset. A programme that has been running for three or more years, has changed agency or team multiple times, and has accumulated enough legacy activity that nobody can explain why half of it exists, is often better rebuilt than reformed.
This is a hard conversation to have because it implies that the previous investment was not well spent. In practice, it usually means the programme was well-intentioned but poorly governed, and that the cost of carrying forward its complexity exceeds the cost of starting clean. I have had this conversation with clients and it is always uncomfortable. It is also usually correct.
A reset does not mean abandoning existing rankings or content that is performing. It means auditing what exists, keeping what works, retiring what does not, and rebuilding the programme’s logic from commercial objectives rather than from the accumulated decisions of previous teams and agencies.
BCG’s work on lessons from decades of organisational change makes a point that applies directly here: organisations that successfully simplify tend to do so by being explicit about what they are stopping, not just what they are starting. SEO programmes that reset well are the ones that document the cut as clearly as they document the plan. Without that, the complexity tends to creep back in within a year.
Mad SEO is not an indictment of anyone’s intentions. It is a structural outcome of a discipline that rewards activity, operates in a constantly changing environment, and is rarely governed with the same commercial rigour as paid media or sales. The fix is not more sophistication. It is more honesty about what the programme is for and whether what it is doing serves that purpose.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
