AI Strategy Consulting Is Mostly Theatre. Here Is What Good Looks Like
AI strategy consulting is a service that helps organisations assess where artificial intelligence can create genuine commercial value, identify the gaps in capability and process that would prevent it, and build a roadmap for implementation that holds up when it meets real operational constraints. Done well, it is one of the more useful engagements a marketing team can commission right now. Done badly, it produces a slide deck full of use cases the client could have found on Google, a technology vendor shortlist, and a day rate that would make a CFO wince.
Most of what is being sold under this label falls into the second category.
Key Takeaways
- AI strategy consulting only earns its fee when it starts with a specific commercial problem, not with a technology inventory or a trend briefing.
- The most common failure mode is confusing AI adoption with AI strategy. Deploying tools is not a strategy. Knowing why you are deploying them, and what you will measure, is.
- A credible AI consultant will spend more time on your data infrastructure and team capability gaps than on tool recommendations. If they lead with tools, that is a signal.
- The organisations getting the most from AI right now are not the ones with the biggest budgets or the most sophisticated tech stacks. They are the ones with the clearest briefs.
- Most AI strategy engagements fail at the implementation stage, not the planning stage. The roadmap is rarely the problem. Ownership and change management almost always are.
In This Article
- What Is AI Strategy Consulting, Actually?
- Why Most AI Strategy Engagements Disappoint
- What a Credible AI Strategy Engagement Looks Like
- How to Evaluate an AI Strategy Consultant Before You Hire One
- Where AI Strategy Creates Real Commercial Value in Marketing
- The Internal Capability Question Most Organisations Get Wrong
- Build vs. Buy vs. Advise: Choosing the Right Model
I have been in marketing long enough to have watched several technology waves arrive with the same pattern. A new capability emerges. Vendors build products around it. Consultants build practices around those products. Procurement teams start asking for it in briefs without being entirely sure what they are asking for. Budgets get allocated. Results are mixed. The post-mortems, when they happen at all, tend to attribute failure to implementation rather than to the underlying premise. AI is following this pattern almost exactly, which is worth understanding before you spend serious money on advice.
What Is AI Strategy Consulting, Actually?
Strip away the positioning language and AI strategy consulting should do three things. It should tell you where AI can create measurable value in your specific business context. It should tell you what you need to have in place before that value is accessible. And it should give you a sequenced plan for getting there that is honest about cost, risk, and the organisational changes required.
That sounds straightforward. The reason it rarely happens that cleanly is that most consultants are better at the first part than the second and third. Identifying potential use cases is relatively easy. Anyone with a working knowledge of large language models, computer vision, and predictive analytics can map those capabilities against a marketing function and produce a plausible list of applications. The harder work is the diagnostic: understanding what your data actually looks like, what your team can realistically absorb, what your existing tech stack will and will not support, and what the opportunity cost of pursuing AI in one area is relative to another.
When I was running agencies, the engagements that created the most value were almost never the ones that started with the biggest ambitions. They were the ones that started with the most honest assessment of the current state. A client who knew exactly what was broken and why was infinitely easier to help than one who had a vague sense that they were falling behind and wanted someone to fix that feeling.
If you want a broader grounding in how AI is reshaping marketing practice before commissioning any consultancy, the AI Marketing hub covers the landscape from first principles, including where the technology genuinely adds value and where the hype outruns the evidence.
Why Most AI Strategy Engagements Disappoint
The failure modes are consistent enough that they are worth naming directly.
The first is starting with technology rather than with the business problem. A consultant who arrives with a pre-built framework for AI adoption, maps your business onto it, and then recommends a set of tools is not doing strategy. They are doing taxonomy. The question is not “which AI tools could we use?” The question is “what are the two or three commercial outcomes that would most improve this business, and is AI the most effective path to any of them?” Those are different questions and they produce very different answers.
The second failure mode is treating AI strategy as separate from data strategy. This is where most organisations discover the gap between the promise and the reality. Generative AI tools can produce impressive outputs in demos. In production, they are only as useful as the inputs they receive. If your customer data is fragmented across four platforms, your campaign performance data lives in spreadsheets that three different people maintain differently, and your content is stored in a shared drive that no one has organised since 2019, no AI strategy is going to rescue you. The consultants who tell you this upfront are the ones worth working with.
The third failure mode is underestimating the change management requirement. I have watched this play out repeatedly. A team gets excited about an AI capability, a pilot produces good results, a rollout is planned, and then six months later the tool is barely being used. Not because it stopped working, but because no one owned the process change, the training was a one-hour session that nobody retained, and the team reverted to the workflows they knew. AI strategy that does not include a serious plan for capability building and process redesign is not a strategy. It is a shopping list.
What a Credible AI Strategy Engagement Looks Like
A well-structured AI strategy engagement typically moves through four phases, though the names vary by firm.
The first is discovery: understanding the business context, the commercial priorities, the existing technology infrastructure, the data landscape, and the team’s current capability. This phase should take longer than most clients expect. If a consultant is ready to present recommendations after two days of stakeholder interviews, they have not done the work.
The second is opportunity mapping: identifying where AI can create value, ranked by potential impact and feasibility given the current state. This is where the use case lists live, but they should be filtered through the diagnostic rather than presented as a generic menu. The ranking matters as much as the list. A team with limited capacity and a fragmented data infrastructure should not be attempting five AI initiatives simultaneously. They should be attempting one, doing it properly, and building from there.
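For teams who want to make that ranking explicit rather than intuitive, the exercise can be sketched as a simple scoring model. Everything below is illustrative: the initiative names and scores are hypothetical assumptions, and in a real engagement each score would come out of the diagnostic phase, not a guess.

```python
# Illustrative sketch: ranking candidate AI initiatives by impact and
# feasibility. All initiative names and scores are hypothetical examples.

# Each initiative gets a 1-5 impact score (potential commercial value)
# and a 1-5 feasibility score (data readiness, team capacity, stack fit).
initiatives = [
    {"name": "Content draft acceleration", "impact": 4, "feasibility": 5},
    {"name": "Churn prediction model", "impact": 5, "feasibility": 2},
    {"name": "Paid media creative testing", "impact": 4, "feasibility": 4},
    {"name": "Automated personalisation", "impact": 5, "feasibility": 1},
]

def priority(item):
    # Multiplying rather than adding penalises initiatives that score
    # poorly on either dimension: high impact cannot rescue low feasibility.
    return item["impact"] * item["feasibility"]

ranked = sorted(initiatives, key=priority, reverse=True)

for item in ranked:
    print(f'{item["name"]}: {priority(item)}')
```

The multiplication is the point of the sketch: it encodes the argument above that a team with limited capacity should take only the top-ranked initiative forward, because a high-impact idea with weak feasibility lands near the bottom of the list rather than the top.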
The third is roadmapping: sequencing the initiatives, identifying the dependencies, defining the success metrics, assigning ownership, and being explicit about what needs to change in terms of process, tooling, and capability before each initiative can succeed. This is the phase that separates useful consulting from expensive slide production.
The fourth is implementation support: staying involved as the roadmap is executed. This may or may not be in scope depending on the engagement model, but it is where the strategy either holds up or falls apart. The consultants who stay involved through implementation have a much stronger incentive to produce a roadmap that is actually executable. The ones who hand over a document and disappear have a weaker one.
Resources like the Semrush overview of AI in marketing and Ahrefs’ practical AI tools webinars are useful for building internal literacy before an engagement, so that the conversations with consultants are grounded rather than exploratory.
How to Evaluate an AI Strategy Consultant Before You Hire One
The market for AI consulting is not short of options. The challenge is distinguishing between firms that have genuine depth and firms that have repackaged their existing practice with an AI label. A few questions that tend to separate the two.
Ask them to describe a client engagement where the recommendation was not to pursue AI, or to significantly limit the scope of AI adoption. If they cannot produce an example, that tells you something. A consultant who recommends AI in every situation is not doing strategy. They are selling a product.
Ask them what percentage of their engagements result in a data infrastructure recommendation before any AI implementation begins. If the answer is low, they are either working with unusually well-prepared clients or they are skipping the diagnostic.
Ask them how they measure the success of their engagements. Vague answers about transformation and capability building are a warning sign. Specific answers about revenue impact, cost reduction, time-to-output improvements, or measurable changes in campaign performance are what you are looking for.
Ask them about their approach to change management and capability building. If this is a footnote in their methodology rather than a core component, expect implementation to be harder than the roadmap suggests.
And ask them to show you the outputs of a previous engagement, even in anonymised form. A well-structured roadmap from a previous client tells you far more about how a firm thinks than any credentials deck.
Early in my career, when I was still figuring out how agencies worked, I learned a version of this lesson the hard way. I commissioned a piece of strategic work that produced a beautifully formatted document and almost nothing actionable. The recommendations were correct in the abstract and completely disconnected from what we could actually execute with the team and budget we had. I have been sceptical of strategy that does not account for operational reality ever since. The best consultants I have worked with over the years have always started by asking what we could not do before they started recommending what we should.
Where AI Strategy Creates Real Commercial Value in Marketing
Across the marketing functions where AI is genuinely changing the economics, a few areas stand out as consistently high-value when the conditions are right.
Content production at scale is the most obvious. The ability to produce briefs, first drafts, variations, and localised versions faster than human teams can manage alone is real and measurable. The constraint is not the technology. It is having a clear enough editorial standard and a strong enough review process to ensure that speed does not come at the cost of quality. Organisations that have invested in defining their content standards before deploying AI tools get significantly more value than those that treat it as a shortcut. Tools like those covered in HubSpot’s roundup of AI copywriting tools have matured considerably, but they still require skilled human direction to produce work that is commercially useful rather than generically competent.
Paid search and paid social optimisation is another area where the value is real. Automated bidding has been part of the landscape for years, but the newer generation of AI-assisted campaign tools is genuinely changing what is possible in terms of audience segmentation, creative testing, and budget allocation. I spent a significant part of my career managing large-scale paid search programmes, including a period at lastminute.com where I saw how quickly a well-structured campaign could generate revenue when the targeting and messaging were right. What AI changes is the speed and scale at which you can test and iterate. What it does not change is the need for clear commercial objectives and honest measurement.
SEO is a more complicated case. The tools are genuinely useful for keyword research, content gap analysis, and technical audits. Resources like Moz’s analysis of AI tools for SEO and Ahrefs’ SEO-specific AI webinar are worth reviewing if this is a priority area. But the strategic questions around how AI-generated content affects search visibility, how answer engines change the value of organic traffic, and what the medium-term implications are for content-heavy marketing strategies are genuinely open. A consultant who presents a definitive answer to these questions should be treated with scepticism. The honest answer is that the landscape is shifting and the right approach is to monitor closely, test carefully, and avoid over-committing to any single content strategy until the picture is clearer.
Customer data analysis and segmentation is an area where AI can create significant value for organisations with the data infrastructure to support it. Predictive models for churn, lifetime value, and propensity to convert have been available for some time, but they are becoming more accessible and more accurate. The constraint, again, is data quality. A predictive model built on incomplete or inconsistent customer data will produce confident-looking outputs that are not reliably useful. The diagnostic work on data infrastructure is not a prerequisite that can be skipped in the interest of getting to the interesting part.
The Internal Capability Question Most Organisations Get Wrong
One of the more consistent patterns I have observed is organisations investing in AI strategy consulting without a clear plan for what happens to the capability after the engagement ends. The consultant leaves. The roadmap exists. And then the question of who owns it, who has the skills to execute it, and how the team’s day-to-day workflows need to change to accommodate it remains largely unresolved.
This is not a criticism of consultants specifically. It is a structural problem with how many organisations commission strategy work. The engagement is scoped as a planning exercise rather than a capability-building exercise, and the two are not the same thing.
The organisations that get the most value from AI strategy engagements tend to do a few things differently. They involve the people who will be responsible for implementation in the diagnostic phase, not just the senior stakeholders. They build training and process redesign into the scope of the engagement rather than treating it as a follow-on. They define what success looks like in measurable terms before the engagement begins, so that the recommendations can be evaluated against a clear standard. And they assign a named internal owner for each initiative in the roadmap, with the authority and the time to actually execute it.
When I grew an agency from around 20 people to over 100, the hardest part was never the strategy. It was building the internal capability to execute the strategy consistently at scale. The same principle applies here. AI strategy is not a planning problem. It is a capability and change management problem that requires a planning document to organise it.
Build vs. Buy vs. Advise: Choosing the Right Model
Organisations approaching AI strategy have three broad options, which are often presented as mutually exclusive but are more useful when thought of as complementary.
Building internal capability means hiring or developing people who can own AI strategy and implementation from the inside. This is the right long-term answer for most organisations of meaningful scale, but it takes time and the talent market is competitive. The risk is that you spend twelve months building a team while the external environment continues to shift.
Buying external advice means commissioning a consulting engagement of the kind described above. This is faster and can provide a useful external perspective, but it requires careful scoping to ensure that the output is actionable rather than aspirational, and that capability is transferred rather than retained by the consultant.
Using AI tools directly, without a formal strategy engagement, is what most organisations are doing right now. Individual team members are experimenting with tools, some workflows are changing informally, and a patchwork of adoption is emerging without a coherent framework. This is not inherently wrong. Some of the most useful AI implementations I have seen started as informal experiments rather than formal programmes. The risk is that without a strategic framework, the experimentation remains fragmented and the cumulative value is lower than it could be.
The practical answer for most organisations is a combination: a focused external engagement to establish the strategic framework and identify the highest-priority initiatives, followed by a deliberate internal capability-building programme, supported by the tools and processes identified in the strategy. The Semrush overview of AI optimisation tools is a reasonable starting point for understanding what is available at the tool level, as is Moz’s analysis of AI tools for technical teams if your priorities include technical marketing infrastructure.
There is a broader context to all of this that is worth keeping in mind. AI strategy in marketing does not exist in isolation from the rest of the discipline. The fundamentals of what makes marketing effective (clear positioning, genuine audience understanding, measurable commercial objectives, and creative work that earns attention) do not change because the tools change. The organisations that are getting the most from AI right now are, in my observation, the ones that were already doing those fundamentals well. AI amplifies good marketing practice. It does not substitute for it. If you want to explore how AI fits into the broader marketing picture, the AI Marketing hub covers the landscape in more depth, including the areas where the technology genuinely changes the game and the areas where the claims are still ahead of the evidence.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
