Generative AI Business Strategy: What Most Companies Get Wrong

Generative AI business strategy, done well, means identifying where the technology creates measurable commercial value and building the organisational capability to capture it. Done badly, it means buying software, running a few pilots, and calling it transformation. Most companies are doing the latter.

The gap between the two is not technical. It is strategic. And the companies closing that gap are not the ones with the most sophisticated models or the largest AI budgets. They are the ones that started with a commercial problem and worked backwards, rather than starting with the technology and working forwards.

Key Takeaways

  • Most generative AI strategies fail because they start with the technology, not the commercial problem it is supposed to solve.
  • The biggest risk is not moving too slowly. It is building AI capability on top of broken processes and hoping the technology fixes the underlying dysfunction.
  • Generative AI creates genuine value in three areas: content production at scale, synthesis of complex information, and acceleration of decision-making cycles.
  • Measurement is the single biggest blocker to AI ROI. If you cannot measure the baseline, you cannot prove the improvement.
  • The organisations winning with generative AI treat it as a production layer, not a strategy layer. Strategy still requires human judgment.

Why Most Generative AI Strategies Are Built on Shaky Ground

I spent several years running an agency that grew from around 20 people to over 100. During that period, I watched every major technology wave hit the marketing industry, and the pattern was always the same. A new capability would emerge, the vendor ecosystem would build a narrative around it, and businesses would rush to adopt it before they had figured out what problem they were actually solving. Programmatic advertising did this. Marketing automation did this. Now generative AI is doing it at a scale and speed that makes everything before it look measured by comparison.

The problem is not the technology. Generative AI is genuinely capable. The problem is the strategic framing. When I judge work at the Effie Awards, the entries that stand out are never the ones that led with the tool. They are the ones that led with the problem. The technology was incidental. The thinking was not. Generative AI strategies that are built to impress a board presentation rather than solve a business problem will produce the same output as every other technology investment that was adopted for the wrong reasons: activity without impact.

If you want a broader view of where AI fits into marketing as a discipline, the AI Marketing hub at The Marketing Juice covers the strategic and practical dimensions across channels, functions, and business types.

Where Generative AI Actually Creates Commercial Value

Strip away the noise and generative AI creates genuine commercial value in three places: content production at scale, synthesis of complex, high-volume information, and acceleration of decision-making cycles where speed has a direct commercial consequence.

Content production is the most obvious. The ability to produce first drafts, variations, and localised versions of content at a fraction of the previous cost is real. I have seen teams that were producing 40 pieces of content a month move to 200 without a proportional increase in headcount. But here is the part that gets quietly skipped over in most strategy documents: volume without quality control is a liability, not an asset. If your content was mediocre before generative AI, you now have the ability to produce mediocre content at industrial scale. The Moz perspective on AI content creation makes a similar point about the relationship between quality standards and AI output. The tool amplifies whatever process sits behind it.

Information synthesis is less discussed but often more valuable. Businesses sitting on large volumes of unstructured data (customer feedback, support tickets, sales call transcripts, research reports) have a genuine use case for generative AI that does not require a single piece of public-facing content to be produced. The ability to surface patterns, extract themes, and generate structured summaries from unstructured inputs can compress weeks of analyst time into hours. This is where I have seen the clearest ROI in practice, because the baseline is measurable and the improvement is not abstract.

Decision acceleration is the third area, and the most nuanced. In businesses where speed of decision-making has a direct commercial consequence (whether that is pricing, campaign optimisation, or competitive response), generative AI can compress the time between data and action. But this only works if the decision-making process was already sound. Faster bad decisions are not a competitive advantage.

The Measurement Problem No One Wants to Talk About

I have spent a large portion of my career fixing measurement problems in organisations that did not realise they had them. When I was turning around a loss-making agency, one of the first things I did was look at how we were attributing value to the work we were doing. What I found was that we were measuring activity, not impact. We were counting outputs and calling them outcomes. The business had no real idea which work was driving commercial results and which was just generating invoices.

Generative AI does not fix this problem. It makes it worse. When you can produce more content, run more campaigns, and generate more outputs than ever before, the temptation to count those outputs as evidence of progress becomes almost irresistible. The volume looks impressive. The dashboards look full. And the underlying question, whether any of it is moving a commercial metric, gets buried under the activity.

Before any organisation commits to a generative AI strategy, it needs to answer one question with genuine honesty: can we measure the baseline? If you cannot measure what is happening now with sufficient precision to detect a meaningful change, you cannot prove that the AI-driven approach is better. You can believe it is better. You can feel it is better. But you cannot prove it. And in a business context, unproven improvement is just spending.
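To make "sufficient precision" concrete: if the metric in question is a conversion rate, a standard two-proportion power calculation tells you how much traffic you need before a change of a given size is detectable at all. The sketch below uses hypothetical figures and the usual normal-approximation formula at 95% confidence and 80% power; it is an illustration of the baseline question, not a substitute for proper experiment design.

```python
from math import sqrt, ceil

def sample_size_per_group(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate observations needed per group to detect a shift in a
    conversion rate (two-proportion test, 95% confidence, 80% power)."""
    p_bar = (p_base + p_target) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_base * (1 - p_base)
                                 + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_base - p_target) ** 2)

# Hypothetical: detecting a move from a 2.3% to a 2.8% conversion rate
# needs roughly 15,000-16,000 observations in each group.
n = sample_size_per_group(0.023, 0.028)
```

If your monthly traffic is nowhere near that number, you cannot detect the improvement, and the honest conclusion is that the baseline question is not yet answerable.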

Tools like those covered in Semrush’s overview of AI marketing are useful for understanding the landscape, but the measurement infrastructure has to come from the business side, not the vendor side. No platform will tell you that your measurement is inadequate. They have a product to sell.

How to Build a Generative AI Strategy That Holds Up

The organisations I have seen get this right share a common approach. They do not start with a technology evaluation. They start with a commercial problem that is costing them something measurable (time, money, quality, speed) and they work backwards from that problem to identify where generative AI could help.

The first step is defining the problem with enough precision that you would know if you had solved it. Not “we want to be more efficient with content” but “we are currently spending 120 hours per month producing product descriptions that convert at 2.3%, and we want to reduce the production time by 60% without reducing conversion rate.” That is a problem you can test against. That is a problem where AI either helps or it does not, and you will know which.
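A problem defined with that precision can be written down as an explicit pass/fail test. A minimal sketch, using the hypothetical product-description figures from the example above:

```python
# Hypothetical success criteria from the product-description example:
# cut 120 hours/month of production time by 60% without hurting a
# 2.3% conversion rate.
BASELINE_HOURS = 120
BASELINE_CONVERSION = 0.023
TARGET_TIME_REDUCTION = 0.60

def pilot_succeeded(hours_after, conversion_after):
    """True only if both criteria hold: time cut by at least 60%
    and conversion rate not below the baseline."""
    time_ok = hours_after <= BASELINE_HOURS * (1 - TARGET_TIME_REDUCTION)
    quality_ok = conversion_after >= BASELINE_CONVERSION
    return time_ok and quality_ok

pilot_succeeded(45, 0.024)  # True: 45 hours is a 62.5% cut, conversion held
pilot_succeeded(45, 0.019)  # False: time target met, but conversion fell
```

If you cannot fill in both baseline constants before the pilot starts, the problem is not yet defined well enough to test.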

The second step is auditing the inputs before you choose the tools. Generative AI is only as good as the information it works with. If your brand guidelines are inconsistent, your tone of voice is undefined, and your factual reference material is scattered across three different systems, the AI will produce output that reflects that chaos. I have watched organisations spend significant budget on AI content tools and then spend equal budget on human editors cleaning up the output, because the inputs were never properly structured. The tool was not the problem. The preparation was.

The third step is starting with a contained scope. Not a pilot in the sense of a low-commitment experiment that nobody takes seriously, but a genuine deployment in a single, well-defined area where the results can be measured cleanly. One content type. One market. One function. Get the measurement right, prove the model, then scale. The Semrush breakdown of AI optimisation tools is useful for understanding what is available at the execution layer, but the sequencing matters. Tools come after strategy, not before it.

The Organisational Capability Question

Technology strategy and organisational capability are inseparable. You can have the best generative AI tools available and still produce nothing of value if the people using them do not understand how to direct them, quality-check them, or integrate their output into a broader workflow.

When I was growing an agency team, one of the consistent lessons was that new capability without new process creates chaos. We would bring in a new platform or methodology, and for the first few months, productivity would actually drop. Not because the capability was wrong, but because the team was trying to absorb it on top of existing ways of working rather than redesigning the workflow around it. The same dynamic applies to generative AI. If you are treating it as a bolt-on to existing processes, you will get bolt-on results.

The capability question has three dimensions. First, can your team write effective prompts? This sounds trivial and is not. The quality of generative AI output is heavily dependent on the quality of the input, and prompt engineering is a skill that needs to be developed and standardised, not left to individual experimentation. Resources like the Ahrefs AI tools webinar series give a practical sense of how practitioners are approaching this at the execution level.

Second, can your team critically evaluate the output? The risk of generative AI is not that it produces obviously bad content. It is that it produces plausible-sounding content that is subtly wrong, factually inaccurate, or tonally off-brand. The people reviewing AI output need to be good enough at the underlying task to catch those errors. If your team cannot write well without AI, they cannot quality-check AI writing effectively either.

Third, do you have the governance structures in place? Who owns the AI output? Who is accountable when it is wrong? What are the approval workflows? These questions are not exciting, but they are the difference between an AI strategy that scales and one that creates a compliance or reputational problem at the worst possible moment. The HubSpot overview of generative AI and cybersecurity is worth reading for any organisation that is processing sensitive information through AI systems, because the risk surface is broader than most strategy documents acknowledge.

The Roles That AI Changes and the Ones It Does Not

There is a version of the generative AI conversation that is about job displacement, and there is a version that is about role evolution. The honest answer is that both are happening, and the distribution depends heavily on the function and the organisation.

In content-heavy functions, the production layer of many roles is being compressed. Tasks that previously took a junior writer a day can now be completed in an hour with AI assistance. This does not necessarily mean fewer people. It can mean the same people working at a higher level, spending less time on production and more time on strategy, editing, and quality control. Whether that is what actually happens depends on whether the organisation redesigns the role or simply expects the same output with fewer people.

The roles that AI does not change are the ones that require genuine commercial judgment. Deciding which problem to solve. Evaluating whether a strategy is sound. Reading a client relationship. Understanding why a campaign worked in one market and failed in another. These are not tasks that generative AI performs. They are tasks that generative AI can support, by surfacing information faster or generating options more quickly, but the judgment call remains human.

Managing hundreds of millions in ad spend across 30 industries has given me a clear view of where human judgment is genuinely irreplaceable. It is not in the execution. It is in the framing. What is the right question to ask? What does this data actually mean in the context of this business? What is the risk that this model is not accounting for? Generative AI is a production layer. It is not a strategic layer. The organisations that confuse the two will find out the hard way.

For a broader view of how AI is reshaping marketing strategy across different channels and business functions, the AI Marketing section of The Marketing Juice covers the landscape in practical, commercially grounded terms.

The Competitive Dimension: What Happens When Everyone Has the Same Tools

One of the questions that does not get asked often enough in generative AI strategy discussions is: what happens to competitive advantage when the tools are commoditised? Because that is where this is heading. The models will become cheaper, the interfaces will become simpler, and the barrier to using generative AI for content production, customer communication, and data synthesis will approach zero.

When that happens, the organisations that built their AI strategy around tool access will find themselves back at parity. The ones that built their strategy around the quality of their inputs, the precision of their measurement, and the strength of their commercial judgment will have a durable advantage. Because those things do not commoditise.

The HubSpot breakdown of AI copywriting tools illustrates how quickly the tool landscape is expanding. There are now dozens of options across every content category. The tools themselves are not the differentiator. What you do with them is. And what you do with them is a function of strategy, measurement, and capability, not software selection.

The same logic applies to SEO. When every competitor can produce AI-assisted content at scale, the organisations that win in search will be the ones with better editorial judgment, stronger topical authority, and more rigorous quality standards. The Ahrefs AI and SEO webinar is a useful reference point for understanding how the search landscape is evolving in response to AI-generated content at scale.

The Honest Version of the ROI Conversation

If someone asks you what the ROI of your generative AI strategy is, and the honest answer is “we are not sure yet,” that is not necessarily a bad sign. It might mean you are being appropriately rigorous about measurement. What is a bad sign is if the honest answer is “we have not set up any way to measure it” or “we are measuring outputs rather than outcomes.”

The ROI conversation for generative AI needs to be grounded in the same commercial logic as any other investment. What was the cost before? What is the cost now? What is the quality difference? What is the revenue or margin impact? These are not complicated questions. They are just questions that require honest answers, and honest answers require measurement infrastructure that many organisations have not built.
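The arithmetic behind those questions is deliberately simple. A minimal sketch with hypothetical figures, where ROI is the net gain over the same period divided by the investment:

```python
# Hypothetical before/after figures for one content workflow.
cost_before = 18000   # monthly production cost before AI (currency units)
cost_after = 7000     # production cost after, including human editing time
tool_cost = 1500      # monthly licence and infrastructure cost
revenue_delta = 0     # hold at zero unless a revenue impact is measured

monthly_gain = (cost_before - cost_after) + revenue_delta
net_gain = monthly_gain - tool_cost          # 9,500 in this example
roi = net_gain / tool_cost                   # roughly 6.3x
```

The point of the sketch is not the formula, which is trivial, but the inputs: every one of those four numbers requires measurement infrastructure that exists before the deployment, not after it.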

The businesses I have seen get the clearest ROI from generative AI are the ones that treated it as a business investment with defined success criteria, not a technology experiment with vague aspirations. They set the baseline before they started. They defined what success looked like in commercial terms. And they measured against those criteria with enough rigour to know whether the investment was working.

That discipline is not glamorous. It does not make for an impressive conference presentation. But it is the difference between a generative AI strategy that creates genuine business value and one that creates a very convincing story about business value. In my experience, the marketing industry has always been better at the latter than it should be. Generative AI does not change that tendency. It just gives it a new vocabulary.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a generative AI business strategy?
A generative AI business strategy is a plan for deploying generative AI tools and capabilities in ways that create measurable commercial value. It starts with a defined business problem, identifies where generative AI can address that problem more effectively than existing approaches, builds the measurement infrastructure to evaluate the impact, and scales from a contained initial deployment. It is distinct from simply adopting AI tools, which is a procurement decision rather than a strategy.
Where does generative AI create the most business value?
Generative AI creates the clearest commercial value in three areas: content production at scale, where it can dramatically reduce the cost and time of producing first drafts and variations; synthesis of complex unstructured information, where it can compress analyst time significantly; and acceleration of decision-making cycles, where speed has a direct commercial consequence. The value in each area depends heavily on the quality of the inputs and the measurement infrastructure around the output.
How do you measure the ROI of generative AI?
Measuring generative AI ROI requires establishing a clear baseline before deployment, defining success in commercial terms rather than output volume, and measuring against those criteria with enough consistency to detect meaningful change. The most common failure is measuring activity, such as content produced or hours saved, rather than outcomes, such as conversion rate, revenue, or margin impact. If you cannot measure the baseline, you cannot prove the improvement.
What organisational capabilities do you need for a generative AI strategy to work?
Three capabilities matter most. First, the ability to write effective prompts, which is a learnable skill that needs to be standardised rather than left to individual experimentation. Second, the ability to critically evaluate AI output, which requires people who are good enough at the underlying task to catch errors that are plausible-sounding but wrong. Third, governance structures that define accountability, approval workflows, and risk management, particularly for organisations processing sensitive information through AI systems.
Will generative AI tools become a commodity, and what does that mean for competitive advantage?
Yes. Generative AI tools are commoditising rapidly, and the barrier to using them for content production, data synthesis, and customer communication is approaching zero. When that happens, tool access will not be a differentiator. The organisations with durable competitive advantage will be those with better quality inputs, more rigorous measurement, stronger editorial judgment, and clearer commercial strategy. Those things do not commoditise in the same way that software does.
