AI Marketing Agency Selection: 7 Questions That Separate Good from Bad

Choosing an AI marketing agency is harder than it looks, because most of them sound identical. The right criteria come down to seven questions that expose whether an agency genuinely understands AI as a commercial tool or is simply repackaging existing services with a new vocabulary.

After two decades running agencies and buying agency services from the other side of the table, I have learned that the gap between a good pitch and a good agency is wide. AI has made that gap wider, because the category is new enough that almost anyone can claim expertise without being challenged on it.

Key Takeaways

  • Most AI marketing agencies are rebranded digital agencies. The vocabulary has changed; the capability often has not.
  • The single most revealing question you can ask is how the agency has failed with AI and what they learned from it.
  • AI capability without clean data is theatre. Any credible agency will audit your data before promising outcomes.
  • Proprietary AI tools are not automatically better than well-configured commercial platforms. Ask what problem the tool solves, not who built it.
  • Pricing structures reveal priorities. Agencies paid on activity have no commercial incentive to replace activity with efficiency.

If you are building out your understanding of how AI fits into marketing strategy more broadly, the AI Marketing hub on The Marketing Juice covers the commercial fundamentals without the hype.

Why Standard Agency Selection Criteria Fall Short for AI

The usual agency selection process (credentials review, case studies, chemistry meeting, pitch) works reasonably well when you are buying something with a settled definition. Paid search, creative production, SEO. These are mature disciplines with established benchmarks. You can compare agencies on cost per click, creative quality, or organic traffic growth because everyone broadly agrees on what those things mean.

AI marketing has no settled definition. I have seen agencies use the term to describe everything from a ChatGPT subscription bolted onto a content team to genuinely sophisticated machine learning applied to media buying at scale. Both call themselves AI marketing agencies. Both will show you impressive decks.

The problem is compounded by the speed at which the space is moving. When I was growing iProspect from around 20 people to over 100, the discipline of performance marketing was evolving fast enough that agencies could legitimately claim expertise they were still building. AI is doing the same thing now, but faster and with higher stakes. A client who buys the wrong AI capability today is not just wasting budget. They are potentially building workflows and data dependencies around something that will not hold up.

Standard selection criteria assume a stable product. AI marketing requires a different set of questions entirely.

Question 1: What Specific Commercial Problem Does Your AI Solve?

This is the first question I ask, and it eliminates roughly half the field immediately. A credible answer names a problem, explains the mechanism by which AI addresses it, and gives you a concrete example. An answer that begins with “our AI platform enables brands to…” and stays at platform level for the next two minutes is not an answer. It is a deflection.

The commercial problem framing matters because AI is not a goal. It is a method. An agency that leads with the method rather than the problem has its priorities in the wrong order. I have seen this pattern across thirty-odd industries: technology vendors and technology-led agencies consistently overweight the tool and underweight the business context. The clients who suffer most are the ones who were too polite to push back on it.

Specific problems an AI marketing agency should be able to name with precision: reducing cost per acquisition in mature paid channels, improving email personalisation beyond basic segmentation, accelerating content production without degrading quality, or improving audience targeting using first-party data. Vague answers about “enhancing customer experience” or “driving growth through intelligent automation” are not problems. They are brochure copy.

Question 2: How Have You Failed With AI and What Did You Learn?

This is the most revealing question on the list, and almost no one asks it. Every agency will tell you what has worked. The failure question exposes whether they have actually been doing this long enough to have made mistakes, whether they have the intellectual honesty to acknowledge them, and whether they have a culture of learning or a culture of selling.

A good answer might sound like this: “We ran an AI-driven content programme for a client in a regulated industry and discovered that the hallucination rate in the model we were using made fact-checking costs prohibitive. We now have a different qualification process for which content types are suitable for AI generation.” That answer tells you the agency has real experience, understands the failure mode, and has adapted.

When I judged the Effie Awards, the entries that impressed me most were not the ones with the cleanest results. They were the ones where the team clearly understood why something worked, which usually meant they had also understood why earlier iterations had not. The same principle applies here. Agencies that have never failed with AI have either not been doing it long enough or are not being honest with you.

Question 3: What Do You Need From Our Data Before You Can Deliver?

AI capability without clean, structured, accessible data is genuinely useless. More than useless: it is dangerous, because it produces outputs that look authoritative but are built on noise. Any agency that skips a data audit before promising AI-driven outcomes is either inexperienced or not being straight with you.

The question works because it forces the agency to demonstrate whether they understand data infrastructure or whether they assume it will just be available. A credible answer will include questions about your CRM, your first-party data volumes, your attribution setup, your consent framework, and how your data is currently structured. An agency that says “we just need access to your ad accounts and analytics” and moves on is not thinking about AI seriously.

There is a useful secondary question here: what happens if our data is not good enough? A strong agency will have a clear answer about what they can and cannot do with different data quality levels. They will also be honest about the timeline for getting your data to a state where AI can actually do something useful with it. That timeline is often longer than clients expect and longer than agencies prefer to admit.

For a broader look at how AI tools are being applied across marketing disciplines, Buffer’s overview of AI marketing tools gives a useful grounding in what is commercially available right now.

Question 4: Is Your AI Proprietary, Partnered, or Configured Commercial?

This question does not have a right answer, but the way an agency responds tells you a great deal. There are three broad models: the agency has built its own AI tools; it has a partnership with an AI platform provider; or it uses commercially available AI tools, configured and applied by people who know what they are doing.

Proprietary tools are not automatically better. I have seen bespoke agency technology that was years behind what was available commercially, maintained by a small internal team that could not keep pace with the development cycles of the major AI platforms. The agency sold it as a competitive advantage. It was not. It was a legacy system dressed up as innovation.

The best AI marketing agencies I have encountered are often the ones that are deeply skilled at configuring and applying commercial AI platforms, because they can move with the technology rather than being anchored to a proprietary stack. What matters is not who built the tool. It is whether the people using it understand the underlying mechanics well enough to get reliable commercial outcomes from it.

Ask follow-up questions about model updates: when a major AI platform releases a significant update, how does the agency adapt its processes? If the answer is vague, that is a signal. Semrush’s breakdown of ChatGPT in marketing is a reasonable reference point for understanding how commercial AI tools are being applied in practice.

Question 5: How Is Your Pricing Structured and What Does That Incentivise?

Pricing structures reveal commercial incentives, and commercial incentives shape behaviour more reliably than stated intentions. This is not a cynical observation. It is a structural one.

An agency paid on a percentage of media spend has no financial incentive to reduce your media spend, even if AI-driven efficiency could achieve the same outcomes with less budget. An agency paid on hours has no incentive to automate tasks that currently fill those hours. An agency paid on output volume (content pieces, emails, reports) has no incentive to produce less content of higher quality.

Early in my career, when I was still on the agency side and learning how commercial structures worked, I noticed that the conversations that made clients most uncomfortable were not about strategy or capability. They were about money. Specifically, about whether the agency’s financial model was aligned with the client’s commercial goals. That discomfort is worth pushing through. An AI agency that cannot explain clearly how its pricing model aligns with your outcomes is not yet thinking commercially about what it is selling.

The pricing models most aligned with AI delivery tend to be retainer-plus-performance or outcome-based. Both have risks, but they at least create a shared interest in results rather than activity.

Question 6: How Do You Measure What AI Contributes Specifically?

Attribution in marketing has always been imperfect. AI adds another layer of complexity because the contribution of an AI-driven decision is often embedded within a broader campaign or workflow. Separating what AI did from what the media budget, the creative, and the audience targeting did is genuinely difficult.

A credible agency will not pretend otherwise. What they should have is a methodology for isolating AI impact where possible, an honest acknowledgement of where measurement is approximate, and a clear framework for what they will report and why.

I have spent enough time in performance marketing to have a fairly low tolerance for measurement theatre. Dashboards full of metrics that look impressive but do not connect to business outcomes. Attributed revenue figures that assume last-click causation. Efficiency gains measured against baselines that were never clearly established. AI agencies are not immune to any of this. If anything, the novelty of AI makes it easier to present activity as impact without being challenged.

Ask specifically: what would a controlled test of your AI capability look like? How would we know if it was working or not? An agency that struggles to answer this is not yet thinking rigorously about measurement. Ahrefs’ webinar on AI and SEO is a useful example of the kind of rigorous, evidence-based thinking you should expect from any agency operating in this space.

Question 7: Who on Your Team Actually Builds and Runs the AI Work?

This is the question that gets most awkward, and it is the one most worth asking. AI marketing agencies vary enormously in the depth of technical capability sitting behind the pitch. Some have genuinely skilled data scientists, machine learning engineers, and technical marketers who build and maintain real AI systems. Others have a senior person who understands AI conceptually, a team of marketers who use AI tools, and a pitch deck that makes the whole thing sound more engineered than it is.

Neither model is inherently wrong. The second model can still deliver real value if the tools are good and the marketers are skilled. But you need to know which model you are buying, because the two have very different capabilities, very different failure modes, and very different price points.

Ask to meet the people who will actually be doing the work, not just the people who pitched it. Ask them specific questions about the tools they use, how they handle model drift, how they quality-check AI outputs. If the agency resists this request, that tells you something. When I was running agencies, the teams I was proudest of were the ones who welcomed scrutiny. The ones who were uncomfortable with it usually had something to be uncomfortable about.

For practical context on how AI tools are being used in content and creative work, Crazy Egg’s guide to AI marketing assets and HubSpot’s breakdown of generative AI video tools both give a grounded view of what is actually being applied at team level.

A Note on Red Flags Worth Naming Directly

Beyond the seven questions, there are a handful of signals that should give you pause regardless of how well the pitch went.

An agency that cannot distinguish between generative AI and predictive AI in a conversation about your specific use case does not have the conceptual depth you need. These are different tools with different applications, different data requirements, and different risk profiles. Conflating them is not a minor terminology issue. It suggests the agency is working at a surface level.

An agency that promises AI will reduce your team headcount as a primary selling point is prioritising the wrong outcome. Efficiency is a legitimate benefit. But leading with headcount reduction as the value proposition tells you the agency is thinking about cost, not capability. The best AI marketing applications I have seen expand what a team can do, not simply replace what they were doing.

An agency that has no view on AI ethics, content provenance, or regulatory risk in your specific sector is not thinking about the full picture. These are not abstract concerns. They are commercially relevant ones, particularly in regulated industries, and any agency advising you on AI marketing should have a considered position on them.

For AI-specific applications in content and SEO, Moz’s Whiteboard Friday on generative AI for SEO is worth reviewing before any agency conversation, as it sets a reasonable benchmark for what rigorous thinking in this area looks like. Similarly, Semrush’s coverage of AI email assistants illustrates how the more mature AI marketing applications are being evaluated in practice.

What Good Actually Looks Like

A good AI marketing agency is not the one with the most impressive technology story. It is the one that asks you better questions than you ask them. It wants to understand your commercial goals before it talks about tools. It is honest about what AI can and cannot do for your specific situation. It has a clear methodology for measurement and a realistic view of timelines. And the people doing the work can explain what they are doing and why, without hiding behind jargon or platform complexity.

I built a website from scratch once because no one would give me the budget to commission one. That experience taught me something that has stayed useful across two decades: the people who understand a tool well enough to explain it simply are almost always more capable than the people who make it sound complicated. The same principle applies when you are selecting an agency. Complexity in the pitch is rarely a sign of sophistication. It is usually a sign that someone does not want to be pinned down.

The AI marketing space will mature, the terminology will stabilise, and selection will get easier. For now, the seven questions above are the most reliable filter I know for separating the agencies doing real work from the ones selling a story about it.

There is more on how AI fits into broader marketing strategy, from data infrastructure to channel-level applications, across the AI Marketing section of The Marketing Juice. If you are building a case internally for AI investment or evaluating vendors, the frameworks there are worth working through before you enter any agency conversation.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What should I look for when selecting an AI marketing agency?
Prioritise agencies that can name the specific commercial problem their AI solves, demonstrate real experience including failures, and show a clear methodology for measuring AI contribution separately from broader campaign activity. The ability to explain what they do in plain terms is a more reliable signal than the sophistication of their pitch deck.
How do I know if an AI marketing agency has genuine capability or is just rebranding existing services?
Ask to meet the people doing the actual work, not just the pitch team. Ask them specific questions about how they handle model outputs, quality control, and data requirements. Agencies with genuine AI capability will welcome technical scrutiny. Agencies rebranding existing services will steer the conversation back to case studies and credentials.
Does an AI marketing agency need proprietary technology to be effective?
No. Proprietary AI tools are not automatically superior to well-configured commercial platforms. Many of the most capable AI marketing practitioners work with commercial tools like those from major AI providers, configured by people who understand the underlying mechanics. What matters is the expertise applied to the tool, not who built it.
What data do I need before working with an AI marketing agency?
Any credible AI marketing agency will conduct a data audit before committing to outcomes. At minimum, you will need accessible first-party data, a clear consent framework, structured CRM data, and a functioning attribution setup. Agencies that skip this step and move straight to promising AI-driven results are not thinking about your situation seriously.
How should AI marketing agency pricing be structured?
Pricing structures create incentives, and incentives shape behaviour. Agencies paid on media spend percentages have no financial reason to improve your efficiency. Retainer-plus-performance or outcome-based models tend to align agency and client interests more directly. Ask any agency how their pricing model connects to your commercial goals, not just their service delivery.