Questions Every Leader Should Ask Before Setting Marketing Strategy

The questions a leadership team asks before setting marketing strategy reveal more about the business than any slide deck or budget spreadsheet. Ask the right ones and you surface assumptions, expose gaps, and align the room on what actually matters. Ask the wrong ones, or skip them entirely, and you end up with a strategy built on consensus rather than clarity.

Most leadership teams underinvest in this part of the process. They move fast to solutions, treat the planning phase as a formality, and wonder later why the strategy didn’t land. The questions below are the ones worth slowing down for.

Key Takeaways

  • The quality of your marketing strategy is largely determined by the quality of questions asked before it’s written.
  • Most leadership teams skip diagnostic questions and move straight to tactics, which is why so many strategies are activity plans dressed up as strategy.
  • Asking “what problem are we actually solving?” before any planning session will save more time than any framework or template.
  • Commercially grounded leaders ask questions about margin, customer lifetime value, and competitive positioning before they ask about channels or creative.
  • The hardest questions to ask are usually the most important ones, particularly those that challenge existing assumptions about why the business is growing or stalling.

I’ve sat in hundreds of strategy sessions over the years, from scrappy agency pitches to boardroom planning cycles for businesses spending tens of millions on marketing. The sessions that produced real outcomes almost always started with better questions. The ones that produced slide decks started with assumptions.

What Problem Are We Actually Trying to Solve?

This sounds obvious. It isn’t. I’ve been in rooms where a business wanted to “increase brand awareness” and, when pushed, couldn’t articulate what that would change commercially. The awareness target was a proxy for something else, usually a sales problem or a positioning problem, but nobody had named it clearly.

Before any strategy conversation goes further, leaders need to agree on the actual problem. Is it acquisition? Retention? Margin compression? A competitor moving into your space? A product that isn’t landing with the right audience? Each of those requires a different response, and conflating them produces a strategy that tries to do everything and achieves very little.

When I took over a loss-making agency and had to turn it around, the temptation was to treat it as a revenue problem and go hard on new business. But the actual problem was margin. We were winning work and losing money on delivery. Solving the wrong problem would have made things worse faster. Naming the real problem first changed everything that followed.

The discipline here is resisting the pull toward solutions before the problem is properly defined. It’s worth spending 20 minutes at the start of any leadership planning session just on this question, and being willing to push back when the answer is vague.

Do We Know Why Customers Choose Us, or Are We Guessing?

Most businesses have a story they tell themselves about why customers buy from them. That story is usually built from internal assumptions, sales anecdotes, and the founder’s original hypothesis. It’s often wrong, or at least incomplete.

The question leaders need to ask is whether their understanding of customer motivation is based on actual evidence or inherited narrative. There’s a significant difference between “we think customers choose us because of our service quality” and “we’ve spoken to 40 customers in the last six months and this is what they consistently said.”

BCG has written about how understanding evolving customer needs requires continuous interrogation rather than static assumptions, particularly in markets where behaviour is shifting. The same principle applies to any sector. Customer motivation is not fixed, and what drove acquisition three years ago may not be what drives it now.

When I was growing an agency from around 20 people to over 100, our pitch narrative evolved considerably as we learned more about what clients actually valued versus what we assumed they valued. Early on we led with capability. Later we learned that clients cared far more about commercial transparency and accountability. The work hadn’t changed. Our understanding of the buyer had.

Where Is Growth Actually Coming From?

This is a question that cuts through a lot of comfortable narratives. Leaders often have a sense of where growth is coming from, but when you pull the data properly, the picture is frequently more concentrated and more fragile than it appears.

Growth might be coming from one channel, one customer segment, one geography, or one product line. That’s not necessarily a problem, but it’s critical to know. A strategy built on the assumption that growth is broadly distributed, when it’s actually concentrated, will allocate resources badly and create risk that isn’t visible until something breaks.

Forrester’s work on intelligent growth models makes the point that sustainable growth requires understanding the mechanics behind the numbers, not just the numbers themselves. Revenue is an output. The inputs that drive it need to be understood and managed deliberately.

The follow-up questions here are equally important. Is the growth repeatable? Is it dependent on conditions that could change? Is it coming from customers who are genuinely profitable, or are you growing a segment that looks good on the top line but erodes margin on delivery?

If you’re thinking about how these questions connect to broader go-to-market planning, there’s more on that across the Go-To-Market and Growth Strategy hub, which covers everything from market entry to scaling decisions.

What Would Have to Be True for This Strategy to Work?

This is one of the most useful questions in strategic planning and one of the least used. It forces the room to surface assumptions explicitly rather than leaving them buried in the logic of a plan.

Every strategy rests on assumptions. Market size assumptions. Competitive response assumptions. Customer behaviour assumptions. Budget efficiency assumptions. When those assumptions are left implicit, nobody challenges them, and the strategy carries hidden risk that only becomes visible when something doesn’t work.

Asking “what would have to be true for this to work?” makes those assumptions visible and testable. It also creates a natural framework for monitoring the strategy once it’s in flight. If the strategy depends on a certain customer acquisition cost being achievable, that’s something you can track. If it depends on a competitor not responding aggressively to a market move, that’s a risk you can monitor and plan for.

I’ve used this question in pitches as well as internal planning. There’s something clarifying about it that changes the quality of the conversation. It shifts the room from advocacy to analysis, which is where the best strategic thinking happens.

Are We Playing to Win or Playing Not to Lose?

This distinction matters more than most leaders acknowledge. Playing to win means making concentrated bets, accepting that some things won’t get resourced, and being willing to look wrong in the short term. Playing not to lose means spreading resources thinly, hedging everything, and optimising for the appearance of activity over the reality of impact.

A lot of marketing strategies are written in the language of ambition but executed in the logic of risk avoidance. The budget gets spread across eight channels. The messaging tries to appeal to three different audiences. The team is asked to deliver ten priorities simultaneously. The result is a plan that can’t be criticised because it covered everything, and can’t succeed because it committed to nothing.

BCG’s writing on aligning brand strategy with go-to-market execution makes the point that strategic coherence requires making choices, not just setting directions. That means some things get resourced properly and others get cut or deferred. Leaders who can’t make those calls produce strategies that look comprehensive on paper and underdeliver in practice.

The early days of my agency career taught me this quickly. When you’re in a room and someone hands you the whiteboard pen because the founder has to leave, you learn fast that hedging is not a strategy. You either make a call or you fill the board with options and let everyone leave without a decision. The former is uncomfortable. The latter is useless.

Do We Have the Capability to Execute This, or Are We Planning in Theory?

Strategy that outpaces capability is not ambitious. It’s just wrong. One of the most common failure modes in marketing planning is the gap between what gets written in the strategy document and what the team can actually deliver with the people, tools, and budget available.

Leaders need to ask this question honestly, which means not just checking whether the budget is there but whether the skills are there, whether the processes are in place, and whether the organisation is structured to execute the plan as designed. A content strategy that requires a team of two to produce four times their current output is not a strategy. It’s a wishlist.

Forrester’s research on agile scaling highlights how capability gaps compound as organisations try to scale execution. What works at one level of output frequently breaks at the next, and the planning process needs to account for that rather than assume that more ambition automatically produces more results.

When I was restructuring an agency that had been loss-making for a sustained period, one of the most important decisions was being honest about what we could actually deliver well, cutting the rest, and rebuilding from a position of genuine capability rather than theoretical range. It’s a harder conversation to have than “we can do everything.” It’s also the one that produces a plan that works.

How Will We Know If This Is Working?

This question should be asked before the strategy is agreed, not after it’s been running for six months. If you can’t define what success looks like in measurable terms at the outset, you’re not setting a strategy. You’re setting intentions.

The measurement question also forces clarity on the strategy itself. If you can’t describe what would be different in the business if the strategy worked, the strategy probably isn’t specific enough. Vague strategies produce vague metrics, which produce vague accountability, which produces the kind of annual review where everyone agrees it was a mixed year and moves on.

Tools like Hotjar’s growth feedback loops demonstrate that measurement works best when it’s built into the process from the start rather than bolted on as a reporting afterthought. The same logic applies to strategy. Measurement frameworks designed after the fact tend to measure what’s easy rather than what matters.

The discipline is agreeing on three to five metrics that genuinely reflect whether the strategy is delivering, and being willing to have difficult conversations if those metrics aren’t moving. Not every metric needs to be a revenue number. But every metric should have a clear line of sight to a commercial outcome.

What Are We Willing to Stop Doing?

This is the question that most planning sessions never get to, and it’s often the most important one. Strategy is as much about what you choose not to do as what you commit to doing. But stopping things is politically uncomfortable, so it gets avoided.

Marketing teams accumulate activity over time. Channels get added. Campaigns get launched. Reports get built. Processes get established. Very few of these things get stopped, even when they’ve stopped producing meaningful return. The result is a team that is permanently busy and permanently under-resourced for the things that actually matter.

Asking “what are we willing to stop doing?” forces a real conversation about priorities and trade-offs. It also surfaces the political dynamics in the room. The things that are hardest to stop are usually the things that are most attached to someone’s identity or tenure, which is useful information for a leader to have.

When I was managing a turnaround, stopping things was as important as starting them. Cutting whole departments and service lines that weren’t profitable freed up the capacity and focus to do the remaining work properly. It was not a popular set of decisions. It was the right set of decisions. The business moved from significant loss to significant profit, and that didn’t happen by adding more. It happened by doing less, better.

Are We Aligned on What Marketing Is Actually For?

This question sounds foundational to the point of being unnecessary. In practice, leadership teams frequently have meaningfully different views on what marketing is supposed to deliver, and those differences create friction that undermines execution.

Some leaders see marketing primarily as a demand generation function. Others see it as a brand and reputation function. Some treat it as a sales support function. Others expect it to own customer retention and lifetime value. These are not the same job, and a team trying to serve all of those definitions simultaneously will struggle to do any of them well.

Getting alignment on this before setting strategy is not a soft exercise. It’s a commercial necessity. The scope of marketing determines the budget, the team structure, the metrics, and the relationship with sales. Getting it wrong at the outset creates misalignment that compounds throughout the year.

The go-to-market and growth strategy work I’ve found most effective over the years starts here, with an explicit conversation about what the marketing function is accountable for and what it isn’t. Everything else follows from that. There’s more thinking on this across the Go-To-Market and Growth Strategy hub if you’re working through how to frame that conversation in your own organisation.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What questions should leaders ask before setting a marketing strategy?
The most important questions cover the actual problem being solved, the evidence behind customer motivation, where growth is genuinely coming from, what assumptions the strategy rests on, and whether the team has the capability to execute. These questions surface the gaps and misalignments that undermine most strategies before they get started.
Why do so many marketing strategies fail to deliver results?
Most marketing strategies fail because they are built on unexamined assumptions, try to do too many things simultaneously, and lack clear measurement frameworks agreed at the outset. The planning process moves too quickly to tactics without properly diagnosing the problem or aligning the leadership team on what the strategy is actually supposed to achieve.
How should a leadership team approach the strategy planning process?
Start by defining the actual business problem before discussing solutions. Establish what you know about customer behaviour based on evidence rather than assumption. Agree on what success looks like in measurable terms before committing to a direction. And be explicit about what the team is willing to stop doing, not just what it plans to start.
What does “playing to win” mean in a marketing strategy context?
Playing to win means making concentrated resource bets on the highest-value opportunities rather than spreading budget and effort thinly across everything. It requires accepting that some channels, audiences, or initiatives won’t get resourced in a given period. Strategies that try to hedge every decision tend to produce activity rather than outcomes.
How do you measure whether a marketing strategy is working?
Define three to five metrics before the strategy launches that have a clear line of sight to a commercial outcome. Avoid metrics that measure activity rather than impact. Agree on what movement in those metrics would indicate the strategy is working, and what would indicate it needs to change. Measurement frameworks built after the fact tend to measure what’s convenient rather than what matters.