What Your Data Is Telling You About Creative

Data insights for improving creative marketing content work best when you stop treating data as a verdict and start treating it as a conversation. The numbers tell you what happened. They rarely tell you why, and they almost never tell you what to make next.

That distinction matters more than most marketing teams acknowledge. I’ve seen briefs built entirely on last quarter’s performance data produce creative that was perfectly optimised for an audience that had already moved on. The data was accurate. The creative was stale before it launched.

Key Takeaways

  • Performance data tells you what resonated historically, not what will resonate next. Creative decisions need both retrospective data and forward-looking signals.
  • Most creative testing programmes measure the wrong things: click-through rates and conversion percentages say nothing about whether your message is landing with new audiences.
  • Qualitative data (session recordings, search query reports, customer verbatims) is systematically underused in creative briefing, despite being the most direct signal of audience intent.
  • The relationship between data and creative should be iterative, not hierarchical. Data informs the brief; it does not write the brief.
  • Over-indexing on lower-funnel data produces creative that captures existing demand rather than generating new demand, which limits growth over time.

Why Most Teams Use Data Backwards in Creative Development

The standard workflow goes something like this: campaign runs, data comes in, team reviews performance, learnings feed into the next brief. It sounds logical. In practice, it creates a feedback loop that optimises for what already worked, with a shrinking audience, in a context that no longer exists.

I spent years managing significant ad spend across industries from financial services to FMCG, and the pattern was consistent. Teams would pull their best-performing creative assets, identify the common elements (a certain colour palette, a particular headline structure, a specific call to action), and brief the next round accordingly. Then they’d wonder why performance plateaued after a few cycles.

What they were doing was mining diminishing returns. The creative that performed well did so partly because it was new. Repeating its structure too closely removes the novelty, and novelty is a significant driver of attention. You end up with creative that looks like your best work but performs like your average work, because the audience has already processed the underlying pattern.

The better approach is to use performance data to understand audience behaviour, not to reverse-engineer creative formulas. There’s a meaningful difference between “our long-form video outperformed our static ads” and “our audience engages more deeply when we lead with the problem rather than the solution.” The first tells you about a format. The second tells you something about psychology. One of those is worth building a brief around.

If you’re thinking about how data fits into a broader go-to-market approach, the Go-To-Market and Growth Strategy hub covers the strategic frameworks that sit above individual creative decisions.

The Data Sources That Actually Improve Creative (And the Ones That Don’t)

Not all data is equally useful for creative development. There’s a hierarchy, and most teams are working from the bottom of it.

At the bottom: vanity metrics. Impressions, reach, follower counts. These tell you about distribution, not resonance. They have almost no bearing on whether your creative is doing its job.

One level up: engagement and conversion metrics. Click-through rates, time on page, conversion percentages. These are useful, but they’re lagging indicators measured against an audience that was already in market. They tell you how well you captured existing intent. They say very little about whether your creative is building brand salience with people who aren’t ready to buy yet. This is a critical blind spot, and it’s one I’ve written about before in the context of performance marketing’s tendency to claim credit for demand it didn’t create.

The data sources that genuinely improve creative are the ones most teams underuse:

Search query reports. What exact language are people using when they’re looking for what you sell? Not keyword categories, actual queries. The gap between how a brand describes its product and how customers search for it is often where the most useful creative insights live. I’ve seen brands spend months A/B testing headline copy when the answer was sitting in their search query report the whole time. A short sketch after this list shows what mining that report can look like in practice.

Customer verbatims. Reviews, support tickets, sales call transcripts, survey open-text fields. This is your audience’s vocabulary, unfiltered. When a customer writes “I was sick of not being able to see what was happening until it was too late,” that’s a creative brief in one sentence. No amount of quantitative analysis produces that level of specificity.

Session recordings and heatmaps. Tools like Hotjar give you behavioural data that explains why conversion rates look the way they do. If users are dropping off consistently at the same point on a landing page, that’s a creative and messaging problem as much as a UX problem. Fixing the layout without fixing the message is treating a symptom.

Audience overlap and affinity data. Understanding what else your audience reads, watches, and buys gives you context for creative tone, reference points, and cultural positioning. It’s the difference between knowing your audience is “25-44, urban, high income” and knowing they follow specific publications, care about particular issues, and respond to a certain register of communication.
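
To make the first two of these sources concrete, here is a minimal sketch of phrase mining over a search terms export. It assumes a CSV with a query column; the file name and column are placeholders for whatever your platform actually exports.

```python
import csv
from collections import Counter

def top_phrases(path, n=2, limit=20):
    """Count the most common n-word phrases in a search terms export."""
    phrases = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            words = row["query"].lower().split()
            # Collect every n-word window from each query.
            phrases.update(
                " ".join(words[i:i + n]) for i in range(len(words) - n + 1)
            )
    return phrases.most_common(limit)

# Hypothetical export file; most ad platforms offer an equivalent report.
for phrase, count in top_phrases("search_terms_report.csv"):
    print(f"{count:5d}  {phrase}")
```

Comparing the top phrases against the copy on your landing pages usually exposes the vocabulary gap described above, and the same function works unchanged on review text or support ticket subject lines.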

How to Brief Creative Using Data Without Killing the Creative

This is where most data-driven creative processes go wrong. The data gets handed to the creative team as a set of constraints rather than a set of provocations. “The data shows our audience prefers short-form content” becomes a brief that says “make it short.” That’s not a brief. That’s a format spec.

I think about a Guinness brainstorm I was pulled into early in my career at a digital agency. The founder handed me the whiteboard pen mid-session and left for a client meeting. My immediate internal reaction was something close to panic. But it forced me to think about what the data and the brief were actually asking for, rather than what the room expected. The insight that emerged wasn’t from the data alone. It came from asking what the data was pointing at underneath the numbers.

That’s the skill. Data tells you the shape of the problem. Creative solves the problem. The brief is the translation layer between them.

A data-informed brief should include:

Audience behaviour, not audience demographics. What does this audience do when they encounter the problem your product solves? What do they search for? What language do they use? What alternatives do they consider? Demographics tell you who they are. Behaviour tells you how to reach them.

The message that needs to land, not the format it should take. If your data shows that long-form video outperforms short-form, the insight isn’t “make long videos.” It’s “this audience needs time to process this message before they’re ready to act.” That insight could be executed in long video, in a multi-touchpoint sequence, or in a piece of content that earns time through quality. The format follows the insight, not the other way around.

The competitive context. What does the creative landscape look like in your category? If every competitor is running testimonial-led creative, that’s useful information. It might mean testimonials are table stakes, or it might mean there’s space for a brand that does something different. Understanding your market position shapes what creative needs to do, not just what it needs to say.

The Problem With A/B Testing as Your Primary Creative Signal

A/B testing has become the default methodology for creative optimisation, and it’s both genuinely useful and systematically overrated.

It’s useful because it gives you a controlled comparison of two variables. It’s overrated because most A/B tests are run at too small a scale to be statistically meaningful, test the wrong variables (button colour instead of value proposition), and measure short-term conversion rather than long-term brand impact.
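
To put a number on “too small a scale”, here is a back-of-envelope sample size check using the standard two-proportion approximation. The baseline and lift figures are illustrative only.

```python
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors per variant to detect a shift from p1 to p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Detecting a 10% relative lift on a 3% baseline (3.0% -> 3.3%) needs
# roughly 53,000 visitors per variant. Most creative tests see far less.
print(round(sample_size_per_variant(0.03, 0.033)))
```

The exact figure matters less than the order of magnitude: small lifts at low base rates demand traffic volumes that most creative tests never accumulate.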

More fundamentally, A/B testing optimises within a defined solution space. If both variants are built on the same flawed brief, the test will tell you which flawed execution is marginally better. It won’t tell you that you’ve been solving the wrong problem.

I’ve judged the Effie Awards, which recognise marketing effectiveness rather than creative awards-show performance. The campaigns that consistently win aren’t the ones that were most rigorously A/B tested. They’re the ones that identified a genuine human insight and committed to it with enough consistency and reach to actually shift behaviour. Data informed those campaigns. It didn’t design them.

The increasing complexity of go-to-market execution means that creative decisions are being made under more pressure and with more data than ever before, which paradoxically makes it harder to see clearly. More signals create more noise. The discipline is knowing which signals to weight.

Balancing Upper and Lower Funnel Creative Signals

Earlier in my career, I made the mistake of overweighting lower-funnel performance data when evaluating creative. It looked like rigour. In retrospect, it was a bias toward the measurable over the meaningful.

Lower-funnel data is clean and immediate. Someone clicked, someone converted, you can trace the path. Upper-funnel data is messier. Brand awareness lifts, consideration shifts, changes in search volume for branded terms. These signals take longer to accumulate and are harder to attribute to specific creative decisions. So teams deprioritise them.

The consequence is creative that gets very good at converting people who were already going to convert, while doing nothing for the much larger group of people who don’t yet know they need what you’re selling. This is a growth ceiling problem dressed up as a performance optimisation success story.

Think about it this way. Someone who walks into a clothes shop and tries something on is far more likely to buy than someone browsing online. But the shop still has to get people through the door. If you spend all your creative budget optimising the fitting room experience and nothing on getting people into the shop, you will eventually run out of people to convert. The same logic applies to marketing funnels. Sustainable growth requires reaching new audiences, not just capturing existing intent more efficiently.

The data insight here is simple but underused: track what’s happening at the top of your funnel with the same rigour you apply to the bottom. Branded search volume trends, share of voice, direct traffic growth. These are imperfect proxies for brand-building effectiveness, but they’re better than ignoring the question entirely.
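
As one example of applying that rigour, a minimal sketch of trend-tracking branded search volume, assuming a weekly CSV export. The file and column names are hypothetical.

```python
import pandas as pd

# Weekly branded search volume, e.g. exported from Google Search Console.
df = pd.read_csv("branded_search_weekly.csv", parse_dates=["week"])
df = df.sort_values("week").set_index("week")

# Smooth weekly noise with a four-week rolling average, then compare each
# week with the same point a quarter (13 weeks) earlier.
df["smoothed"] = df["branded_searches"].rolling(4).mean()
df["qoq_change"] = df["smoothed"].pct_change(13)

print(df[["branded_searches", "smoothed", "qoq_change"]].tail(8))
```

A sustained upward trend here is an imperfect proxy, as noted, but it answers a question that lower-funnel dashboards never ask.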

Building a Creative Intelligence System That Actually Scales

The goal isn’t a one-time data audit that improves your next campaign. It’s a systematic process that continuously feeds better information into creative development. This is harder than it sounds, because it requires different teams to share data in ways that don’t naturally happen in most organisations.

When I was growing an agency from around 20 people to over 100, one of the structural problems we kept running into was the gap between the analytics team and the creative team. The analysts had data that would have been genuinely useful for briefing. The creative team had instincts that would have helped the analysts ask better questions. But they rarely talked, and when they did, it was usually in a post-campaign review rather than at the brief stage.

Fixing that required a process change, not just a culture change. We built a standing creative intelligence review into the planning cycle. Not a post-mortem. A pre-brief. Before any creative work started, the analytics lead would present three to five data-driven audience insights, and the creative lead would respond with what those insights suggested creatively. The conversation that followed was where the real briefing happened.

Forrester’s intelligent growth framework makes a similar point about the need to integrate data and decision-making at the process level, not just the tool level. The technology for creative intelligence is widely available. The organisational discipline to use it consistently is rarer.

For teams building this capability, a few practical principles:

Standardise what you measure across campaigns. Creative intelligence compounds over time, but only if you’re measuring the same things consistently. If you change your metrics every quarter, you can’t build a meaningful picture of what works and why.

Tag creative assets with structured metadata. Format, message type, audience segment, funnel stage, emotional register. When you can filter your performance data by these dimensions, you start to see patterns that aggregate metrics obscure. “Video outperforms static” is less useful than “problem-led video outperforms benefit-led video for audiences in the consideration stage.” A short sketch after these principles shows the filtering this tagging enables.

Build in a qualitative review alongside the quantitative one. Numbers tell you what happened. Customer feedback, sales team input, and social listening tell you why. Both are necessary. Neither is sufficient alone.

Treat creative testing as learning, not as a verdict. A creative that underperforms isn’t a failure if it teaches you something specific about your audience. A creative that overperforms without explaining why is a missed opportunity. The question after every test should be “what do we know now that we didn’t know before?” not just “which version won?”
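
Returning to the tagging principle above, here is a minimal sketch of what structured metadata makes possible. The assets, tags, and performance numbers are hypothetical.

```python
import pandas as pd

# Each asset carries structured tags alongside its performance figures.
assets = pd.DataFrame([
    {"asset": "vid_01", "format": "video",  "message": "problem-led",
     "stage": "consideration", "impressions": 48_000, "conversions": 410},
    {"asset": "vid_02", "format": "video",  "message": "benefit-led",
     "stage": "consideration", "impressions": 51_000, "conversions": 290},
    {"asset": "img_01", "format": "static", "message": "problem-led",
     "stage": "awareness", "impressions": 95_000, "conversions": 180},
])

# Aggregate by message type and funnel stage rather than format alone, so
# patterns like "problem-led beats benefit-led in consideration" can surface.
summary = assets.groupby(["message", "stage"])[["impressions", "conversions"]].sum()
summary["conv_rate"] = summary["conversions"] / summary["impressions"]
print(summary)
```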

BCG’s work on scaling agile capabilities is worth reading in this context, not because creative intelligence is an agile problem, but because the underlying challenge is the same: building organisational systems that learn faster than the environment changes.

Creative decisions don’t sit in isolation. They’re part of a broader commercial strategy, and the data that informs them should be understood in that context. The Go-To-Market and Growth Strategy hub covers how creative fits into the wider picture of market entry, audience development, and commercial planning.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What data should I use to improve creative marketing content?
The most useful data sources for creative development are search query reports (which reveal the exact language your audience uses), customer verbatims from reviews and support interactions, session recordings that show where and why users disengage, and audience affinity data that provides cultural and behavioural context. Standard performance metrics like click-through rates and conversion percentages are useful but measure historical behaviour against an existing audience, not creative effectiveness in the broader sense.
How do you avoid letting data kill creative quality?
Treat data as the input to the brief, not the brief itself. Data should tell you about audience behaviour, language, and context. It should not dictate format, tone, or execution. The brief is the translation layer between what the data reveals and what the creative team needs to solve. When data is handed to creatives as a set of constraints rather than a set of provocations, it produces work that is technically correct and creatively inert.
Is A/B testing reliable for creative decision-making?
A/B testing is useful for comparing specific variables within a defined solution space, but it has significant limitations as a primary creative signal. Most tests run at insufficient scale to be statistically meaningful, test surface-level variables rather than strategic ones, and measure short-term conversion rather than long-term brand impact. A/B testing optimises within a brief. It cannot tell you whether the brief itself is correct.
How do upper and lower funnel data differ in their creative implications?
Lower-funnel data (conversion rates, cost per acquisition, return on ad spend) tells you how effectively your creative is capturing existing demand. Upper-funnel data (branded search volume, share of voice, awareness and consideration metrics) tells you whether your creative is building demand with audiences who don’t yet know they need what you sell. Over-relying on lower-funnel data produces creative that gets better at converting a shrinking pool of already-interested people, while doing nothing to grow that pool.
How do I build a process for using data in creative briefing?
The most effective approach is to build a structured pre-brief review into your planning cycle, where analytics and creative leads review audience data together before the brief is written. Standardise what you measure across campaigns so you can identify patterns over time, tag creative assets with metadata (format, message type, funnel stage) so you can filter performance by creative dimension, and include a qualitative review alongside quantitative analysis. The goal is a system that feeds better information into creative development continuously, not just at the post-campaign review stage.
