Coca-Cola’s Shift From Product to Image: What It Taught Us About Brand Measurement
Between 1958 and 1969, Coca-Cola made one of the most consequential strategic decisions in advertising history. It stopped selling a fizzy drink and started selling a feeling. The product stayed the same. The marketing changed almost everything else. What makes this period worth studying now is not the creativity; it is what it revealed about how brand value is built, how it is measured, and why the gap between the two has never fully closed.
The shift from physical attributes to image-led advertising was not a philosophical experiment. It was a commercial response to market saturation. When you cannot credibly claim your cola tastes better than everyone else’s, you start competing on something harder to copy: meaning. That logic holds today across almost every category, and yet most marketing measurement frameworks are still built to track the physical, the transactional, and the immediate.
Key Takeaways
- Coca-Cola’s 1958 to 1969 marketing shift was a deliberate commercial response to product parity, not an aesthetic choice, and it redefined how brand value could be built without changing the product itself.
- Image-led advertising creates value that standard performance metrics cannot capture, which is why so many modern analytics dashboards undercount the contribution of brand investment.
- The measurement problem Coca-Cola exposed in the 1960s is structurally the same problem marketers face today: the things that move the needle long-term are the hardest to attribute in the short term.
- Separating brand metrics from performance metrics is not a creative indulgence; it is an analytical necessity if you want an honest view of what your marketing is actually doing.
- Most analytics tools are built to measure demand capture, not demand creation, which means they will always undervalue the kind of marketing Coca-Cola was pioneering in this period.
In This Article
- What Was Coca-Cola Actually Doing Between 1958 and 1969?
- Why This Matters to Anyone Running Marketing Analytics Today
- The Measurement Problem Coca-Cola Exposed Without Knowing It
- What Image Advertising Actually Changes in Consumer Behaviour
- How to Separate Brand Metrics From Performance Metrics Without Losing Either
- The Share of Search Signal Worth Watching
- What Coca-Cola’s Campaigns Tell Us About Dashboard Design
- The Honest Approximation Problem
What Was Coca-Cola Actually Doing Between 1958 and 1969?
The pre-1958 Coca-Cola campaign language was largely functional. It leaned on refreshment, taste, and availability. “The pause that refreshes” was still in circulation. The product was the hero. Then, gradually, the advertising began to change register. The imagery became warmer, more social, more aspirational. The drink was increasingly incidental to the scene being constructed around it.
By the mid-1960s, Coca-Cola’s advertising was explicitly selling belonging, happiness, and shared experience. The 1963 campaign “Things Go Better with Coke” used popular music and youthful energy to position the brand alongside a cultural moment rather than a product benefit. The 1969 “It’s the Real Thing” campaign went further still, asserting authenticity as a brand value rather than making any claim about what the drink tasted like.
This was not accidental drift. It was a considered strategic move, driven by the reality that Pepsi was closing the taste gap in consumer perception and the product itself offered no defensible point of difference that advertising could credibly amplify. The response was to build an emotional moat instead.
Why This Matters to Anyone Running Marketing Analytics Today
I have spent a significant portion of my career looking at marketing dashboards, and the honest observation is that most of them are better at measuring what already happened than at explaining what caused it. When I was growing an agency from around 20 people to over 100, one of the recurring arguments I had with clients was about what counted as measurable success. Paid search clicks, yes. Email open rates, yes. The slow accumulation of brand preference that eventually makes someone click an ad they would otherwise have ignored, almost never.
That gap is exactly what Coca-Cola was navigating from 1958 onwards. They were investing in something that would not show up cleanly in any measurement system available at the time, because the payoff was diffuse, long-term, and embedded in cultural association rather than immediate purchase behaviour. The fact that we now have GA4, attribution modelling, and data-driven marketing frameworks does not mean we have solved this problem. We have just given it better-looking dashboards.
If you want a broader grounding in how analytics frameworks handle (and often mishandle) brand versus performance measurement, the Marketing Analytics hub at The Marketing Juice covers the full landscape, from attribution modelling to GA4 implementation, with the same commercially grounded perspective applied here.
The Measurement Problem Coca-Cola Exposed Without Knowing It
Here is the structural issue. Coca-Cola’s image-led campaigns in this period were building what we would now call mental availability. They were increasing the probability that, at the moment of purchase, a consumer would reach for a Coke rather than anything else, not because they consciously recalled an advertisement, but because the brand had accumulated enough positive emotional residue to feel like the natural choice.
That mechanism is almost impossible to measure in real time. You cannot draw a straight line from a 1965 television spot to a vending machine purchase in 1967. The contribution is real, but it is distributed across time and context in ways that resist clean attribution.
This is not a vintage problem. Forrester has written about how standard measurement approaches undermine the buyer’s experience by focusing too heavily on last-touch signals and too lightly on the cumulative brand interactions that precede conversion. The same critique applies to most GA4 implementations I have reviewed. They are technically sophisticated and commercially incomplete.
When I was judging the Effie Awards, the campaigns that impressed me most were invariably the ones that had found a way to articulate long-term brand contribution alongside short-term performance data. Not one or the other. Both, with an honest account of what each was doing. That combination is rare, and it requires a measurement philosophy that most organisations have not yet built.
What Image Advertising Actually Changes in Consumer Behaviour
The practical effect of Coca-Cola’s image shift was not just emotional resonance. It changed the economics of the brand in measurable ways, even if the measurement was lagged and indirect. Premium pricing became more defensible. Distribution negotiations became easier because retailers wanted to stock what consumers were asking for. Competitive switching reduced because the brand had accumulated associations that a competitor could not replicate simply by matching the product formula.
These are business outcomes. They show up eventually in revenue, margin, and market share data. But they show up months or years after the advertising investment that caused them, which creates a persistent temptation to underinvest in brand and over-rotate toward performance channels that produce faster, cleaner signals.
I have seen this play out in practice more times than I can count. A client cuts brand spend to protect short-term ROAS, performance numbers hold steady for two or three quarters because the brand equity built over previous years is still doing its work, and then the floor drops. By the time the decline registers in the dashboard, the cause is 18 months old and the brand team has already been restructured.
The GA4 event tracking frameworks covered by Moz are genuinely useful for measuring engagement and conversion behaviour, but they will not catch that kind of slow-burn brand erosion. That requires a different set of metrics, tracked over a different time horizon, with a different tolerance for ambiguity.
How to Separate Brand Metrics From Performance Metrics Without Losing Either
The practical lesson from the Coca-Cola period is not that brand advertising is better than performance advertising. It is that they operate differently and need to be measured differently. Treating them as the same thing, or worse, measuring brand campaigns against performance benchmarks, produces systematically misleading results.
A workable separation looks something like this.
- Performance metrics: conversion rate, cost per acquisition, return on ad spend, and revenue directly attributable to campaign activity. These are short-window, high-confidence numbers. They tell you what happened.
- Brand metrics: aided and unaided recall, brand preference scores, net promoter score trends, share of search, and direct traffic growth over time. These are longer-window, lower-confidence numbers. They tell you what is building.
Neither set is sufficient on its own. A business running only on performance metrics is harvesting demand it is not replacing. A business tracking only brand metrics is producing beautiful sentiment data with no line to revenue. The combination, tracked separately but reviewed together, is where honest marketing measurement lives.
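As a sketch of that separation, with invented figures and metric names chosen purely for illustration, the two tracks might be kept side by side like this:

```python
# Hypothetical illustration of keeping performance and brand metrics as
# separate tracks that are reviewed together. All numbers are invented.

performance_metrics = {          # short-window, high-confidence
    "conversion_rate": 0.031,    # conversions / sessions
    "cost_per_acquisition": 42.0,
    "roas": 4.2,                 # revenue / ad spend
}

brand_metrics = {                # longer-window, lower-confidence
    "unaided_recall": 0.18,      # share of survey respondents
    "share_of_search": 0.37,     # branded searches vs category total
    "direct_traffic_yoy": 0.09,  # year-on-year growth in direct visits
}

def review(performance: dict, brand: dict) -> None:
    """Report both tracks together, but never blend them into one score."""
    print("Performance (what happened):")
    for name, value in performance.items():
        print(f"  {name}: {value}")
    print("Brand (what is building):")
    for name, value in brand.items():
        print(f"  {name}: {value}")

review(performance_metrics, brand_metrics)
```

The point of the structure is that nothing forces the two dictionaries into a single blended number; they are reported side by side but read on different time horizons and with different confidence.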
MarketingProfs has outlined practical approaches to web analytics that start from a similar premise: know what question you are trying to answer before you decide which metric answers it. That sounds obvious. It is not practised as often as it should be.
The Share of Search Signal Worth Watching
One of the more useful modern proxies for brand health is share of search, the proportion of branded search volume your brand captures relative to competitors in the same category. It is not a perfect measure, but it correlates reasonably well with market share over time, and it is accessible through tools most organisations already have.
If Coca-Cola had been able to track share of search during the 1958 to 1969 period, it would have seen the brand equity accumulation in real time. Instead, the evidence was indirect: sales data, distribution growth, pricing power. The mechanism was the same. The signal was just slower and noisier.
Today, a sustained decline in branded search volume, especially when performance metrics are still holding, is one of the cleaner early warning signs that brand investment has been underfunded. I have used it as a diagnostic tool in several turnaround situations, and it tends to surface problems that the standard reporting pack has been smoothing over.
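As a rough sketch of that diagnostic, with invented search volumes and a deliberately crude decline rule, share of search can be computed and monitored like this:

```python
# Minimal sketch of share of search as a brand-health proxy.
# All brand volumes here are invented for the example.

def share_of_search(own: float, category_total: float) -> float:
    """Branded search volume for your brand divided by total branded
    search volume across the category (your brand included)."""
    return own / category_total if category_total else 0.0

def sustained_decline(series: list[float], months: int = 3) -> bool:
    """Flag a strictly falling run over the last `months` observations,
    a crude early-warning rule for underfunded brand investment."""
    tail = series[-months:]
    return all(later < earlier for earlier, later in zip(tail, tail[1:]))

# Invented monthly volumes: our branded searches vs the category total.
own = [52_000, 51_200, 49_800, 48_100, 46_300]
category = [140_000, 141_000, 142_000, 143_000, 144_000]

shares = [share_of_search(o, c) for o, c in zip(own, category)]
if sustained_decline(shares):
    print("Warning: branded search share has fallen for 3 straight months")
```

A real implementation would smooth for seasonality and sampling noise before flagging anything; a strictly falling three-month run is only the simplest possible rule, but it captures the shape of the signal described above.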
The GA4 preparation frameworks from Moz are a reasonable starting point for setting up the tracking infrastructure that makes this kind of analysis possible, though you will need to layer brand tracking data from outside GA4 to complete the picture.
What Coca-Cola’s Campaigns Tell Us About Dashboard Design
If you are building a marketing dashboard today, the Coca-Cola case is a useful stress test. Ask yourself: if this brand had been running our current dashboard in 1965, would the image-led campaigns have looked like they were working? In most standard setups, the answer is no. There would have been no direct conversion signal, no last-click attribution, no measurable lift in the metrics the dashboard was designed to track.
That is a dashboard design problem, not a campaign performance problem. Forrester’s thinking on marketing dashboard automation makes the point that automation should follow measurement strategy, not precede it. Build the right questions first. Then build the dashboard that answers them. Most organisations do it the other way around and end up with a very efficient system for measuring the wrong things.
Early in my career, I asked for budget to build a new website and was told no. So I taught myself to code and built it anyway. The experience taught me something about measurement that has stayed with me: the most important things are often the ones the existing system was not set up to capture. The website I built could not be tracked by the reporting infrastructure we had at the time. That did not mean it was not working. It meant the reporting infrastructure needed to catch up.
The same logic applies to brand investment. The absence of a clean signal in your current dashboard does not mean the activity is not generating value. It may mean your dashboard was not designed to see it.
MarketingProfs’ framework for marketing dashboard creation covers the foundational thinking well: start with the decisions the dashboard needs to support, then work backwards to the metrics that inform those decisions. That approach would have surfaced the brand measurement gap much earlier in most organisations I have worked with.
The Honest Approximation Problem
Marketing does not need perfect measurement. It needs honest approximation. That is a distinction worth making carefully, because the pursuit of perfect measurement has led a lot of organisations to invest heavily in attribution technology while quietly abandoning the kinds of brand investment that do not fit neatly into an attribution model.
Coca-Cola’s leadership in 1965 did not have econometric modelling or multi-touch attribution. They had sales data, consumer research, and commercial judgement. They made a bet that image-led advertising would build durable competitive advantage, and they were right. The measurement came later, in the form of market share, pricing power, and brand longevity that is still generating value more than half a century on.
The lesson is not to ignore measurement. It is to be honest about what your measurement system can and cannot see, and to make sure that limitation does not quietly shape your investment decisions in ways you have not consciously chosen.
When I was managing hundreds of millions in paid search spend across multiple clients, the temptation was always to optimise toward the measurable signal. The clients who grew fastest over time were the ones who resisted that pull and maintained brand investment even when the dashboard could not explain why. That is not faith. That is commercial literacy.
The broader question of how analytics frameworks handle the brand versus performance split, and where most organisations are getting it wrong, is covered in depth across the Marketing Analytics section of The Marketing Juice. If you are building or reviewing a measurement framework, it is worth working through the full picture before committing to a particular setup.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
