Consumer Packaged Goods Market Research: What the Data Won’t Tell You
Consumer packaged goods market research is the process of gathering and interpreting data about shoppers, categories, competitors, and retail conditions to inform product, pricing, distribution, and marketing decisions. Done well, it reduces commercial risk and sharpens where and how a CPG brand competes. Done poorly, it produces expensive decks that confirm what the team already believed and delays decisions that should have been made months earlier.
The CPG sector is one of the most research-saturated environments in marketing. Scanner data, panel data, shopper surveys, focus groups, shelf audits, social listening, and brand trackers all feed into planning cycles. The challenge is rarely a shortage of data. It is knowing which data to trust, which questions to ask before commissioning research, and how to translate findings into decisions that move the business forward.
Key Takeaways
- CPG market research fails most often not because of bad data, but because the wrong questions were asked before the research was commissioned.
- Scanner and panel data tell you what happened in the past. They cannot tell you why, and they say nothing about where the category is heading.
- Segmentation is only useful if it connects to a decision. Segments that cannot be reached, priced to, or activated against are academic exercises.
- Qualitative research in CPG is consistently underused and undervalued, despite being the fastest way to understand the gap between what shoppers say and what they do.
- The most commercially useful market research programs are built around a small number of high-stakes decisions, not a comprehensive audit of everything the team is curious about.
In This Article
- Why CPG Market Research Produces So Much Data and So Few Decisions
- What the Main CPG Data Sources Actually Tell You
- How to Build a Segmentation That Actually Drives Decisions
- Where Qual Fits in a CPG Research Program
- How to Use Competitive Intelligence Without Fooling Yourself
- The Specific Mistakes CPG Brands Make in Research Design
- How to Connect Research to Commercial Planning
- What Good CPG Market Research Actually Looks Like
Why CPG Market Research Produces So Much Data and So Few Decisions
I have sat in enough CPG planning sessions to notice a pattern. The research presentation runs to 80 slides. The first 60 describe the category, the shopper, the competitive set, and the brand. The last 20 contain the implications. Nobody disagrees with the implications. And then the team goes back to doing roughly what it was doing before, because the research did not actually resolve the debate that was causing the inertia.
This is not a data quality problem. It is a question design problem. The most expensive mistake in CPG market research is commissioning a study before agreeing on what decision it is meant to inform. If the team cannot articulate the specific choice the research will help them make, the research will describe the world in detail without changing how anyone acts in it.
Before a single survey is written or a single data feed is pulled, the research brief should answer three questions. What decision are we trying to make? What would we do differently if the data came back one way versus another? And what is the minimum confidence level we need before we act? If those three questions do not have clear answers, the research budget is better spent elsewhere.
If you want a broader view of how to structure research programs that connect to commercial outcomes rather than just generating outputs, the Market Research and Competitive Intel hub covers the full range of approaches, from category analysis to competitive positioning frameworks.
What the Main CPG Data Sources Actually Tell You
The CPG industry runs on a handful of data sources that every brand manager knows by name. Understanding what each source is genuinely good for, and where it breaks down, is more useful than knowing how to pull a report from each platform.
Scanner data from retail point-of-sale systems gives you volume, value, distribution, and pricing at the SKU level across participating retailers. It is the closest thing CPG has to ground truth on what is selling. The limitation is that it is backward-looking by definition, it does not capture all channels, and it tells you nothing about why the numbers moved. A brand that loses two points of market share in a quarter knows something went wrong. Scanner data will not tell it whether the cause was a competitor’s promotion, a distribution gap, a pricing error, or a product quality issue.
Panel data tracks purchase behaviour at the household level over time. It is the right tool for understanding penetration versus frequency, for identifying whether growth is coming from new buyers or existing ones, and for modelling switching behaviour between brands. The weakness is panel size. In smaller categories or regional markets, the sample can be thin enough that the numbers become directional at best.
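To make the penetration-versus-frequency question concrete, here is a minimal sketch of the kind of growth decomposition analysts run on panel data. It is illustrative only: the figures are invented, and real panel providers apply more sophisticated attribution than this one-at-a-time approach.

```python
# Minimal sketch: splitting value growth from panel data into penetration
# (more buying households), frequency (more trips per buyer), and spend
# (more per trip). All figures are illustrative, not real category data.

def decompose_growth(base, current):
    """Attribute value growth to penetration, frequency, and spend effects.

    Each period is a dict with:
      buyers - number of buying households in the panel
      trips  - average purchase occasions per buyer
      spend  - average spend per occasion
    """
    value = lambda p: p["buyers"] * p["trips"] * p["spend"]
    total = value(current) - value(base)

    # Change each driver while holding the others at base levels
    # (simple one-at-a-time attribution; interactions land in 'residual').
    penetration = (current["buyers"] - base["buyers"]) * base["trips"] * base["spend"]
    frequency = base["buyers"] * (current["trips"] - base["trips"]) * base["spend"]
    spend = base["buyers"] * base["trips"] * (current["spend"] - base["spend"])
    residual = total - (penetration + frequency + spend)
    return {"total": total, "penetration": penetration,
            "frequency": frequency, "spend": spend, "residual": residual}

base = {"buyers": 10_000, "trips": 4.0, "spend": 3.50}
current = {"buyers": 11_000, "trips": 4.2, "spend": 3.50}
result = decompose_growth(base, current)
```

In this toy example, most of the growth comes from new buyers rather than heavier buying, which would point the brand toward trial-driving activity rather than loyalty mechanics. That is the kind of directional read panel data is good for, subject to the sample-size caveat above.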
Brand tracking surveys measure awareness, consideration, preference, and usage over time. They are useful for monitoring the health of a brand relative to competitors and for detecting early signals of equity erosion before it shows up in sales. They are not useful for understanding the mechanisms behind the numbers. A brand tracker that shows declining consideration does not tell you whether the problem is advertising reach, product experience, pricing perception, or distribution.
Shopper research, which includes everything from in-store observation to eye-tracking studies to accompanied shops, is where CPG brands tend to underinvest. It is also where some of the most commercially valuable insights come from, because it captures behaviour at the moment of decision rather than a post-rationalised account of it. I have seen qualitative shopper research overturn months of quantitative analysis because it revealed that shoppers were not reading the claims on pack that the brand had built its entire communication strategy around.
How to Build a Segmentation That Actually Drives Decisions
Segmentation is one of the most commissioned and least used outputs in CPG market research. Most brands have a segmentation sitting somewhere in a shared drive that was built two or three years ago, presented at a planning offsite, and then quietly shelved because nobody could work out how to use it in practice.
Segmentations usually fail for one of three reasons. The segments are defined by attitudes rather than behaviours, which makes them difficult to identify and reach in the real world. The segments are too granular, producing eight or ten distinct groups when the business can only meaningfully target two or three. Or the segmentation was built to describe the category rather than to answer a specific strategic question about where the brand should compete.
A useful CPG segmentation starts with a commercial question. Which shopper groups represent the biggest opportunity for this brand, given its current positioning, price point, and distribution footprint? That question immediately constrains the segmentation design. You are not trying to map every type of shopper in the category. You are trying to identify the groups where the brand has a credible right to win and a realistic path to reaching them.
The segments that tend to be most actionable in CPG are built around purchase occasion and need state rather than demographics or psychographics alone. A shopper buying a snack for a school lunchbox is in a fundamentally different decision-making mode from the same person buying a snack for themselves on a Friday afternoon. The product might be identical. The category, the competitive set, the price sensitivity, and the communication that will reach them are all different.
Once segments are defined, test them against three criteria before committing to them. Can you reach each segment through identifiable media or retail touchpoints? Can you price and promote differently to each segment without creating channel conflict? And is each segment large enough to justify a distinct commercial strategy? If the answer to any of those is no, the segmentation needs to be simplified before it goes anywhere near a planning document.
Where Qual Fits in a CPG Research Program
There is a tendency in CPG to treat qualitative research as a precursor to the “real” research, a soft warm-up before the quantitative study that will produce the numbers the business can act on. This is the wrong way to think about it.
Qualitative research is the right tool for a specific set of problems: understanding the language shoppers use to describe a need (which informs packaging copy and media messaging), identifying the barriers that prevent trial or repeat purchase, and exploring the emotional and social context around consumption occasions. None of those problems are well-served by a survey. A survey can confirm that a barrier exists and measure how widespread it is. It cannot explain what the barrier actually feels like to the person experiencing it, or what would need to change for them to behave differently.
I worked with a food brand that had been trying to grow its evening meal occasion for two years. The brand tracker showed flat consideration. The survey data showed that shoppers found the product “convenient” but not “special enough” for a proper meal. The team had spent months debating whether the answer was a new product format or a premium packaging tier. Six focus groups revealed something the surveys had not captured: shoppers associated the brand with “not really cooking,” which carried a mild social stigma in the context of feeding a family. The solution was not a new product. It was a shift in how the brand positioned the preparation ritual. That insight came from qual, not quant.
How to Use Competitive Intelligence Without Fooling Yourself
Competitive analysis in CPG tends to suffer from two opposite problems. Either teams track competitors obsessively and end up in a permanent reactive mode, chasing every new SKU and promotional mechanic without a clear view of their own strategy. Or they ignore competitive signals entirely until a new entrant has already taken meaningful share.
The useful middle ground is a structured competitive monitoring program that distinguishes between signals worth acting on and signals worth noting. A competitor launching a new flavour extension is worth noting. A competitor changing its price architecture or securing a new distribution partnership is worth acting on, because those moves have structural implications for the category that will not reverse quickly.
Scanner data is the most reliable source for competitive tracking in CPG because it reflects actual shelf performance rather than stated intent. A competitor’s press release about a new product launch tells you nothing about whether shoppers are buying it. Six weeks of scanner data tells you exactly how it is performing against its distribution footprint.
One area where competitive intelligence in CPG is consistently weak is digital. Most CPG brands have a reasonable view of what competitors are doing in-store and on pack. They have a much weaker view of what competitors are doing in search, in social, and in direct-to-consumer channels. Tools like SEMrush’s digital marketing resources can help teams build a picture of competitor digital footprints without commissioning a full agency study. It is not a substitute for category data, but it fills a gap that scanner data cannot address.
The BCG perspective on the economics of information is worth revisiting in this context. The brands that use competitive intelligence well are not the ones with the most data. They are the ones with the clearest view of what they are trying to learn and why it matters to their strategy.
The Specific Mistakes CPG Brands Make in Research Design
After two decades of working with brands across thirty-odd industries, including a significant stretch managing research programs for FMCG clients, I see a few research design errors come up repeatedly in CPG. They are worth naming directly.
The first is asking shoppers to evaluate concepts in isolation. Concept testing that shows a product to respondents without competitive context produces optimistic results almost every time, because shoppers are not being asked to choose. They are being asked to react. In a real category, a shopper is always choosing between options. Research that does not replicate that trade-off is measuring something that does not exist.
The second is using claimed purchase intent as a proxy for actual purchase behaviour. The gap between what shoppers say they will do and what they actually do at shelf is one of the most well-documented phenomena in consumer research. High purchase intent scores in a concept test do not reliably predict trial rates in market. They are useful for screening and ranking concepts relative to each other. They should not be used to forecast volume.
The third is commissioning research to validate a decision that has already been made. This happens more often than anyone admits. A team has already chosen a new pack design or a new brand architecture. The research is commissioned to give the decision credibility with senior stakeholders. The research design is unconsciously shaped to produce a favourable result. The brand ends up with a deck full of supportive data and no honest assessment of the risks. I have seen this pattern produce some genuinely expensive mistakes.
The fourth is treating the research agency as a vendor rather than a thinking partner. The best research agencies in CPG have seen the same category dynamics play out across dozens of brands. They know which methodologies tend to produce actionable outputs and which ones produce interesting-but-useless outputs. Briefing them with a rigid methodology already specified, rather than a clear commercial question, wastes that expertise.
How to Connect Research to Commercial Planning
The point at which most CPG market research programs lose value is the handoff between the research team and the commercial team. The research is delivered. The findings are presented. And then the research lives in a separate universe from the annual planning process, the trade marketing calendar, and the brand investment decisions.
Closing that gap requires two things. First, research outputs need to be translated into commercial language before they reach a planning meeting. “Shoppers perceive the brand as premium but not worth the price premium” is a research finding. “We have a pricing credibility problem that is suppressing repeat purchase in the 35-50 demographic, which accounts for 40% of category value” is a commercial problem that a planning team can act on.
Second, the research calendar needs to be aligned with the planning calendar. Research that lands in October, when the annual plan is already locked, is decorative. Research that lands in June, when the brand team is building the brief for the following year, has a chance of actually changing something.
I spent a period running performance marketing for a business where the research and the commercial planning functions operated on completely different cycles. The research team produced excellent work. The commercial team made decisions without it, because the timing never aligned. When we finally restructured the research calendar to feed into the planning process at the right moments, the quality of the briefs improved noticeably within one cycle. Not because the research got better. Because it arrived when it could be used.
Forrester’s work on alignment between sales and marketing functions makes a parallel point about how structural disconnects between teams produce worse outcomes regardless of individual capability. The same dynamic applies to research and commercial planning in CPG.
There is also a measurement dimension worth considering here. Once a research-informed decision has been implemented, whether it is a pack redesign, a new occasion strategy, or a pricing adjustment, the team needs a framework for evaluating whether it worked. That means defining success metrics before the change goes live, not after. The research that informed the decision should also inform what you measure to assess it.
For a broader view of how market research connects to competitive strategy and planning, the articles in the Market Research and Competitive Intel section cover the frameworks that tend to be most useful across different commercial contexts.
What Good CPG Market Research Actually Looks Like
A well-designed CPG research program is not comprehensive. It is selective. It focuses the available budget on the questions that carry the most commercial consequence and accepts that some things will remain uncertain.
It starts with a clear decision map: a list of the specific choices the brand team needs to make in the next 12 to 18 months, ranked by commercial impact and current level of uncertainty. High impact, high uncertainty decisions get research investment. Low impact decisions get resolved with existing data or educated judgment.
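The decision map described above can be sketched in a few lines. The decisions, scores, and threshold below are hypothetical placeholders; the point is the mechanic of ranking by impact times uncertainty and only funding research above a cutoff.

```python
# Sketch of a decision map: rank upcoming choices by commercial impact and
# uncertainty (each scored 1-5), and flag which ones merit research spend.
# Decisions, scores, and the threshold are hypothetical examples.

decisions = [
    # (decision, impact 1-5, uncertainty 1-5)
    ("Enter the premium price tier", 5, 4),
    ("Refresh secondary pack copy", 2, 2),
    ("Launch an evening-occasion range", 4, 5),
    ("Renew existing retailer listing", 4, 1),
]

def decision_map(items, research_threshold=12):
    """Rank decisions by impact x uncertainty; high scores get research budget."""
    ranked = sorted(items, key=lambda d: d[1] * d[2], reverse=True)
    return [(name, impact * unc, impact * unc >= research_threshold)
            for name, impact, unc in ranked]

for name, score, needs_research in decision_map(decisions):
    print(f"{score:>2}  {'RESEARCH' if needs_research else 'judgment'}  {name}")
```

High-impact, high-uncertainty items rise to the top and get research investment; low scorers get resolved with existing data or educated judgment, exactly as the decision map intends.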
It uses the right methodology for each question rather than defaulting to a standard toolkit. Pricing decisions need conjoint analysis or price sensitivity modelling, not a brand tracker question. Occasion development needs ethnographic or observational research, not a survey. Communication testing needs exposure to finished or near-finished executions in a realistic context, not a concept board.
It produces outputs that are short enough to be read and specific enough to be acted on. A 12-slide research summary that ends with three clear implications is more valuable than an 80-slide deck that ends with a page of generic recommendations. The job of the research team is not to present all the data. It is to make the decision easier for the people who have to make it.
And it is honest about what the data cannot tell you. Every research program has limits. Sample sizes constrain confidence. Claimed behaviour diverges from actual behaviour. Category conditions change between fieldwork and decision. A research team that presents its findings with appropriate caveats is more useful than one that projects false certainty to make the client feel comfortable. I have always preferred a researcher who tells me what the data does not support over one who tells me only what I want to hear.
The Optimizely perspective on building experimentation programs is a useful parallel here. The brands that get the most value from research are the ones that treat it as an ongoing learning system rather than a periodic audit. They test, measure, learn, and adjust. The research does not answer every question. It reduces uncertainty enough to act with more confidence than you would have had otherwise.
That is what market research is for in CPG, or in any category. Not certainty. Not validation. A more honest picture of the world you are competing in, delivered at a time when you can still do something with it.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
