Conjoint Analysis: What Customers Won’t Tell You Directly

Conjoint analysis is a market research method that reveals how people make trade-offs between product features, price, and other attributes when making purchasing decisions. Instead of asking customers what they want (which produces optimistic, unreliable answers), conjoint analysis presents them with realistic choices and infers their true preferences from the decisions they make.

The result is a ranked, quantified view of what actually drives purchase behaviour, not what people claim drives it. For pricing decisions, product development, and positioning work, that distinction is worth a great deal.

Key Takeaways

  • Conjoint analysis reveals actual purchase trade-offs by observing choices, not by asking customers to rate features in isolation, which consistently inflates stated importance.
  • The method is most valuable for pricing decisions, feature prioritisation, and positioning work where the cost of being wrong is high.
  • Choice-based conjoint (CBC) is the most widely used format because it mirrors how people actually shop, making the data more predictive of real behaviour.
  • Conjoint outputs are only as good as the attributes and levels you design into the study. Garbage in, garbage out applies here more than almost anywhere in research.
  • The most common mistake is running conjoint analysis after decisions have already been made, using it to validate rather than inform strategy.

Why Standard Customer Research Fails on Feature Preference

Early in my career, I sat through more customer research readouts than I can count where the findings were some variation of: customers want better quality, faster delivery, and lower prices. Every time. From every audience. In every category. The insight was useless because the question was useless.

When you ask people what they want, they tell you everything. When you ask them to choose between two realistic options, they tell you what they actually value. That is the core problem conjoint analysis was designed to solve.

Traditional survey approaches ask respondents to rate or rank features independently. How important is next-day delivery? Very important. How important is a lower price? Very important. How important is a premium brand name? Also very important. Everything ends up important because there is no cost to saying so. Respondents are not being dishonest. They are just not being forced to make the kind of real-world compromises that purchasing decisions require.

Conjoint analysis solves this by embedding trade-offs directly into the research design. You cannot say you want everything when you are asked to pick between Option A (next-day delivery, higher price, mid-tier brand) and Option B (three-day delivery, lower price, premium brand). The choice itself reveals the preference hierarchy.

This is also why conjoint is particularly valuable for pricing work. Asking someone what they would pay for a product almost always produces a number lower than their actual willingness to pay. Presenting them with a realistic set of options that include price as one of several attributes produces a far more honest signal.

How Conjoint Analysis Actually Works

The mechanics are worth understanding even if you are commissioning rather than running the research yourself, because the design decisions made at the outset determine how useful the output will be.

A conjoint study begins with identifying the attributes you want to test. These are the dimensions of the product or service that vary and that customers might care about. Price, delivery speed, warranty length, brand, colour, packaging format, and contract terms are all examples. You then define the levels within each attribute. Price might have three levels: £19.99, £29.99, and £39.99. Delivery speed might have two: next day and three to five days.

The research instrument then presents respondents with a series of choice tasks. Each task shows two or more product configurations (called profiles) and asks the respondent to choose their preferred option, sometimes with a “none of the above” option included. Across many such tasks, statistical analysis extracts the relative importance of each attribute and the utility (called a part-worth) associated with each level within that attribute.
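The structure described above can be sketched in a few lines of code. This is a minimal illustration using the hypothetical attributes and levels mentioned in the text; real platforms use balanced experimental designs rather than random draws, so treat this as a sketch of the data structures, not a fieldable design.

```python
import itertools
import random

# Hypothetical attributes and levels, following the examples in the text.
attributes = {
    "price": ["£19.99", "£29.99", "£39.99"],
    "delivery": ["next day", "3-5 days"],
    "brand": ["premium", "mid-tier"],
    "warranty": ["1 year", "3 years"],
}

# Every possible profile is one combination of levels (the full factorial).
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

def build_choice_tasks(profiles, n_tasks=10, options_per_task=3, seed=42):
    """Draw random choice tasks. Commercial tools use balanced designs
    so every level appears equally often; random draws only approximate that."""
    rng = random.Random(seed)
    return [rng.sample(profiles, options_per_task) for _ in range(n_tasks)]

tasks = build_choice_tasks(profiles)
print(len(profiles))  # 3 * 2 * 2 * 2 = 24 possible profiles
```

Each respondent would see a handful of these tasks and pick one option per task (or "none of the above"); those picks are the raw data the estimation step works from.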

The output tells you, quantitatively, how much each feature contributes to overall product preference. You might find that price accounts for 40% of the purchase decision, delivery speed for 25%, brand for 20%, and warranty for 15%. Within price, you can see how much utility drops between £19.99 and £29.99 versus between £29.99 and £39.99, which tells you where the real price sensitivity sits.
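The arithmetic behind those importance percentages is worth seeing once. A common approach is the range method: each attribute's importance is its utility range (best level minus worst level) as a share of the total range across all attributes. The part-worth values below are illustrative, chosen to roughly reproduce the 40/25/20/15 split described above, not taken from any real study.

```python
# Hypothetical part-worth utilities per level (illustrative values only).
part_worths = {
    "price":    {"£19.99": 0.9, "£29.99": 0.5, "£39.99": -1.4},
    "delivery": {"next day": 0.7, "3-5 days": -0.7},
    "brand":    {"premium": 0.6, "mid-tier": -0.6},
    "warranty": {"3 years": 0.45, "1 year": -0.45},
}

def attribute_importance(part_worths):
    """Range method: importance = attribute's utility range / total range."""
    ranges = {attr: max(u.values()) - min(u.values())
              for attr, u in part_worths.items()}
    total = sum(ranges.values())
    return {attr: round(100 * r / total, 1) for attr, r in ranges.items()}

print(attribute_importance(part_worths))
```

Note how the price part-worths encode where sensitivity sits: the drop from £19.99 to £29.99 (0.4 utility points) is far smaller than the drop from £29.99 to £39.99 (1.9 points), which is exactly the kind of asymmetry the prose above describes.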

Most modern conjoint platforms also allow you to run market simulations using the part-worth data. You can model hypothetical product configurations and estimate the share of preference each would capture against a defined competitive set. This is where the commercial value of conjoint becomes most tangible.
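A simulation of this kind can be sketched with the logit share rule, where each product's share of preference is proportional to the exponential of its total utility. This is one of the standard simulation rules (the first-choice, winner-takes-all rule is the common alternative); the part-worths and the competitive scenario below are hypothetical.

```python
import math

# Hypothetical part-worth utilities per level (illustrative values only).
part_worths = {
    "price":    {"£19.99": 0.9, "£29.99": 0.5, "£39.99": -1.4},
    "delivery": {"next day": 0.7, "3-5 days": -0.7},
    "brand":    {"premium": 0.6, "mid-tier": -0.6},
    "warranty": {"3 years": 0.45, "1 year": -0.45},
}

def utility(profile):
    """Total utility of a profile = sum of its levels' part-worths."""
    return sum(part_worths[attr][level] for attr, level in profile.items())

def share_of_preference(scenario):
    """Logit rule: share proportional to exp(utility)."""
    exps = {name: math.exp(utility(p)) for name, p in scenario.items()}
    total = sum(exps.values())
    return {name: round(100 * e / total, 1) for name, e in exps.items()}

scenario = {
    "Our product":  {"price": "£29.99", "delivery": "next day",
                     "brand": "mid-tier", "warranty": "3 years"},
    "Competitor A": {"price": "£19.99", "delivery": "3-5 days",
                     "brand": "premium", "warranty": "1 year"},
}
print(share_of_preference(scenario))
```

Swapping a level in the scenario (a competitor price cut, say) and re-running the simulation is the whole mechanic behind the competitive scenario modelling discussed later in the piece.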

The Main Types of Conjoint and When to Use Each

There are several variants of conjoint analysis in common use. They differ in how choice tasks are structured and what statistical methods are used to analyse the results.

Choice-based conjoint (CBC) is the dominant format for most commercial research. Respondents choose between two or more complete product profiles in each task, mirroring how purchase decisions actually happen in a retail or online environment. The statistical approach (typically hierarchical Bayes estimation) allows you to model individual-level preferences even from relatively small samples, which makes CBC both flexible and powerful.

Adaptive conjoint analysis (ACA) was designed for studies with a large number of attributes. The survey adapts in real time based on earlier responses, focusing subsequent questions on the attributes that appear most relevant to each individual respondent. This reduces respondent fatigue and keeps the task manageable when you are testing eight or more attributes. The trade-off is that ACA is less suited to price sensitivity analysis than CBC.

MaxDiff (maximum difference scaling) is sometimes grouped with conjoint methods, though technically it is a separate technique. Respondents identify the most and least preferred items from a set, rather than choosing between full product profiles. MaxDiff is excellent for prioritising a long list of features or messages, but it does not allow you to model the interaction effects between attributes the way CBC does.
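The simplest way to score MaxDiff data is a count-based best-worst score: how often an item was picked as best, minus how often it was picked as worst, divided by how often it was shown. Commercial tools usually fit a logit model instead, but counts give a reasonable first look. The response data below is hypothetical.

```python
from collections import Counter

# Hypothetical MaxDiff responses: in each task the respondent marks one item
# "best" and one "worst" out of the set shown.
responses = [
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "D"},
    {"shown": ["A", "C", "E", "F"], "best": "A", "worst": "F"},
    {"shown": ["B", "D", "E", "F"], "best": "E", "worst": "D"},
    {"shown": ["A", "B", "E", "F"], "best": "E", "worst": "B"},
]

def bw_scores(responses):
    """Count-based score: (times best - times worst) / times shown.
    Ranges from +1 (always best when shown) to -1 (always worst)."""
    best, worst, shown = Counter(), Counter(), Counter()
    for r in responses:
        best[r["best"]] += 1
        worst[r["worst"]] += 1
        shown.update(r["shown"])
    return {item: round((best[item] - worst[item]) / shown[item], 2)
            for item in shown}

print(bw_scores(responses))
```

The output is a simple priority ranking of the items, which is exactly what MaxDiff is good for; what it cannot give you is the between-attribute trade-off structure that CBC produces.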

Menu-based conjoint is a more recent variant designed for categories where customers configure their own purchase from a set of components, such as software packages, insurance products, or restaurant orders. It is technically more complex but produces highly realistic data for those categories.

For most marketing and product strategy applications, CBC is the right starting point. It is well understood, widely supported by research platforms, and produces output that is directly actionable for pricing and positioning decisions.

Where Conjoint Analysis Earns Its Place in Strategy

I have seen conjoint used well and used badly. The difference is almost always about whether it was deployed to answer a real business question or to add methodological weight to a decision that had already been made.

The situations where conjoint genuinely earns its place are fairly specific.

Pricing decisions are the clearest use case. If you are launching a new product or reconsidering your pricing architecture, conjoint gives you something that no amount of competitor benchmarking or customer interviews can produce: a quantified estimate of willingness to pay at different price points, modelled against realistic product configurations. When I was working with a client in a highly competitive B2B software category, the conjoint data showed that a price increase of around 15% would cause minimal share-of-preference loss if accompanied by a specific service-level improvement. That is a commercially significant finding that you cannot get from a focus group.

Feature prioritisation for product development is the second major application. Engineering and product teams frequently face decisions about which features to build next. Conjoint gives those decisions a customer-grounded foundation. You are not choosing between features based on internal opinion or the loudest voice in the room. You are choosing based on quantified evidence of what customers will actually value in context.

Positioning and messaging strategy is a less obvious but equally valid use. Conjoint can test whether your brand name, your quality tier, your sustainability credentials, or your guarantee structure meaningfully moves purchase preference. When a positioning attribute shows up with high importance in a conjoint study, it tells you something useful about where to concentrate your communications effort.

Competitive scenario modelling is the fourth area. Once you have part-worth data, you can simulate what happens to your share of preference if a competitor reduces their price, adds a feature, or changes their brand positioning. This kind of scenario planning is genuinely useful for strategy and, in my experience, rarely done with any rigour in most marketing teams. The Market Research and Competitive Intel hub covers related approaches to building this kind of structured intelligence into planning cycles.

Designing a Conjoint Study That Produces Usable Output

The quality of conjoint output depends almost entirely on the quality of the study design. This is where most projects go wrong, and where the investment of time at the front end pays the largest dividends.

Attribute selection is the first critical decision. You should only include attributes that genuinely vary in your market and that customers could plausibly notice and respond to. Including too many attributes increases respondent fatigue and dilutes the signal. Including irrelevant attributes wastes the respondent’s attention and your analysis budget. A well-designed CBC study typically tests between four and seven attributes.

Level design matters just as much. The levels within each attribute need to span a realistic range. If your price levels are too close together, the study will not detect meaningful price sensitivity. If they are too far apart, you are testing scenarios that do not reflect the actual market. The levels also need to be believable in combination. Presenting a profile that combines the lowest price with the highest quality and the fastest delivery will confuse respondents and corrupt the data.
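In practice, implausible combinations are handled with prohibitions: rules that remove unbelievable profiles before fieldwork. A minimal sketch, with hypothetical attributes and one prohibition of the "lowest price, best everything" profile described above (note that heavy use of prohibitions degrades the statistical efficiency of the design, so they should be used sparingly):

```python
import itertools

# Hypothetical attributes for the sketch.
attributes = {
    "price": ["£19.99", "£29.99", "£39.99"],
    "quality": ["standard", "premium"],
    "delivery": ["next day", "3-5 days"],
}

def implausible(profile):
    """Prohibit the 'too good to be true' combination respondents won't believe."""
    return (profile["price"] == "£19.99"
            and profile["quality"] == "premium"
            and profile["delivery"] == "next day")

profiles = [dict(zip(attributes, c))
            for c in itertools.product(*attributes.values())]
fieldable = [p for p in profiles if not implausible(p)]
print(len(profiles), len(fieldable))  # 12 profiles, 11 after the prohibition
```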

Sample design deserves careful thought. Conjoint is not a qualitative method. You need adequate sample sizes to produce stable part-worth estimates, and if you want to segment the data by customer type, you need enough respondents within each segment to make the sub-group analysis meaningful. The exact numbers depend on the study design, but a CBC study with five attributes and three levels each typically requires at least 150 to 200 respondents for reliable aggregate results.

Respondent qualification is often underestimated. Conjoint data is only useful if the people completing the survey are actual or realistic potential buyers in the category. Recruiting broadly to hit a sample size target, and then filtering in analysis, produces weaker results than recruiting tightly from the outset.

One thing I always push for is a qualitative phase before the conjoint design is finalised. Even a handful of depth interviews or a small focus group will surface attributes and language that you would not have thought to include, and will prevent you from fielding a beautifully designed study built around the wrong questions. The six marketing questions framework from Copyblogger is a useful reference for thinking about what you actually need to understand before you design any research instrument.

Reading and Applying Conjoint Output

The outputs from a conjoint study are more nuanced than most research deliverables, and they require some care in interpretation.

Attribute importance scores tell you the relative weight each attribute carries in the overall purchase decision. These are expressed as percentages that sum to 100. An attribute with 35% importance is more than twice as influential as one with 15% importance. This is useful for prioritisation, but it is an average across the sample. Segment-level analysis often reveals that different customer groups have very different importance structures, which has direct implications for targeting and product architecture.

Part-worth utilities tell you the value associated with each level within an attribute. A positive utility means the level increases purchase preference; a negative utility means it decreases it. The magnitude of the difference between levels within an attribute tells you how much it matters which level you offer. A large utility gap between price levels indicates high price sensitivity. A small gap suggests customers are relatively indifferent to price within that range.

Market simulations are where the output becomes most directly useful for commercial decisions. Using the part-worth data, you can model any combination of attributes and estimate the share of preference it would capture against a defined competitive set. This allows you to test product configurations, pricing scenarios, and competitive responses before committing to them.

One caution: share of preference in a conjoint simulation is not the same as market share. It assumes that all the products in the simulation are equally available and equally visible to all respondents. Real markets are messier than that. Distribution, brand awareness, and shelf presence all affect actual purchase behaviour in ways that conjoint does not capture. Treat the simulation outputs as directionally correct estimates, not precise forecasts.

The Limitations Worth Being Honest About

Conjoint analysis is a genuinely useful method, but it is not magic, and treating it as the final word on customer preference is a mistake I have seen made more than once.

The method assumes that the attributes you include are the right ones. If the real driver of purchase behaviour in your category is something you did not think to test, the conjoint will not surface it. This is why the qualitative groundwork matters so much.

Conjoint also struggles with attributes that are difficult to describe in a survey context. Taste, texture, brand experience, and emotional resonance are hard to operationalise as levels in a choice task. Categories where these factors dominate, such as luxury goods or experiential services, tend to produce conjoint data that underestimates their importance relative to more tangible attributes.

Social desirability bias can still creep in. Respondents know they are being observed, and in categories where there is a normatively correct answer (sustainability credentials, ethical sourcing, accessibility features), stated preferences in conjoint studies tend to overstate actual willingness to pay for those attributes. The gap between conjoint preference and real purchase behaviour is consistently larger in these categories than in more neutral ones.

Finally, conjoint is a snapshot. It captures preferences at a point in time, in a market context that will change. Running conjoint once and treating the output as a permanent truth about your customers is a mistake. Markets shift, competitors move, and customer expectations evolve. The findings have a shelf life.

None of these limitations mean conjoint is not worth doing. They mean it should be used as one input into a broader research and intelligence process, not as a substitute for one. If you are building that kind of structured approach to market intelligence, the Market Research and Competitive Intel hub covers the full range of methods and how they fit together.

Conjoint Analysis in Practice: What Good Looks Like

The best conjoint projects I have been involved in share a few characteristics that are worth naming.

They start with a clear commercial question. Not “let’s understand our customers better” but “we are deciding between three pricing models and we need to know which one maximises revenue without destroying volume.” The specificity of the question shapes every design decision that follows.

They involve the right stakeholders in the design phase. Product, commercial, and marketing people all have different hypotheses about what drives customer choice. Getting those hypotheses on the table before the study is designed ensures the research actually tests the things the business needs to know, rather than the things the research team assumed were important.

They are followed by a clear decision. The output of a conjoint study should change something: a price point, a feature roadmap, a positioning decision, a go-to-market configuration. If the research is commissioned, delivered, and then filed without influencing a decision, something went wrong, either in the design, the communication of findings, or the organisational culture around evidence-based decision making.

I spent a period judging the Effie Awards, where marketing effectiveness is the sole criterion. The campaigns that stood out were almost always built on a clear understanding of what customers actually valued, not what they said they valued. The ones that struggled were typically built on assumptions that sounded plausible but had never been tested against real trade-off behaviour. Conjoint is one of the more reliable tools for closing that gap.

There is a version of this that applies to digital marketing too. Understanding what your audience actually responds to, rather than what you assume they care about, is the foundation of effective digital advertising. Semrush’s overview of digital advertising covers the channel landscape, but the strategic inputs, including what messages and offers will resonate, come from research like this.

Similarly, when you are thinking about how to communicate your product’s value across content and social channels, understanding the attribute hierarchy from your conjoint data tells you what to lead with. If your conjoint shows that price is not the primary driver in your category but quality assurance is, that is a content and messaging signal as much as a product signal. Buffer’s research on LinkedIn posting frequency is a useful tactical reference, but the strategic question of what to post is answered by understanding what your audience values, which is exactly what conjoint is built to reveal.

The method is not new. It has been used in academic and commercial research for decades. What has changed is the accessibility of the tooling. Platforms like Qualtrics, Sawtooth Software, and a growing number of specialist research tools have made it possible to design and run a reasonably sophisticated conjoint study without a PhD in psychometrics. That is genuinely useful, but it also means more studies are being run by people who do not fully understand the design constraints, which is where the quality problems creep in.

If you are commissioning conjoint research from an agency or research supplier, ask them to walk you through the attribute and level selection rationale before fieldwork begins. If they cannot explain why they chose those attributes and those levels in plain commercial terms, the study design probably needs more work.

And if you are building the capability in-house, start with a simpler design than you think you need. A well-executed four-attribute CBC study will produce more useful output than a poorly executed eight-attribute study with ambitious segmentation goals. Complexity is not a proxy for rigour.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is conjoint analysis used for in marketing?
Conjoint analysis is used to quantify how customers make trade-offs between product features, price, and other attributes when making purchase decisions. In marketing, it is most commonly applied to pricing decisions, feature prioritisation, positioning strategy, and competitive scenario modelling. It produces more reliable data than direct preference questions because it forces respondents to make realistic choices rather than rating everything as important.
How is conjoint analysis different from a standard customer survey?
A standard survey asks customers to rate or rank features in isolation, which consistently produces inflated importance scores because there is no cost to saying everything matters. Conjoint analysis presents customers with realistic choice tasks between complete product profiles, forcing them to make the same kind of trade-offs they face in actual purchase decisions. The preferences inferred from these choices are substantially more predictive of real behaviour than stated preferences from direct questions.
What is choice-based conjoint and why is it the most common format?
Choice-based conjoint (CBC) presents respondents with sets of two or more product profiles and asks them to choose their preferred option, mirroring how purchase decisions happen in real retail or online environments. It is the most widely used conjoint format because the choice task is intuitive for respondents, the statistical methods (particularly hierarchical Bayes estimation) allow individual-level preference modelling even from moderate sample sizes, and the output supports market simulation directly.
How many respondents do you need for a conjoint study?
Sample size requirements depend on the study design, the number of attributes and levels, and whether you need segment-level analysis. As a general guideline, a choice-based conjoint study with four to six attributes typically requires a minimum of 150 to 200 respondents for reliable aggregate results. If you want to analyse sub-groups, such as different customer segments or geographies, you need sufficient respondents within each sub-group to produce stable estimates, which usually means 100 or more per segment.
What are the main limitations of conjoint analysis?
Conjoint analysis only tests the attributes you include, so if the real driver of purchase behaviour was not in the study design, the method will not surface it. It also struggles with attributes that are difficult to describe in a survey context, such as taste or emotional brand experience, and tends to overstate willingness to pay for socially desirable attributes like sustainability. Market simulations from conjoint data estimate share of preference, not actual market share, because they do not account for distribution, visibility, or awareness differences between products. The findings also reflect preferences at a point in time and have a limited shelf life as markets evolve.