Conjoint Analysis: What Customers Value vs. What They Say They Do

Conjoint analysis is a market research method that measures how people make trade-offs between product features, revealing which attributes genuinely drive purchase decisions rather than which ones respondents claim to care about. It works by presenting people with realistic choices between different product configurations and inferring preference weights from the pattern of decisions they make.

The reason it matters commercially is simple: customers are poor reporters of their own motivations. Ask someone what they want in a product and they will tell you everything. Ask them to choose between two real options with real constraints and you learn what they actually value. That gap between stated preference and revealed preference is where most product and pricing decisions go wrong.

Key Takeaways

  • Conjoint analysis reveals trade-off preferences rather than asking people to rate features in isolation, which produces far more commercially reliable data.
  • The method has three main variants: choice-based conjoint, adaptive conjoint, and max-diff. Each suits different research objectives and budget levels.
  • Willingness-to-pay estimates derived from conjoint are more actionable than price sensitivity surveys, but they still require commercial calibration before becoming pricing decisions.
  • The quality of conjoint output depends almost entirely on how attributes and levels are defined upfront. Poor stimulus design produces precise but misleading results.
  • Conjoint works best as one input into a broader research and commercial framework, not as a standalone oracle for product or pricing strategy.

Why Traditional Feature Research Produces Unreliable Data

The standard approach to understanding what customers want is to ask them directly. Run a survey. Present a list of features. Ask respondents to rate each one on a five-point scale from “not important” to “extremely important.” Tabulate the results. Present the findings in a deck with a bar chart showing that customers value quality, reliability, and price above all else.

This approach has a fundamental flaw. When you ask people to rate features in isolation, they rate everything highly because there is no cost to doing so. Nobody ticks “not important” next to product reliability. Nobody says price does not matter. The result is a dataset in which everything appears important, which means nothing is actually differentiated, and the research cannot tell you which features to prioritise, which to cut, or how much customers would actually pay for any of them.

I have sat through dozens of research readouts built on this kind of data. The findings are always the same: customers want high quality at a low price with excellent service and fast delivery. True, technically. Completely useless for decision-making. The moment you have to make a real product trade-off, the data offers nothing.

Conjoint sidesteps this problem by forcing respondents to behave the way they behave in real markets. Instead of rating features, they choose between complete product profiles. To prefer one profile over another, they have to implicitly sacrifice something. That sacrifice is where the real preference data lives.

If you are building a broader picture of how customers make decisions, conjoint sits within a wider research and intelligence discipline. The Market Research and Competitive Intel hub covers the full range of methods, from audience segmentation to competitive positioning, that conjoint analysis feeds into.

How Conjoint Analysis Actually Works

The mechanics are straightforward once you understand the logic. You define a set of product attributes and the levels each attribute can take. For a streaming service, attributes might include price, content library size, download capability, and simultaneous streams. Each attribute has a range of levels: price might take three levels between £5 and £15 per month, content library might run from “standard” to “premium” to “extensive,” and so on.

The research tool then generates a series of product profiles by combining these attributes and levels according to an experimental design. Respondents see pairs or sets of profiles and choose their preferred option. They do this repeatedly across many choice tasks. The pattern of their choices, processed through statistical modelling, produces a set of utility scores called part-worths. These scores represent how much each attribute level contributes to overall product preference.
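To make the profile-generation step concrete, here is a minimal sketch. The attributes and levels below are hypothetical, and a full factorial is shown only for illustration; real survey tools draw a fractional, statistically efficient subset so each respondent sees a manageable number of tasks:

```python
from itertools import product

# Hypothetical attribute grid for a streaming service (illustrative values only).
attributes = {
    "price": ["£5", "£10", "£15"],
    "library": ["standard", "premium", "extensive"],
    "downloads": ["no", "yes"],
    "streams": ["1", "2", "4"],
}

# A full factorial enumerates every combination; in practice the software
# selects a fractional (e.g. D-efficient) subset of these for the choice tasks.
profiles = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(len(profiles))  # 3 * 3 * 2 * 3 = 54 candidate profiles
```

Even this small grid produces 54 candidate profiles, which is why experimental design, rather than showing every combination, is essential.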

From part-worths, you can calculate several commercially useful outputs. Relative importance scores tell you which attributes drive the most preference variation across your respondent sample. Willingness-to-pay estimates, derived by comparing the utility of a feature against the utility of price levels, give you a monetary value for each attribute. Market simulation models let you test how different product configurations would perform against competitive alternatives.
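Those calculations can be sketched in a few lines. All the part-worth numbers below are invented for the example; the mechanics, taking each attribute's utility range as its relative importance and converting utility gains into pounds via the price slope, are the standard ones:

```python
# Hypothetical part-worth utilities from a fitted model (illustrative numbers).
# Within each attribute, only the differences between levels are meaningful.
part_worths = {
    "price":     {"£5": 0.9, "£10": 0.3, "£15": -1.2},
    "library":   {"standard": -0.5, "premium": 0.2, "extensive": 0.8},
    "downloads": {"no": -0.3, "yes": 0.3},
}

# Relative importance: each attribute's utility range as a share of the total range.
ranges = {a: max(lv.values()) - min(lv.values()) for a, lv in part_worths.items()}
total = sum(ranges.values())
importance = {a: r / total for a, r in ranges.items()}

# Willingness to pay: convert a utility gain into pounds using the price slope.
# Here £5 -> £15 (a £10 span) costs 0.9 - (-1.2) = 2.1 utils, i.e. 0.21 utils per £1.
utils_per_pound = (part_worths["price"]["£5"] - part_worths["price"]["£15"]) / 10
download_gain = part_worths["downloads"]["yes"] - part_worths["downloads"]["no"]
wtp_downloads = download_gain / utils_per_pound  # 0.6 / 0.21 ≈ £2.86
```

With these invented figures, price accounts for just over half of preference variation, and offline downloads carry a willingness-to-pay of roughly £2.86, subject to all the calibration caveats discussed later.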

The statistical engine running underneath this is typically a hierarchical Bayesian model, which estimates preference parameters at the individual respondent level before aggregating them. This matters because it means you can segment the output by respondent characteristics and identify groups with meaningfully different preference structures, which is often where the most commercially interesting findings sit.

The Three Main Variants and When to Use Each

Conjoint is not a single method. There are three variants that get used in commercial practice, and choosing the wrong one for your research objective will cost you either money, data quality, or both.

Choice-based conjoint (CBC) is the most widely used format. Respondents see sets of three or four product profiles and choose their preferred option, sometimes with a “none of the above” option included. It closely mirrors how people actually shop, which makes the data ecologically valid. It handles price attributes well, making it the default choice for pricing research. The main limitation is that it requires a reasonably large sample, typically 150 to 200 respondents as a minimum for stable aggregate estimates, and considerably more if you need reliable results at the segment level.

Adaptive conjoint analysis (ACA) adjusts which choice tasks a respondent sees based on their earlier answers. If the system detects that a respondent cares little about a particular attribute, it stops asking questions about it and focuses on the attributes that appear to matter. This makes the survey shorter and more engaging, and it allows you to test a larger number of attributes than CBC can handle efficiently. The trade-off is that it is less suited to price sensitivity measurement and the adaptive mechanism can sometimes introduce bias if the early screening questions are poorly designed.

Maximum difference scaling (max-diff) is technically a variant of conjoint thinking rather than conjoint proper, but it gets grouped here because it solves a similar problem. Respondents see sets of items and identify which they find most and least important or appealing. It is excellent for prioritising long lists of features, messages, or brand attributes where you need clear discrimination between items. It does not produce willingness-to-pay estimates, so it is not a substitute for CBC in pricing work.

In practice, the choice between these comes down to three questions: how many attributes do you need to test, do you need pricing data, and what is your sample budget? If you are testing fewer than eight attributes and need pricing insight, CBC. If you have a long list of potential features to prioritise and a limited budget, max-diff. If you need both breadth and pricing data and have the budget for a larger sample, ACA or a hybrid design.

Designing the Attributes and Levels: Where Most Projects Fail

The quality of conjoint output is almost entirely determined by decisions made before a single respondent sees the survey. Attribute and level design is where projects succeed or fail, and it is where I have seen the most expensive mistakes made.

The first error is including too many attributes. Respondents can only process a limited amount of information in each choice task. Beyond eight or nine attributes, cognitive load degrades data quality as people start applying shortcuts rather than genuinely evaluating each profile. If your product has fifteen potentially relevant features, you need to do the work of reducing that list before the conjoint, not assume the conjoint will sort it out for you. Max-diff is useful here as a pre-screening tool.

The second error is defining levels that are not credible or not distinguishable. If your price levels are £10, £11, and £12 for a product that typically sells for £200, respondents will not treat the variation as meaningful and the price sensitivity data will be garbage. Levels need to span the realistic range of what the market could plausibly offer, not the narrow range of what you are currently considering internally.

The third error is writing attribute descriptions that are not equivalent in length or framing. If one attribute is described in two words and another in two sentences, respondents will weight them differently based on how much text they see, not based on the underlying content. Every attribute description should be roughly equivalent in length and written in plain, neutral language.

I ran a conjoint project early in my career for a financial services client who insisted on including an attribute called “brand heritage.” They wrote a three-sentence description of what that meant. Every other attribute was a single line. The heritage attribute came out as the most important driver in the study. It was not because customers genuinely valued heritage more than price. It was because the description was three times longer than everything else. We had to redo the fieldwork. The lesson stayed with me: stimulus design is not a detail. It is the whole game.

Using Conjoint for Pricing Decisions

Pricing is where conjoint analysis delivers its highest commercial value, and also where it is most frequently misapplied. The method produces willingness-to-pay estimates that feel precise and authoritative. That precision can be misleading if you treat the output as a direct instruction rather than an informed starting point.

Conjoint-derived willingness-to-pay tells you how much utility loss respondents are willing to accept in the form of a price increase in order to gain a particular feature. It is a relative measure, not an absolute one. If the model says customers are willing to pay an additional £8 per month for offline download capability, that does not mean you should price it at £8. It means that in the context of the other attributes tested, offline downloads were valued at approximately that level by the respondent sample at the time of the survey.

Several factors compress real-world willingness-to-pay below conjoint estimates. Respondents in a survey face no real financial consequence for their choices. Switching costs, inertia, and competitor pricing all apply in the market but not in the research environment. The gap between stated and revealed willingness-to-pay tends to be meaningful, particularly for premium features in competitive categories.

The right way to use conjoint pricing data is as one input into a pricing decision alongside competitive benchmarking, cost structure analysis, and commercial modelling. It tells you the shape of the value curve and the relative importance of different price points. It does not replace the commercial judgement required to set an actual price.

One thing conjoint does exceptionally well in pricing contexts is identify price cliffs: points in the price range where preference drops sharply rather than gradually. These are not always obvious from competitive data alone, and finding them before you launch is considerably cheaper than discovering them through a failed pricing experiment in market.
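Finding a cliff in the output is mechanically simple: it shows up as a drop between adjacent price points that is much larger than the drops around it. A minimal sketch, using invented price part-worths across a hypothetical price ladder:

```python
# Hypothetical price part-worths across a finer price ladder (illustrative numbers).
price_utils = {5: 1.0, 7: 0.9, 9: 0.75, 11: 0.1, 13: -0.05, 15: -0.2}

# Compute the utility drop between each pair of adjacent price points,
# then flag the largest one as the candidate cliff.
points = sorted(price_utils)
drops = {
    (points[i], points[i + 1]): price_utils[points[i]] - price_utils[points[i + 1]]
    for i in range(len(points) - 1)
}
cliff = max(drops, key=drops.get)
print(cliff, round(drops[cliff], 2))  # (9, 11) 0.65, preference falls sharply past £9
```

In this invented example, preference erodes gently up to £9 and then falls off a cliff between £9 and £11, exactly the kind of threshold that is cheap to find in research and expensive to find in market.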

Market Simulation: The Most Underused Output

Most conjoint projects produce part-worth utilities and importance scores, present them in a deck, and stop there. The market simulation capability, which is arguably the most commercially useful output the method produces, gets skipped because it requires more time and a clearer brief about what scenarios to model.

A market simulator takes the utility estimates from your conjoint model and lets you test hypothetical product configurations against a defined competitive set. You define what the current market looks like in terms of product profiles, enter your proposed new configuration, and the model estimates what share of preference your product would attract. Change a feature, adjust a price point, or alter a competitor’s assumed offering, and the simulation updates accordingly.
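The share-of-preference calculation commonly used in these simulators is a multinomial logit rule: exponentiate each product's total utility and normalise. A minimal sketch, with invented total utilities for a hypothetical competitive set:

```python
import math

# Hypothetical total utilities for each product in a defined competitive set,
# each summed from the part-worths of its attribute levels (illustrative numbers).
utilities = {"our_product": 1.4, "competitor_a": 1.1, "competitor_b": 0.6}

def share_of_preference(utils):
    """Multinomial logit rule: exponentiate utilities and normalise to shares."""
    expu = {k: math.exp(v) for k, v in utils.items()}
    total = sum(expu.values())
    return {k: v / total for k, v in expu.items()}

shares = share_of_preference(utilities)

# Scenario testing: improve our offer by one feature level, say +0.3 utils,
# and re-run to see how share of preference shifts.
scenario = share_of_preference(dict(utilities, our_product=1.4 + 0.3))
```

Real simulators estimate this at the individual respondent level and aggregate, which behaves better than the pooled version shown here, but the underlying logic is the same: change a profile, re-score, compare shares.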

This is where conjoint moves from being a research exercise to being a strategic planning tool. You can test whether adding a feature at a given price point generates enough incremental preference to justify the development cost. You can model how a competitor price cut would affect your share of preference and at what point you would need to respond. You can identify which product configuration maximises preference among a specific segment rather than the total sample.

The simulation is not a demand forecast. It models share of preference among the respondent sample, which is a different thing from market share in the real world. But as a directional tool for evaluating product and pricing scenarios before committing resources, it is more rigorous than most of the alternatives organisations use, which often amount to internal debate and gut feel dressed up as strategy.

When I was growing an agency from around 20 people to over 100, one of the disciplines I tried to build into commercial decisions was the habit of testing scenarios before committing. Not always with conjoint, since the method is expensive and not always appropriate, but with some structured framework for understanding trade-offs before spending money. The instinct behind conjoint simulation is sound regardless of whether you are running a formal study: model the scenarios, understand the trade-offs, then decide.

Segmentation from Conjoint Data

Aggregate conjoint results describe the average respondent, who often does not exist in your actual customer base. The more commercially interesting work happens when you segment the preference data and identify groups with meaningfully different value structures.

There are two approaches to conjoint segmentation. The first is a priori segmentation, where you split the sample by pre-defined characteristics such as demographic groups, usage segments, or customer versus prospect status, and compare the part-worth utilities across groups. This is straightforward and interpretable, and it is usually the right starting point.

The second is latent class segmentation, where you use the utility estimates themselves to identify clusters of respondents with similar preference structures, without pre-defining what those clusters should look like. This can surface segments that are not visible through demographic cuts alone. A segment defined by price sensitivity is often more commercially useful than one defined by age bracket, because it maps more directly to how you would differentiate your offer.
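Latent class analysis proper fits a finite mixture model to the choice data, which is well beyond a blog snippet. As a deliberately simplified stand-in for the idea, here is a toy two-cluster split on a single hypothetical score, each respondent's price importance from an individual-level model, showing how preference-based groups can emerge without pre-defining them:

```python
from statistics import mean

# Hypothetical individual-level price importance scores (one per respondent).
price_importance = [0.62, 0.58, 0.65, 0.21, 0.18, 0.25, 0.60, 0.19]

def two_means(xs, iters=20):
    """Toy 1-D two-cluster k-means: assign each point to the nearer centre,
    recompute centres, repeat. A crude stand-in for latent class clustering."""
    c1, c2 = min(xs), max(xs)
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = mean(g1), mean(g2)
    return sorted(g1), sorted(g2)

# Low price importance -> feature-led segment; high -> price-led segment.
feature_led, price_led = two_means(price_importance)
```

With these invented scores, the respondents fall cleanly into a price-driven group and a feature-driven group, which is exactly the kind of structure that supports a tiered product architecture.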

The practical application of conjoint segmentation is in product portfolio decisions and targeting strategy. If you identify two segments with genuinely different preference structures, one that values premium features and is relatively price-insensitive, and another that is highly price-driven and indifferent to feature differentiation, you have the basis for a tiered product architecture. You also have a targeting brief: find more of the first segment if margin is your priority, or find more of the second if volume is.

What you should not do is over-segment. Conjoint data can be sliced many ways, and with enough cuts you will always find interesting-looking differences. The test of a useful segment is whether it is large enough to act on, accessible through your existing channels, and stable enough to build a product or pricing strategy around. Segments that pass those tests are rare. Most conjoint segmentation work produces two or three actionable groups, not eight.

What Conjoint Cannot Tell You

Conjoint analysis has real limits, and understanding them is as important as understanding the method itself. The research community has a tendency to present tools as more complete than they are, and conjoint is no exception.

The method cannot account for attributes that respondents are not aware of or cannot evaluate in a survey context. If a product benefit is experiential, meaning it only becomes apparent after extended use, conjoint will undervalue it because respondents cannot accurately anticipate how much they will care about it. Durability, reliability over time, and quality of customer service are all systematically underweighted in conjoint studies relative to their actual impact on satisfaction and retention.

It also cannot capture the effect of brand on choice behaviour with much precision. You can include brand as an attribute, but the way brand operates in real purchase decisions, through familiarity, trust, and associations built over years, is not well represented by a label in a survey profile. Conjoint studies that treat brand as just another attribute tend to underestimate its influence, particularly in categories where brand carries significant risk-reduction value.

The method assumes that the attributes you include are the relevant ones. If there is a feature or dimension of value that you did not think to include in the design, the model will not surface it. This is why qualitative research before conjoint is not optional. You need to understand the category from the customer’s perspective before you can design a conjoint study that reflects how they actually think about it. Skipping the qualitative phase to save budget is one of the most reliable ways to produce conjoint findings that are precise, expensive, and wrong.

Having spent time judging the Effie Awards and reviewing the research that sits behind effective marketing campaigns, one pattern I noticed repeatedly was that the strongest work was grounded in genuine customer insight, not just survey data. Conjoint can be part of that foundation, but only if it is designed around how customers actually think about a category, not how an internal team assumes they think about it.

There is a useful piece from Copyblogger on the danger of over-relying on borrowed frameworks that applies here too. Conjoint is a framework, and like any framework, it reflects the assumptions built into it. Use it to sharpen your thinking, not to replace it.

Running a Conjoint Study: The Practical Process

If you are commissioning a conjoint study for the first time, the process has five stages that need to be executed in sequence. Rushing any of them degrades the output of everything that follows.

Stage one: qualitative grounding. Before defining attributes and levels, run exploratory qualitative research with your target audience. The goal is to understand the language they use to describe the category, the trade-offs they are already making, and the attributes that matter to them but might not be obvious from internal discussion. This can be focus groups, depth interviews, or ethnographic observation depending on the category and budget.

Stage two: attribute and level design. Translate the qualitative findings into a defined set of attributes and levels. Apply the design principles discussed earlier: limit to eight attributes or fewer, ensure levels span the credible market range, write descriptions of equivalent length and neutrality. Have someone outside the project team review the stimulus before fieldwork. Fresh eyes catch framing problems that familiarity blinds you to.

Stage three: survey design and piloting. Build the choice tasks according to an appropriate experimental design, typically a D-efficient design generated by the survey software. Run a pilot with 20 to 30 respondents before full fieldwork. Check completion rates, time-on-task, and whether any choice tasks are producing near-unanimous responses, which suggests the levels are not creating genuine trade-offs.

Stage four: fieldwork and data quality. Recruit respondents from your actual target population, not a generic panel. Apply quality controls: remove respondents who complete the survey in implausibly short times, check for straight-lining in any rating questions, and examine whether individual-level utility estimates are internally consistent. Poor-quality respondents in conjoint data are more damaging than in standard surveys because the modelling amplifies individual-level noise.
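The first two of those quality controls, speeders and straight-liners, are easy to automate. A minimal sketch, with hypothetical respondent records and a made-up time threshold (in practice the threshold is usually set relative to the pilot's median completion time):

```python
# Hypothetical respondent records: (id, completion_seconds, choices made).
respondents = [
    ("r1", 420, ["A", "B", "A", "C", "B"]),
    ("r2", 95,  ["A", "A", "A", "A", "A"]),   # speeder and straight-liner
    ("r3", 380, ["C", "B", "A", "B", "C"]),
]

MIN_SECONDS = 180  # illustrative cutoff, e.g. half the pilot's median time

def passes_quality(_id, seconds, choices):
    too_fast = seconds < MIN_SECONDS
    # "Straight-lining": picking the same option position in every task.
    straight_lined = len(set(choices)) == 1
    return not (too_fast or straight_lined)

clean = [r for r in respondents if passes_quality(*r)]
print([r[0] for r in clean])  # ['r1', 'r3']
```

The internal-consistency check on individual utilities requires the fitted model and is tool-specific, but these two filters alone catch a large share of the respondents who would otherwise add noise the modelling amplifies.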

Stage five: analysis and commercial translation. Run the utility estimation, produce importance scores, and build the market simulator. Then, critically, translate the statistical output into commercial decisions. Part-worth utilities are not a deliverable. The deliverable is a recommendation about which product configuration to build, which price point to test, or which segment to prioritise, supported by the data. If the analysis team cannot make that translation, the research budget has been spent on interesting numbers rather than useful insight.

Optimizely’s digital experience resources are a reminder that the technology available for testing and personalisation has advanced considerably, but the underlying question of what customers value has not changed. Conjoint addresses that question more rigorously than most alternatives.

For a broader view of how conjoint fits within the full landscape of market intelligence methods, the Market Research and Competitive Intel hub covers everything from customer segmentation to competitive analysis frameworks. Conjoint is a powerful tool in that toolkit, but it works best when it is connected to a wider research strategy rather than run in isolation.

When Conjoint Is Worth the Investment and When It Is Not

Conjoint analysis is not cheap. A well-designed study with adequate sample size, proper qualitative groundwork, and competent analysis will cost meaningful money. Before commissioning one, the question worth asking is whether the decision you are trying to inform is worth the cost of getting it right.

Conjoint is worth the investment when you are making a product architecture decision that will be costly to reverse, when you are setting a pricing structure for a new category or a significant repositioning, when you are allocating development resources across a large feature backlog and need a defensible prioritisation framework, or when you are entering a new market and need to understand how local preference structures differ from markets you know.

It is probably not worth the investment when you are testing minor variations of an existing product, when the decision will be made on financial grounds regardless of customer preference data, when you do not have the organisational capability to act on the findings, or when the category is so new that customers lack the reference points needed to make meaningful trade-offs in a survey context.

I have seen conjoint studies commissioned as a way of creating the appearance of rigour around a decision that had already been made internally. The research was used to validate a preferred answer rather than to genuinely inform the choice. That is an expensive way to produce a slide that says “research confirms our view.” If the decision is already made, save the budget. If it is genuinely open, conjoint can be one of the most commercially valuable research investments available.

The broader principle applies to research spending generally. Measurement and research budgets should be proportional to the value of the decisions they inform, and the decisions should actually be open to being changed by the findings. Both conditions need to be true for the investment to make sense. For a perspective on how Forrester approaches the value of integrated data, the underlying logic is similar: integration and rigour only pay off if the organisation is equipped to act on what the data reveals.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is conjoint analysis used for in marketing?
Conjoint analysis is used to understand how customers make trade-offs between product features, and to quantify how much each feature contributes to purchase preference. In marketing, it is most commonly applied to product design decisions, pricing strategy, and feature prioritisation. It produces more reliable data than direct preference surveys because it forces respondents to make realistic choices rather than rating everything in isolation.
How large a sample do you need for conjoint analysis?
For choice-based conjoint, a minimum of 150 to 200 respondents is typically required to produce stable aggregate estimates. If you intend to segment the data, you need enough respondents within each segment to support reliable estimates, usually at least 100 per segment. Adaptive conjoint can work with smaller samples because the design is more efficient, but the trade-off is reduced suitability for price sensitivity measurement.
What is the difference between conjoint analysis and a standard survey?
A standard survey asks respondents to rate or rank features directly, which produces data inflated by social desirability and the absence of real trade-offs. Conjoint analysis presents complete product profiles and asks respondents to choose between them, which forces implicit prioritisation. The statistical modelling then infers how much each feature contributed to those choices. The result is preference data that more closely reflects how people behave in real purchase situations.
Can conjoint analysis predict actual market share?
Conjoint market simulators estimate share of preference among the respondent sample, which is directionally useful but not equivalent to a market share forecast. Real market share is affected by distribution, brand awareness, in-store placement, and competitive dynamics that conjoint does not capture. The simulation is best used as a relative tool for comparing product configurations and pricing scenarios, not as an absolute prediction of commercial performance.
How much does conjoint analysis cost to commission?
The cost varies considerably depending on the complexity of the design, the size of the sample, the recruitment cost for the target audience, and the depth of analysis required. A straightforward choice-based conjoint study with a general consumer sample might cost from £15,000 to £30,000. Studies involving specialist audiences, large samples, or extensive market simulation work can cost significantly more. The cost should be weighed against the value of the decision being informed, not against a fixed research budget.
