Conversion Optimisation: Stop Testing Your Way to Mediocrity

Conversion optimisation is the discipline of systematically improving the percentage of visitors who complete a desired action, whether that’s a purchase, a sign-up, a quote request, or any other commercially meaningful outcome. Done well, it compounds the value of every pound you spend on traffic. Done poorly, it produces a library of inconclusive tests and a false sense of progress.

Most CRO programmes fall into the second category. Not because the tools are bad or the practitioners are incompetent, but because the underlying logic is flawed from the start. Testing is not a strategy. It’s a mechanism. Without a clear diagnosis of why conversion is failing, you’re just running experiments in the dark and hoping something lands.

Key Takeaways

  • Testing without diagnosis produces activity, not improvement. The experiment queue should follow the insight, not replace it.
  • Most conversion problems are not on the page. They’re upstream, in the offer, the audience targeting, or the expectation set by the ad.
  • Velocity is the wrong metric for a CRO programme. One well-diagnosed, high-confidence test beats ten shallow ones.
  • Personalisation and AI-driven optimisation tools can accelerate good CRO work, but they cannot substitute for it. A bad offer optimised faster is still a bad offer.
  • The organisations that get the most from CRO treat it as a commercial function, not a conversion rate function. The goal is revenue per visitor, not a prettier percentage.

Why “Test More” Is the Wrong Answer

There’s a version of CRO that gets sold to marketing teams as a kind of perpetual motion machine. Run enough tests, the story goes, and the wins will accumulate. Keep the testing velocity high. Build the culture of experimentation. The conversion rate will follow.

I’ve seen this play out at scale. When I was running iProspect, we had clients who were deeply invested in testing programmes. Sophisticated setups. Dedicated resource. Proper tooling. And some of them were running 30, 40 tests a quarter with almost nothing to show for it commercially. The win rate looked reasonable on paper, but the winning tests were moving decimal points, not revenue lines.

The problem wasn’t the testing. The problem was what they were testing. Button colours. Headline font weights. Form field labels. These are finishing touches on a product that hadn’t been properly diagnosed. If your offer is weak, your targeting is off, or your page is fundamentally misaligned with what your audience expects to find, no amount of micro-optimisation will save you.

This is the core tension in CRO: the things that are easiest to test are rarely the things that matter most. And the things that matter most, the offer, the positioning, the audience fit, the trust architecture of the page, are harder to isolate in a clean A/B test. So programmes default to the easy stuff and call it optimisation.

If you want a broader grounding in what conversion optimisation actually involves across the funnel, the CRO and Testing hub covers the full landscape, from diagnosis to measurement to programme design.

The Diagnosis Problem: What Are You Actually Fixing?

Before you write a test brief, you should be able to answer one question clearly: what specific behaviour are you trying to change, and why do you believe it’s happening?

Not “our conversion rate is 1.8% and the industry average is 3.2%.” That’s a symptom, not a diagnosis. Not “users are dropping off on the checkout page.” That’s an observation. A diagnosis sounds more like: “Users who arrive from paid social are abandoning the checkout at the payment step at a rate 40% higher than users from organic search. We believe this is because paid social traffic is arriving with lower purchase intent, and the checkout page provides no trust reassurance for a cold audience.”

That’s a testable hypothesis with a commercial rationale. Everything before it is data collection, not diagnosis.
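If you want to check a hypothesis like that before committing to a test, the segmentation is straightforward to pull from session-level data. Here is a minimal sketch in Python, assuming a hypothetical export with source, reached_payment and completed columns; your analytics platform's field names will differ:

```python
import pandas as pd

# Hypothetical export: one row per checkout session, with the traffic source
# and flags for whether the user reached and completed the payment step.
sessions = pd.read_csv("checkout_sessions.csv")  # assumed columns: source, reached_payment, completed

# Abandonment at the payment step, split by traffic source
at_payment = sessions[sessions["reached_payment"] == 1]
abandonment = (
    at_payment.groupby("source")["completed"]
    .agg(reached="count", completed="sum")
    .assign(abandon_rate=lambda d: 1 - d["completed"] / d["reached"])
    .sort_values("abandon_rate", ascending=False)
)
print(abandonment)
```

If paid social shows a materially higher abandonment rate than organic at this step, you have the "where". The qualitative work still has to supply the "why".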

Tools like Hotjar are genuinely useful here, not as a substitute for diagnosis, but as a way to observe behaviour that quantitative data can’t explain. Session recordings, heatmaps, and on-page surveys can surface friction you wouldn’t find in a funnel report. But they need to be interrogated, not just watched. The question isn’t “what are users doing?” It’s “what does this behaviour tell us about what they’re thinking?”

One of the most useful things I’ve done with a struggling ecommerce client was to sit with their customer service team for half a day and read through six months of pre-purchase enquiries. Not analytics. Not heatmaps. Just the questions real people were asking before they decided whether to buy. The patterns were obvious within an hour. Three objections kept appearing: uncertainty about delivery timescales, confusion about the returns policy, and a lack of clarity on whether a product would fit a specific use case. None of those were visible in the quantitative data. All three were addressable on the page. We fixed them before we ran a single test, and conversion improved without any experimentation at all.

The Offer Is the Optimisation

There’s an uncomfortable truth in CRO that doesn’t get enough airtime: if your offer isn’t right, the page can’t save you.

I remember a vendor pitch a few years back, a major holding company presenting an AI-driven personalised creative solution. The case study showed a 90% reduction in CPA and a claimed 3x lift in conversions. The room was impressed. I wasn’t. When I pushed on the baseline, it turned out they’d replaced genuinely poor creative with something marginally less poor, and the “AI” had mostly been doing audience segmentation that any decent media planner would have done manually. The performance improvement was real. The attribution to AI was not.

The same logic applies to CRO. If you take a weak offer and put it on a beautifully optimised page, you’ll get a slightly better conversion rate on a weak offer. The ceiling is set by the offer, not the page. This is why the most impactful CRO work often involves the product team, the pricing team, or the commercial team, not just the digital team. Changing the offer, restructuring the pricing, adding a guarantee, or repositioning the value proposition can do more for conversion than a year of page-level testing.

The Moz CRO strategy guide makes a similar point about the relationship between traffic quality and conversion potential. The page is the last step in a chain. Fixing the last step when the earlier steps are broken is rarely the most efficient use of effort.

What a Well-Structured CRO Programme Actually Looks Like

A programme worth running has three components that most programmes underinvest in: a diagnostic phase, a prioritisation framework, and a learning infrastructure. The testing itself is almost secondary.

Diagnostic phase. This is where you build the picture of where conversion is failing and why. Quantitative data tells you where. Qualitative research tells you why. Both are necessary. A funnel analysis that shows drop-off at the product page is useful. A user survey that reveals the primary reason for that drop-off is more useful. Together, they give you something to test against.

Prioritisation framework. Not all conversion problems are equal. A 15% drop-off on a page that sees 50,000 sessions a month is a different commercial problem from a 15% drop-off on a page that sees 500. Most CRO programmes prioritise by ease of testing rather than commercial impact. That’s backwards. The right framework weights potential revenue impact, confidence in the hypothesis, and ease of implementation, roughly in that order. The Crazy Egg CRO checklist is a reasonable starting point for structuring this kind of prioritisation, though any framework is better than none.
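To make the weighting concrete, here is a small scoring sketch. The weights and the 1-to-5 scales are illustrative assumptions rather than a prescribed framework; the point is that potential revenue impact should dominate the ordering:

```python
# Illustrative prioritisation scoring: revenue impact weighted ahead of
# confidence and ease. Weights and 1-5 scores are assumptions, not a standard.
WEIGHTS = {"revenue_impact": 0.5, "confidence": 0.3, "ease": 0.2}

candidates = [
    {"name": "Checkout trust signals for paid social",    "revenue_impact": 5, "confidence": 4, "ease": 3},
    {"name": "Homepage hero headline variants",           "revenue_impact": 2, "confidence": 2, "ease": 5},
    {"name": "Returns policy visibility on product page", "revenue_impact": 4, "confidence": 4, "ease": 4},
]

def score(test):
    # Weighted sum across the three criteria
    return sum(WEIGHTS[k] * test[k] for k in WEIGHTS)

for test in sorted(candidates, key=score, reverse=True):
    print(f"{score(test):.1f}  {test['name']}")
```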

Learning infrastructure. This is where most programmes fail completely. Tests run, results come in, and the learning disappears into a spreadsheet that nobody reads six months later. A CRO programme should be building institutional knowledge about your audience: what they respond to, what creates friction, what language resonates, what proof points move them. That knowledge compounds. Test results that don’t feed into a shared understanding of the customer are just numbers.

Statistical Significance Is Not the Same as Commercial Significance

This point gets made occasionally in CRO circles, but not often enough, and not forcefully enough.

A test can reach 95% statistical significance and still be commercially irrelevant. If your variant produces a 0.2% lift in conversion rate on a page that generates 200 transactions a month, you’ve proven something statistically real and commercially meaningless. The resource spent designing, running, and analysing that test almost certainly cost more than the revenue it will ever generate.
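The arithmetic is worth doing explicitly before a test goes into the queue. Here is a back-of-envelope sketch; the average order value and the reading of the 0.2% figure as a relative uplift are assumptions for illustration:

```python
# Back-of-envelope check on whether a statistically significant lift is
# commercially significant. All figures below are illustrative assumptions.
baseline_transactions_per_month = 200
relative_lift = 0.002          # the 0.2% lift, read as a relative uplift
average_order_value = 60.0     # assumed AOV in pounds

extra_transactions = baseline_transactions_per_month * relative_lift
extra_revenue_per_year = extra_transactions * average_order_value * 12

print(f"Extra transactions per month: {extra_transactions:.1f}")
print(f"Extra revenue per year: £{extra_revenue_per_year:.0f}")
# Roughly 0.4 extra transactions a month and under £300 a year -- almost
# certainly less than the cost of designing, running and analysing the test.
```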

When I was judging the Effie Awards, one of the things that struck me consistently was how rarely entries connected their optimisation work to actual business outcomes. Conversion rate improvements were cited as wins in isolation, without reference to revenue, margin, or customer lifetime value. A rising conversion rate on a low-margin product with high returns isn’t necessarily a good thing. It depends entirely on what you’re converting people into.

The right question after any CRO test is not “did the conversion rate go up?” It’s “what happened to revenue per visitor, and is that directionally consistent with our commercial goals?” Those are different questions, and they sometimes have different answers.
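A small hypothetical shows how the two questions can pull apart, for instance when a discount-led variant lifts conversions but drags down order value (all figures invented for illustration):

```python
# Conversion rate and revenue per visitor can move in opposite directions.
# All figures are hypothetical, purely to illustrate the distinction.
control = {"visitors": 10_000, "orders": 300, "revenue": 24_000.0}
variant = {"visitors": 10_000, "orders": 345, "revenue": 22_500.0}  # e.g. a discount-led variant

for name, arm in (("control", control), ("variant", variant)):
    cr = arm["orders"] / arm["visitors"]
    rpv = arm["revenue"] / arm["visitors"]
    print(f"{name}: conversion rate {cr:.2%}, revenue per visitor £{rpv:.2f}")
# The variant wins on conversion rate (3.45% vs 3.00%) but loses on revenue
# per visitor (£2.25 vs £2.40) -- different questions, different answers.
```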

For ecommerce specifically, this distinction matters enormously. Optimising for add-to-cart rate is not the same as optimising for completed purchase. Optimising for completed purchase is not the same as optimising for repeat purchase. Mailchimp’s ecommerce CRO guide covers some of this nuance, particularly around the difference between acquisition conversion and retention conversion, which are genuinely different problems requiring different approaches.

Personalisation: Useful Tool, Oversold Promise

Personalisation has been positioned as the logical endpoint of CRO for the better part of a decade. Serve each visitor the version of the page most likely to convert them, and conversion rates will follow. The logic is sound. The execution is frequently not.

The problem is that effective personalisation requires three things that most organisations don’t have simultaneously: clean, reliable audience data; a content and creative operation that can produce meaningful variants at scale; and a testing infrastructure that can validate whether personalisation is actually driving the attributed improvement or just correlating with it.

What I see more often is personalisation that amounts to showing a returning visitor the same page with their name in the header, or geo-targeting that swaps the hero image based on location. These aren’t bad things. They’re just not the significant conversion lever they’re sold as. Real personalisation, the kind that meaningfully changes what a visitor sees based on where they are in the decision process and what objections they’re likely to have, requires a level of audience understanding that most organisations haven’t done the work to develop.

The same caution applies to AI-driven optimisation tools. They can accelerate good CRO work. They can automate the mechanics of multivariate testing. They can surface patterns in behaviour that would take a human analyst weeks to find. But they cannot create a good offer, they cannot fix a trust deficit, and they cannot compensate for a fundamental mismatch between what you’re selling and what your audience wants to buy. The Moz CRO playbook covers the strategic layer that technology tools sit underneath, and it’s worth reading before investing in any optimisation platform.

The Resourcing Question Nobody Asks Honestly

CRO is resource-intensive in ways that don’t always show up in the budget conversation. The tooling cost is visible. The analyst time, the developer time for implementation, the design resource for variants, and the time to reach statistical significance on low-traffic pages are less visible and frequently underestimated.

I’ve seen organisations invest in enterprise-level testing platforms and then run four tests a year because they don’t have the development resource to implement variants without going through a six-week sprint cycle. The platform sits largely idle. The conversion rate doesn’t move. And the conclusion drawn is that CRO doesn’t work, rather than that the programme was under-resourced from the start.

If you’re thinking about building an in-house CRO capability or hiring for it, the Unbounce guide on hiring for CRO is one of the more honest pieces of writing on what the function actually requires. The short version: a good CRO practitioner is part analyst, part psychologist, part project manager, and part commercial strategist. That combination is genuinely rare, and paying for it properly is not optional if you want the programme to deliver.

The alternative, outsourcing to an agency, has its own trade-offs. Agencies bring testing velocity and cross-client pattern recognition. They don’t bring deep knowledge of your specific audience, your commercial constraints, or your internal data. The CRO agency relationships I’ve seen work best are the ones where the client brings the audience insight and commercial context, and the agency brings the testing rigour and technical capability. When the agency is expected to provide both, the output tends to be generic.

Building a CRO Programme That Compounds

The organisations that get sustained value from conversion optimisation are the ones that treat it as a long-term commercial function rather than a project with a start and end date. That means a few specific things in practice.

First, the programme needs a clear commercial mandate. Not “improve conversion rate” but “increase revenue per visitor by X% over 12 months.” That framing changes what you test, how you prioritise, and how you measure success. It also makes the programme much easier to defend in a budget conversation, because it connects directly to a number that the business cares about.

Second, the learning needs to be documented and shared. Every test, win or loss, should generate a learning that’s accessible to the broader team. Over time, this builds a picture of your audience that no analytics platform can give you, because it’s grounded in actual experiments rather than passive observation.

Third, the programme needs to be honest about what it can and cannot change. Page-level optimisation can improve the performance of good traffic on a good offer. It cannot fix bad traffic, a weak offer, or a product that doesn’t meet market needs. Knowing where the ceiling is, and being willing to say so, is a sign of a mature programme rather than a failing one.

There’s more on the commercial framing of CRO, including how to connect programme outputs to business outcomes, across the articles in the CRO and Testing hub. If you’re building or rebuilding a programme, that’s a reasonable place to work through the full picture.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is conversion optimisation and why does it matter?
Conversion optimisation is the process of systematically improving the percentage of visitors who complete a commercially meaningful action, such as a purchase, sign-up, or enquiry. It matters because it compounds the return on every pound spent on traffic acquisition. Improving conversion rate by even a fraction can have a larger commercial impact than increasing ad spend, without increasing cost.
How many A/B tests should a CRO programme run per month?
There is no universally correct number. Testing velocity matters far less than testing quality. A programme running two well-diagnosed, high-confidence tests per month will consistently outperform one running twenty shallow tests. The right number depends on your traffic volume, your development resource, and how long tests need to run to reach statistical significance. Prioritise depth of hypothesis over speed of execution.
What is a good conversion rate to aim for?
Conversion rate benchmarks vary enormously by industry, traffic source, device type, and the nature of the conversion action. A lead generation form on a high-intent search landing page will convert at a very different rate from a direct-to-consumer ecommerce checkout. Rather than chasing an industry average, focus on improving your own baseline over time and measuring success in terms of revenue per visitor rather than conversion rate in isolation.
Should conversion optimisation be handled in-house or by an agency?
Both models can work, and both have genuine trade-offs. In-house teams bring deep audience knowledge, commercial context, and continuity. Agencies bring testing infrastructure, cross-client pattern recognition, and specialist technical capability. The most effective arrangements tend to combine both: the client owns the commercial strategy and audience insight, the agency provides the testing rigour and execution. Outsourcing the strategic thinking as well as the execution rarely produces strong results.
What is the difference between statistical significance and commercial significance in CRO?
Statistical significance tells you that a test result is unlikely to be due to chance. Commercial significance tells you whether the result is large enough to matter to the business. A test can reach 95% statistical confidence and still represent a conversion lift so small that the revenue impact is negligible. Always evaluate test results in terms of their projected revenue or commercial impact, not just whether the result clears a statistical threshold.