Conversion Rate Optimisation Services: How to Choose One That Delivers
Conversion rate optimisation services help businesses systematically increase the percentage of visitors who take a desired action, whether that is making a purchase, submitting a form, or requesting a quote. Done well, CRO is one of the highest-return investments a business can make because it extracts more value from traffic you are already paying for. Done poorly, it produces a portfolio of inconclusive tests and a lot of activity that never touches the bottom line.
The difference between those two outcomes usually comes down to how the service is structured, what it actually measures, and whether the people running it are accountable to commercial results or just to process.
Key Takeaways
- Most CRO services are sold as a methodology, but the ones that deliver are accountable to revenue outcomes, not test volume.
- Choosing the right provider requires understanding whether they diagnose before they prescribe, not whether they have the most impressive tech stack.
- The scope of a CRO engagement matters as much as its quality: a service that only touches landing pages while ignoring checkout or email flows will leave most of the value on the table.
- Pricing models for CRO services signal a lot about incentive alignment. Retainers tied to activity, not results, tend to drift toward comfortable mediocrity.
- The best CRO providers treat your analytics as a starting point for questions, not a source of answers. That distinction changes everything about how they work.
In This Article
- What Do Conversion Rate Optimisation Services Actually Include?
- How Should You Evaluate a CRO Provider Before Signing?
- What Pricing Models Are Common, and Which Signal Better Incentive Alignment?
- What Scope Should a CRO Engagement Cover?
- How Do You Know If a CRO Service Is Actually Working?
- What Role Does Qualitative Research Play in a CRO Service?
- What Are the Common Ways CRO Providers Underdeliver?
- How Should You Structure the Relationship Between CRO and Paid Media?
- What Does a Good CRO Service Brief Look Like?
What Do Conversion Rate Optimisation Services Actually Include?
The term gets used loosely. Some agencies use it to mean A/B testing landing pages. Others use it to mean a full programme covering user research, funnel analysis, copy testing, UX improvements, and post-purchase optimisation. These are not the same thing, and the gap between them in commercial impact is enormous.
A properly scoped CRO service should cover at minimum: conversion auditing across the full funnel, quantitative analysis of where users drop off, qualitative research to understand why, hypothesis development, test design and execution, statistical analysis of results, and implementation of winning variants. Each of those steps requires a different skill set, and most agencies are stronger in some areas than others.
When I was running an agency, one of the first things I learned to look for when evaluating any service provider was whether they could describe their process in plain English without reaching for jargon. The agencies that genuinely understood conversion optimisation could explain exactly what they would do in week one, what they would measure, and what decision they would make based on the results. The ones who could not do that were usually selling a methodology rather than a capability.
If you want a broader view of what conversion optimisation involves as a discipline, the CRO and Testing hub on The Marketing Juice covers the full landscape, from testing frameworks to funnel strategy and beyond.
How Should You Evaluate a CRO Provider Before Signing?
The pitch process for CRO services is notoriously easy to game. Providers show you a deck full of uplift percentages, case studies with impressive numbers, and a proprietary process with a name that sounds like it was designed by a branding agency. None of that tells you whether they will actually improve your conversion rate.
The questions worth asking are more specific. Ask them to walk you through a test that did not produce a winner and what they did next. Ask how they decide which pages or flows to prioritise when resources are limited. Ask what happens when test results are statistically ambiguous. Ask who specifically will be working on your account and what their background is.
The answers to those questions tell you far more than a case study. A provider who has run a serious CRO programme will have a considered answer to all of them. A provider who is essentially selling project management with some testing tools bolted on will struggle with the first one.
It is also worth asking about their relationship with your analytics data. Sound CRO practice treats data as a set of hypotheses to investigate, not a set of conclusions to act on. If a provider’s first move is to open your Google Analytics and start telling you what to fix, without any qualitative research, without talking to your customers, without understanding the context behind the numbers, that is a signal worth paying attention to.
What Pricing Models Are Common, and Which Signal Better Incentive Alignment?
CRO services are typically priced in one of three ways: a fixed monthly retainer, a project fee for a defined scope of work, or a performance-based arrangement tied to measurable outcomes. Each has trade-offs.
Monthly retainers are the most common model and the most prone to drift. Without clear deliverables and defined success metrics, a retainer can easily become a standing order for activity rather than results. The agency runs tests, produces reports, holds weekly calls, and the relationship continues regardless of whether the conversion rate is moving. I have seen this pattern play out across multiple client relationships over the years, and it almost always comes down to the same root cause: the retainer was scoped around inputs rather than outputs.
Project-based engagements work well for specific, bounded problems: a checkout redesign, a landing page testing programme for a product launch, a post-purchase flow audit. They are less suited to ongoing optimisation because CRO is iterative by nature and the most valuable insights often emerge from the second or third testing cycle, not the first.
Performance-based pricing sounds attractive in theory. In practice, it creates measurement disputes and attribution arguments that can poison a client relationship quickly. Conversion rates are affected by factors outside any agency’s control: seasonality, media spend changes, product updates, competitor activity. Tying fees directly to conversion rate movement puts both parties in an uncomfortable position when those external factors shift.
The model that tends to work best is a retainer with clearly defined deliverables, agreed test velocity, and quarterly reviews against commercial outcomes. That structure keeps the agency accountable without creating perverse incentives around attribution.
What Scope Should a CRO Engagement Cover?
One of the most common mistakes businesses make when commissioning CRO services is scoping the engagement too narrowly. They hire someone to optimise landing pages and then wonder why the overall conversion rate is not moving, because the landing page was not actually the problem.
A useful way to think about scope is to map it against the full conversion funnel, from the first point of contact through to the post-purchase experience. Most businesses have conversion problems at multiple points in that funnel, and fixing one without addressing the others produces marginal gains at best.
For an ecommerce business, a properly scoped CRO programme would typically cover: paid and organic landing pages, category and product pages, the cart and checkout flow, email sequences triggered by abandonment or browsing behaviour, and the post-purchase experience including upsell and retention mechanics. Each of those touchpoints has its own conversion dynamics, and the cumulative effect of improving all of them is substantially larger than the sum of individual page-level improvements.
For a B2B business with a longer sales cycle, the scope looks different. The relevant conversion points might include content downloads, demo requests, email nurture sequences, and sales enablement assets. The testing methodology is also different because traffic volumes are lower and statistical significance takes longer to achieve.
When I was growing the agency from around 20 people to close to 100, one of the things that distinguished our best client relationships from the average ones was that we always started with a funnel audit before proposing any specific work. Not because it was good process theatre, but because it consistently revealed that the problem the client thought they had was rarely the actual problem. The business that thought it had a landing page problem often had a checkout problem. The business that thought it had a traffic quality problem often had a product page problem. Starting with the audit meant we were solving the right thing.
How Do You Know If a CRO Service Is Actually Working?
This is where a lot of CRO engagements fall apart. The agency produces a steady stream of test results, some winners, some inconclusive, some losses. The reports look thorough. But the overall conversion rate, measured at the business level rather than the page level, is not moving in any meaningful way.
There are a few reasons this happens. One is that the tests being run are too small to matter even if they win. Optimising button colour or headline phrasing on a page that accounts for 3% of your traffic will not shift the needle regardless of the result. Another is that winning variants are not being implemented properly or at all, because the CRO team and the development team are not aligned. A third is that the testing programme is optimising for micro-conversions that do not correlate with the macro-conversion that actually drives revenue.
The right way to measure a CRO engagement is against a small number of commercial metrics agreed at the outset: overall site conversion rate, revenue per visitor, and, where relevant, average order value or lead quality. Those metrics should be tracked over a rolling period long enough to smooth out seasonal variation, typically 90 days minimum.
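Those headline metrics reduce to a few lines of arithmetic, and it is worth pinning down the definitions before the engagement starts so both sides report the same numbers. This is a minimal sketch; the 90-day totals fed into it are hypothetical, not benchmarks.

```python
def commercial_metrics(visitors: int, orders: int, revenue: float) -> dict:
    """Compute the headline CRO metrics for one reporting window.

    All three inputs are totals for the same rolling period (e.g. 90 days),
    measured at the business level rather than the page level.
    """
    conversion_rate = orders / visitors          # site-wide, not page-level
    revenue_per_visitor = revenue / visitors     # blends rate and order value
    average_order_value = revenue / orders
    return {
        "conversion_rate": conversion_rate,
        "revenue_per_visitor": revenue_per_visitor,
        "average_order_value": average_order_value,
    }

# Illustrative 90-day totals (hypothetical figures)
metrics = commercial_metrics(visitors=120_000, orders=3_000, revenue=240_000.0)
```

Revenue per visitor is the most useful of the three for spotting trade-offs: a test can lift conversion rate while quietly eroding order value, and this metric catches that where conversion rate alone does not.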
It is also worth being clear about what good looks like before the engagement starts. The right approach to CRO involves setting realistic expectations about the pace of improvement, not promising dramatic uplifts in the first quarter. Genuine, durable conversion rate improvement is usually a 12 to 18 month programme, not a 90-day sprint. Providers who promise otherwise are either working with businesses that have unusually large optimisation opportunities or they are setting you up for disappointment.
What Role Does Qualitative Research Play in a CRO Service?
Most CRO services lead with quantitative data because it is easier to present and easier to act on. Heatmaps, session recordings, funnel drop-off rates, click maps: these are all useful, but they tell you where people are leaving, not why. The why is where the real insight lives, and getting to it requires qualitative research.
Qualitative methods in CRO include user interviews, on-site surveys, customer feedback analysis, and usability testing. None of these are glamorous. User interviews in particular are time-consuming and require skill to run well. But they consistently produce hypotheses that quantitative data alone would never surface.
I judged the Effie Awards for a period, and one of the things that stood out across the entries that demonstrated genuine commercial effectiveness was that the best campaigns were almost always built on a sharp insight about customer behaviour that came from somewhere other than the analytics dashboard. The analytics told the team something was not working. The qualitative research told them why. That distinction matters as much in CRO as it does in brand strategy.
When evaluating a CRO provider, ask specifically how qualitative research is built into their process and who conducts it. If the answer is that they use heatmaps and session recordings and call that qualitative research, that is a meaningful gap in their methodology. Heatmaps are quantitative tools with a visual output. They are not a substitute for talking to customers.
The Moz CRO playbook covers this well, outlining how qualitative and quantitative methods work together rather than in isolation. It is a useful reference point if you are building out a brief for a CRO engagement and want to make sure the scope includes both.
What Are the Common Ways CRO Providers Underdeliver?
Having worked with and evaluated a large number of specialist providers across my career, I have found the failure modes in CRO services to be fairly consistent. They tend to cluster around a handful of patterns.
The first is testing too many things simultaneously without enough traffic to reach significance on any of them. This produces a lot of inconclusive results and creates the illusion of activity without generating usable insight. A disciplined CRO programme prioritises ruthlessly and accepts that fewer, better-designed tests produce more value than a high volume of underpowered ones.
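A quick way to sanity-check whether a proposed test is powered at all is a standard two-proportion sample-size calculation. The sketch below uses only the Python standard library; the baseline rate and uplift are hypothetical inputs chosen for illustration, not recommendations.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_uplift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect the uplift in a two-sided test."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# A 2.5% baseline and a 10% relative uplift need tens of thousands of
# visitors per variant -- far more than many "high-velocity" programmes allow
n = sample_size_per_variant(baseline=0.025, relative_uplift=0.10)
```

Running this calculation before launch makes the trade-off explicit: splitting limited traffic across five simultaneous tests can mean none of them will ever reach significance.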
The second is optimising for the wrong metric. This is more common than it should be. A provider who is optimising for click-through rate on a product page might increase clicks to the cart while decreasing the quality of those clicks, resulting in higher abandonment downstream. Optimising any single metric in isolation without understanding its relationship to the metrics that follow it is a recipe for local maxima that do not translate to commercial improvement.
The third is a failure to segment. Aggregate conversion rates are averages, and averages hide the variation that matters. A business with an overall conversion rate of 2.5% might have a mobile conversion rate of 1.1% and a desktop conversion rate of 4.2%. The intervention needed for each is completely different, and a CRO programme that treats them the same will underperform on both. Understanding how different user segments move through the funnel is foundational to building a testing programme that actually moves the numbers.
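The arithmetic behind that example shows exactly how an aggregate rate hides the split. The session counts here are hypothetical, chosen only so the blend matches the 2.5% average above.

```python
# Hypothetical traffic mix: (sessions, conversions) per device segment
segments = {
    "mobile":  (55_000, 605),    # 1.1% conversion rate
    "desktop": (45_000, 1_890),  # 4.2% conversion rate
}

total_sessions = sum(s for s, _ in segments.values())
total_conversions = sum(c for _, c in segments.values())
overall_rate = total_conversions / total_sessions  # blended average, ~2.5%

per_segment = {name: c / s for name, (s, c) in segments.items()}
# The aggregate tells you nothing about which segment needs work;
# the per-segment rates point straight at mobile.
```

The same decomposition applies to any dimension that matters: channel, new versus returning visitors, geography. The point is that the intervention follows from the segment-level number, never the blended one.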
The fourth, and perhaps the most commercially damaging, is a failure to implement winners. A test produces a clear result. The variant is agreed. And then it sits in a backlog for three months while the development team works through other priorities. This is not a CRO problem, it is an organisational problem, but a good CRO provider will surface it early and help you build the internal alignment needed to act on test results at pace.
How Should You Structure the Relationship Between CRO and Paid Media?
CRO and paid media are more tightly connected than most businesses treat them. The traffic that paid media sends to your site is the raw material that CRO works with. The quality of that traffic, its intent, its expectations, its prior exposure to your brand, shapes what is possible on the conversion side. And the improvements that CRO produces directly affect the efficiency of your paid media spend, because a higher conversion rate means a lower cost per acquisition from the same budget.
Despite this, most businesses manage CRO and paid media as separate workstreams with separate agencies and separate reporting lines. The result is that each team optimises in isolation. The paid media team improves click-through rate without knowing that the landing page experience is creating a drop-off. The CRO team improves the landing page without knowing that the paid media targeting has shifted and the audience arriving is different from the one the page was designed for.
The practical fix is straightforward: a shared briefing process, shared reporting on the metrics that sit at the intersection of both disciplines, and a regular cadence where both teams review performance together. It does not require structural changes or a single agency managing both. It requires communication and a shared definition of success.
Understanding how click rate and click-through rate interact with downstream conversion metrics is part of building that shared picture. A strong CTR that collapses at the landing page is a message alignment problem, not a conversion problem. Treating it as a conversion problem wastes resources and misses the actual fix.
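One crude way to make that distinction operational is to compare each stage's rate against an agreed floor before assigning ownership of the problem. This is a sketch only; the threshold values are placeholders to be set per account, not benchmarks.

```python
def diagnose_funnel(impressions: int, clicks: int, conversions: int,
                    ctr_floor: float = 0.01, cvr_floor: float = 0.02) -> str:
    """Label the most likely problem stage from ad impression to conversion."""
    ctr = clicks / impressions
    cvr = conversions / clicks if clicks else 0.0
    if ctr >= ctr_floor and cvr < cvr_floor:
        # Strong CTR collapsing on the page: message alignment, not CRO
        return "message alignment: ad promise and landing page diverge"
    if ctr < ctr_floor:
        return "upstream: creative or targeting is not earning the click"
    return "funnel healthy at these thresholds"

# Hypothetical numbers: a healthy 3% CTR collapsing to 0.8% on the page
verdict = diagnose_funnel(impressions=500_000, clicks=15_000, conversions=120)
```

A shared check like this gives both teams the same starting point in a joint review, instead of each defending its own stage of the funnel.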
What Does a Good CRO Service Brief Look Like?
Most briefs for CRO services are too thin. They describe the business at a high level, list the pages or flows they want to improve, and ask for a proposal. What they do not include is the information a good provider needs to design a programme that will actually work.
A strong brief for a CRO engagement should include: current conversion rates by channel, device, and audience segment; the commercial value of a one percentage point improvement in conversion rate; the testing tools already in place and the development capacity available to implement changes; any previous testing that has been done and what it found; and the internal stakeholders who will need to approve changes before they go live.
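The "commercial value of a one percentage point improvement" line in that brief is simple arithmetic, but it is worth writing down explicitly because it anchors every prioritisation conversation that follows. The inputs below are hypothetical.

```python
def value_of_one_point(monthly_visitors: int, average_order_value: float,
                       point_change: float = 0.01) -> float:
    """Extra monthly revenue from lifting conversion rate by `point_change`,
    expressed as an absolute change (0.01 = one percentage point)."""
    return monthly_visitors * point_change * average_order_value

# Hypothetical: 100k monthly visitors at an £80 average order value
uplift = value_of_one_point(100_000, 80.0)
```

Having that number in the brief lets a provider size the programme against the upside: an engagement costing more per quarter than a realistic uplift is worth should prompt a different conversation about scope.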
That last point matters more than most businesses realise. I have seen CRO programmes stall not because the testing was poor but because there was no clear decision-making authority for implementing results. Every winning variant went through a three-week approval process involving four departments, by which point the seasonal context had changed and the result was no longer relevant. Building the internal process for acting on CRO results is as important as the testing programme itself.
A detailed brief also helps you evaluate proposals more accurately. When every provider is responding to the same specific information, the quality differences between their approaches become visible in a way that generic pitches do not reveal.
The broader principles of what makes CRO work as a discipline, not just as a service, are covered in depth across the conversion optimisation section of The Marketing Juice. If you are building a brief or evaluating a provider, it is worth reading through the full picture before you start.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
