Conversion Rate Optimisation Services: What You’re Buying
Conversion rate optimisation services cover the structured process of identifying why visitors to your site or landing page don’t take the action you want, then testing and implementing changes that increase the proportion who do. Done well, it compounds the value of every pound you spend on traffic. Done poorly, it produces a library of A/B tests that nobody acts on and a dashboard full of numbers that don’t connect to revenue.
What you’re buying when you engage a CRO service, whether in-house or agency, is not a testing tool and not a report. You’re buying a repeatable system for turning behavioural data into commercial decisions. The distinction matters more than most briefs acknowledge.
Key Takeaways
- CRO services range from one-off audits to fully managed testing programmes. The right engagement model depends on your traffic volume, internal capability, and commercial ambition, not on what agencies prefer to sell.
- The value of CRO compounds over time. A 12% improvement in checkout conversion doesn’t just lift this month’s revenue. It changes the unit economics of every paid channel you run.
- Most CRO engagements underdeliver because the hypothesis quality is poor. Agencies that run lots of tests but can’t explain the reasoning behind each one are burning your budget on noise.
- Traffic volume is a hard constraint. Meaningful A/B test results require statistical significance. Sites with fewer than 10,000 monthly sessions need a different approach to CRO than high-volume e-commerce operations.
- The best CRO programmes are built on qualitative data as much as quantitative. Session recordings, user interviews, and on-site surveys often reveal more than heatmaps alone.
In This Article
- What Do CRO Services Actually Include?
- How Do You Evaluate a CRO Service Provider?
- What Should a CRO Service Cost?
- Which Parts of the Funnel Should a CRO Service Focus On?
- How Does Traffic Volume Affect What CRO Services Can Deliver?
- What Does a Mature CRO Programme Look Like in Practice?
- How Do CRO Services Interact With Paid Media?
- When Does It Make Sense to Bring CRO In-House?
- What Should You Expect in the First 90 Days of a CRO Engagement?
What Do CRO Services Actually Include?
There is no standard definition of what a CRO service delivers. That ambiguity is one of the first problems you’ll encounter when evaluating providers. Some agencies lead with technology, selling you access to a testing platform with light analytical support wrapped around it. Others lead with strategy and research, treating the testing platform as a commodity. The difference in commercial outcome between these two approaches is significant.
A properly scoped CRO engagement typically covers five distinct areas:
1. Diagnostic: auditing your analytics setup, identifying tracking gaps, reviewing your existing funnel data, and establishing a baseline conversion rate by channel, device, and audience segment.
2. Qualitative research: session recordings, heatmaps, on-page surveys, and in some cases user interviews or usability testing.
3. Hypothesis development: taking the data from the first two phases and building a prioritised backlog of test ideas, each with a clear rationale and an expected commercial impact.
4. Test design and execution: writing copy variants, briefing design changes, configuring the test correctly in your chosen platform, and running it to statistical significance.
5. Analysis and iteration: interpreting results, documenting learnings, and feeding insights back into the hypothesis backlog.
If a service you’re evaluating doesn’t cover all five, you’re buying a partial solution. That’s not necessarily wrong, but you should know what you’re getting. An audit without a testing programme is a document. A testing programme without proper research is guesswork at scale.
For a broader view of how CRO fits into a performance programme, the CRO and Testing hub covers the full landscape, from audit methodology to testing frameworks to commercial measurement.
How Do You Evaluate a CRO Service Provider?
I’ve been on both sides of this conversation. When I was running an agency, we were regularly evaluated by clients who had been burned before. When I was on the client side, I was the one asking the difficult questions. The tells are usually the same regardless of which side of the table you’re on.
The first question worth asking any CRO provider is: “Show me a test that failed and what you learned from it.” Agencies that only show winning tests in their case studies are either cherry-picking or running so few tests that they’ve never had a meaningful failure. Neither is a good sign. A mature testing programme has a failure rate. That’s not a problem; it’s proof that the hypotheses are ambitious enough to be worth testing.
The second question: “How do you prioritise your test backlog?” If the answer is vague, be cautious. Experienced CRO teams use a scoring framework, something like PIE (Potential, Importance, Ease) or ICE (Impact, Confidence, Ease), to rank hypotheses by expected commercial value against implementation cost. The specific framework matters less than the fact that there is one. Arbitrary test sequencing is a reliable indicator of a programme that’s being run for activity rather than outcomes.
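To make the scoring idea concrete, here is a minimal sketch of ICE scoring applied to a test backlog. The hypothesis names and ratings are invented for illustration; real programmes would score against their own funnel data.

```python
# A minimal ICE-scoring sketch. Hypotheses and ratings are illustrative,
# not taken from any real programme.

def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Average of three 1-10 ratings; higher means test sooner."""
    return (impact + confidence + ease) / 3

# (hypothesis, impact, confidence, ease)
backlog = [
    ("Shorten demo request form to 4 fields", 8, 7, 9),
    ("Rewrite homepage hero copy",            6, 5, 8),
    ("Redesign checkout progress indicator",  7, 6, 3),
]

ranked = sorted(backlog, key=lambda h: ice_score(*h[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{ice_score(i, c, e):.1f}  {name}")
```

The specific weighting is less important than the discipline: every hypothesis gets the same three questions asked of it before it consumes testing capacity.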
Third: “What’s your minimum traffic threshold for running a valid test?” Any provider who doesn’t immediately raise the question of statistical significance when discussing test design is not someone you want running your programme. Underpowered tests produce false positives. False positives lead to implementing changes that don’t actually improve conversion, and sometimes actively harm it. Unbounce’s guide to hiring CRO expertise covers this and several other evaluation criteria worth working through before you sign anything.
Finally, ask about their qualitative research process. The agencies that consistently produce the best results are the ones that invest heavily in understanding why users behave the way they do before they start designing tests. Quantitative data tells you where the problem is. Qualitative data tells you what the problem actually is. Hotjar’s breakdown of conversion funnel optimisation is a useful reference for the kinds of qualitative methods a serious programme should include.
What Should a CRO Service Cost?
Pricing in CRO is all over the place, and the variation doesn’t always correlate with quality. I’ve seen agencies charge £3,000 a month for what is essentially a monthly report and one A/B test. I’ve also seen boutique specialists charge £15,000 a month and deliver programmes that materially changed a client’s unit economics within two quarters. The price tells you almost nothing on its own.
What you should be pricing is the scope of the engagement, not the headline monthly number. A useful framework: work out how many hours of genuine analytical and strategic work you need per month, what seniority those people need to be, and what that costs at market rates. Then compare that to what you’re being quoted. If the numbers don’t reconcile, ask where the gap is. You’ll either find that the agency has found an efficiency you haven’t accounted for, or you’ll find that the delivery team is more junior than the sales team suggested.
Broadly, there are three engagement models in the market. Retainer-based programmes, where the agency runs your testing programme on an ongoing basis, typically range from £4,000 to £20,000 per month depending on scope and volume. Project-based engagements, usually an audit plus a defined number of tests, range from £8,000 to £40,000 for the project. Consultancy-led engagements, where an experienced practitioner builds your internal capability and oversees the programme without running it day-to-day, vary enormously but are often the most cost-effective option for businesses with existing analytical resource.
The question of whether to build in-house or buy externally is worth thinking through carefully. Unbounce’s piece on demonstrating CRO value is useful context here, particularly if you’re trying to build an internal business case for investment.
Which Parts of the Funnel Should a CRO Service Focus On?
This is a question I always pushed clients to answer before we scoped any engagement. The instinctive answer is usually “the checkout” or “the contact form”, because that’s where the conversion event happens. But that’s often not where the biggest opportunity sits.
Consider a typical e-commerce funnel. If your product page to add-to-cart rate is 4% and your cart to checkout completion rate is 60%, the maths on improving the product page is more compelling than improving the checkout, even though the checkout is closer to the conversion event. A 2 percentage point improvement in product page conversion (from 4% to 6%) is a 50% lift in the number of people entering your cart. A 5 percentage point improvement in checkout completion (from 60% to 65%) is a much smaller absolute gain. Crazy Egg’s e-commerce conversion funnel guide works through this logic in detail and is worth reading if you’re in retail or DTC.
For B2B businesses, the funnel analysis is different but the principle is the same. The question is not “where is the conversion event?” but “where is the biggest drop-off, and what is the commercial value of reducing it?” Sometimes that’s the homepage. Sometimes it’s a specific landing page receiving paid traffic. Sometimes it’s a form that’s too long or asks for information the user doesn’t trust you with yet.
A good CRO service will map your full funnel before recommending where to focus. If a provider jumps straight to “we’ll optimise your landing pages” without first understanding your funnel structure and where volume is being lost, that’s a red flag. They’re selling a solution before they understand the problem. Crazy Egg’s broader guide to website conversion funnels covers funnel mapping methodology if you want a framework to work through before your first agency conversation.
How Does Traffic Volume Affect What CRO Services Can Deliver?
This is the constraint that most CRO conversations skip over, and it causes a lot of disappointment. A/B testing requires statistical significance to produce reliable results. Statistical significance requires volume. If your site receives 5,000 sessions a month, you cannot run a valid A/B test on a page that 20% of those visitors see. The maths simply doesn’t work within any reasonable timeframe.
The rough rule of thumb is that you need a minimum of around 1,000 conversions per variant to get a reliable result. For high-conversion pages like email sign-up forms, that threshold is reachable with moderate traffic. For low-conversion pages like demo request forms, you may need tens of thousands of sessions to get a clean result.
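For a more precise answer than any rule of thumb, the standard two-proportion z-test sample-size formula can be computed directly. This is a sketch under assumed defaults (95% confidence, 80% power, normal approximation); a real programme would pick these parameters deliberately.

```python
import math

def sessions_per_variant(p1: float, p2: float,
                         z_alpha: float = 1.96,  # 95% confidence, two-sided
                         z_beta: float = 0.84    # 80% power
                         ) -> int:
    """Approximate sessions needed per variant for a two-proportion z-test
    to distinguish baseline rate p1 from target rate p2."""
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# Detecting a lift from a 4% to a 5% conversion rate (a 25% relative lift):
print(sessions_per_variant(0.04, 0.05))  # roughly 6,700 sessions per variant
```

The formula also makes the low-traffic problem vivid: halve the baseline conversion rate while keeping the same relative lift, and the required sample size roughly doubles.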
This doesn’t mean CRO is unavailable to lower-traffic businesses. It means the approach needs to change. For sites with limited traffic, the emphasis should shift toward qualitative research, expert UX review, and iterative design changes based on best practice rather than controlled experiments. These approaches don’t produce the statistical certainty of a well-powered A/B test, but they’re far more valuable than running underpowered tests and drawing false conclusions from them.
When I was building out our performance offering at iProspect, we had clients at very different traffic scales. The discipline we applied to both was the same: understand the user, form a clear hypothesis, implement a change, measure what happened. The difference was the confidence level we could assign to the result. Being honest about that distinction with clients was, I think, one of the reasons they trusted us. It’s easier to oversell certainty than to explain the limits of what the data can tell you.
What Does a Mature CRO Programme Look Like in Practice?
A mature CRO programme doesn’t look like a burst of activity followed by a report. It looks like a quarterly rhythm of research, hypothesis development, testing, and learning, with a backlog that’s always being refreshed and a set of documented insights that inform not just the testing programme but broader marketing decisions.
The teams running the best programmes I’ve seen share a few characteristics. They treat every test result, win or loss, as a learning. They maintain a hypothesis library that documents not just what was tested but why, what was expected, and what actually happened. They share findings across the business, so that insights from the CRO programme inform product decisions, customer service scripts, and paid media creative. And they measure the programme’s commercial impact at the revenue level, not just the conversion rate level.
That last point is worth expanding. Conversion rate is a useful operational metric, but it’s not the business outcome. A 15% improvement in conversion rate on a page that drives low-value leads is less valuable than a 5% improvement on a page that drives high-value ones. The CRO programme needs to be connected to revenue and margin data to be properly evaluated. If your CRO service provider is reporting purely on conversion rate and not on revenue impact, push them to go further.
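A small worked example makes the revenue-weighting point concrete. All figures here are assumptions for illustration, not data from the article.

```python
# Illustrative only: session counts, conversion rates, and lead values
# are assumptions, not figures from the article.

def incremental_value(sessions, base_rate, relative_lift, value_per_lead):
    """Extra monthly revenue from a relative lift in conversion rate."""
    return sessions * base_rate * relative_lift * value_per_lead

low_value_page  = incremental_value(10_000, 0.02, 0.15, 50)   # 15% lift, £50 leads
high_value_page = incremental_value(10_000, 0.02, 0.05, 400)  # 5% lift, £400 leads

print(low_value_page, high_value_page)  # the smaller conversion lift wins on revenue
```

On these assumed numbers, the 15% lift on the low-value page adds £1,500 a month while the 5% lift on the high-value page adds £4,000: exactly why conversion rate alone is the wrong reporting metric.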
Reducing bounce rate is often part of this picture too. Traffic that arrives and immediately leaves is traffic that never enters your funnel. Mailchimp’s guide to reducing bounce rate is a straightforward reference for the levers available at the top of the funnel, and it’s worth working through alongside any landing page or homepage optimisation work.
How Do CRO Services Interact With Paid Media?
The relationship between CRO and paid media is one of the most underexploited opportunities in performance marketing. Most businesses manage their paid media and their CRO programme in separate workstreams, sometimes with separate agencies. The result is that paid media optimises for click-through rate and CPC while CRO optimises for on-site conversion, and neither team has full visibility of the combined effect.
The more productive model is to treat CRO and paid media as a single system. The landing page is not a downstream consequence of the ad. It’s the second half of the same user experience. If your ad promises a specific outcome and your landing page doesn’t immediately deliver on that promise, you’ll pay for the click and lose the conversion. Message match between ad creative and landing page is one of the highest-leverage optimisations available, and it requires the paid media team and the CRO team to be working from the same brief.
I’ve managed hundreds of millions in ad spend across multiple industries, and the pattern holds consistently: the businesses that treat CRO as an integrated part of their paid media programme consistently outperform those that treat it as a separate workstream. The unit economics are simply better when every pound of traffic spend is being converted at the highest possible rate.
Click-through rate is part of this equation too. Understanding the relationship between your ad CTR and your post-click conversion rate helps you prioritise where to optimise first. Semrush’s breakdown of click rate versus click-through rate is a useful primer if your team conflates the two metrics, which happens more often than it should.
When Does It Make Sense to Bring CRO In-House?
The in-house versus agency question in CRO follows the same logic as in most marketing disciplines. If CRO is a core, ongoing function of your business, and you have the traffic volume to sustain a continuous testing programme, building internal capability makes sense. If CRO is periodic or project-based, or if you don’t yet have the analytical infrastructure to support it, an external provider will typically be more cost-effective and faster to deploy.
The hybrid model is often the most practical. An external agency or consultant runs the programme and provides the specialist expertise. An internal analyst or digital manager owns the relationship, manages the data infrastructure, and ensures the programme stays connected to business priorities. This structure keeps costs manageable while ensuring the programme has genuine internal ownership.
When I was scaling the agency, we actively encouraged clients to build internal capability alongside our work, not because we wanted to reduce our own revenue, but because programmes with strong internal ownership consistently outperformed those where the client was entirely dependent on us. Internal ownership means faster decision-making, better context, and a team that’s genuinely invested in the outcome rather than the deliverable.
There’s more on how to structure a CRO programme for sustainable commercial impact across the CRO and Testing hub, including frameworks for audit design, testing methodology, and commercial measurement.
What Should You Expect in the First 90 Days of a CRO Engagement?
The first 90 days of any CRO engagement should be heavily weighted toward research and diagnosis. If your provider is running live tests in week three, something is wrong. You cannot form good hypotheses without understanding your users, your funnel, and your data quality. Rushing to testing is the single most common mistake in CRO engagements, and it produces a lot of activity with very little signal.
A reasonable 90-day plan looks something like this. In the first month: analytics audit, tracking validation, funnel mapping, and qualitative research setup (session recordings, on-page surveys, and where budget allows, user interviews). In the second month: synthesis of research findings, hypothesis development, prioritisation of the test backlog, and design of the first round of tests. In the third month: first tests live, with enough time to accumulate meaningful data before the 90-day review.
By the end of 90 days, you should have a clear picture of where your funnel is losing value, a prioritised backlog of hypotheses with commercial rationale behind each one, and at least one test that has reached statistical significance. You probably won’t have transformed your conversion rate yet. Anyone who promises you will in that timeframe is overselling. What you should have is a programme that’s properly grounded in data and positioned to compound over the following 12 months.
The Effie Awards process taught me something relevant here. The campaigns that win on effectiveness are almost never the ones that did something dramatic in a short window. They’re the ones that built a systematic understanding of their audience and compounded that understanding over time. CRO is no different. The businesses that treat it as a sprint get sprint results. The ones that treat it as a discipline get compounding returns.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
