CRO Audit: What to Measure Before You Start Testing

A CRO audit is a structured review of your website or funnel that identifies where and why visitors are not converting, before any testing begins. It combines quantitative data, behavioural analysis, and qualitative research to produce a prioritised list of conversion problems worth solving.

Done properly, it tells you what to test and in what order. Done badly, or skipped entirely, it turns your testing programme into an expensive lottery.

Key Takeaways

  • A CRO audit is diagnostic work, not optimisation work. The testing comes after, not during.
  • Most conversion problems are visible in your existing data. The audit is the process of reading that data honestly.
  • Traffic quality and source mix distort conversion rates more than most site-side issues. Audit both together.
  • Behavioural data like heatmaps and session recordings reveal friction that analytics alone cannot explain.
  • The output of a CRO audit should be a prioritised hypothesis list, not a redesign brief.

The problem with most CRO programmes is not a lack of testing. It is a lack of diagnosis before testing starts. I have seen teams run fifty A/B tests in a year and move the needle on almost nothing, because they were testing solutions to problems they had never properly defined. The audit is where you define them.

What Does a CRO Audit Actually Cover?

A CRO audit is not a single spreadsheet or a checklist you run through in an afternoon. It is a layered investigation across three distinct areas: your analytics data, your on-site behavioural signals, and the voice of your customer. Each layer answers a different question. Analytics tells you where conversion is breaking down. Behavioural data tells you how visitors are actually moving through your site. Customer research tells you why they are or are not converting.

Most audits I have reviewed focus heavily on the first layer and skip the other two almost entirely. That produces a list of underperforming pages, which is useful, but not a list of conversion problems with enough diagnostic depth to act on confidently.

If you want to understand the broader discipline this sits within, the conversion optimisation hub covers the full scope of CRO, from testing methodology to funnel analysis to commercial measurement.

How Do You Structure the Analytics Layer of a CRO Audit?

Start with your funnel. Not the funnel you think you have, but the funnel your data is actually showing you. Map every step a visitor takes from first landing to conversion, and measure the drop-off at each stage. What you are looking for is not just where volume is lost, but where it is lost disproportionately relative to what you would expect.
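To make the step-by-step drop-off measurement concrete, here is a minimal Python sketch. The stage names and counts are hypothetical, not a template for your funnel:

```python
# Hypothetical funnel counts, ordered from first landing to conversion
funnel = [
    ("landing", 50_000),
    ("product_view", 22_000),
    ("add_to_cart", 4_400),
    ("checkout_start", 3_100),
    ("purchase", 1_050),
]

# Measure the relative drop-off between each consecutive pair of stages
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.1%} drop-off")
```

The stage with the largest relative drop is not automatically the problem; the point is to compare each drop against what you would expect for that kind of step.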

Segment that funnel data before you draw any conclusions. Conversion rates aggregated across all traffic sources are almost always misleading. Paid social traffic converts differently from organic search. Mobile converts differently from desktop. New visitors convert differently from returning ones. When you blend all of that together and read a single conversion rate, you are averaging out the signal. The problems, and the opportunities, are in the segments.
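The averaging effect is easy to demonstrate. A minimal sketch with hypothetical segment figures; the source and device splits are illustrative assumptions, not a recommended taxonomy:

```python
# Hypothetical (visits, conversions) per traffic segment
segments = {
    ("organic", "desktop"):     (4_000, 180),
    ("organic", "mobile"):      (6_000, 150),
    ("paid_social", "desktop"): (2_000, 70),
    ("paid_social", "mobile"):  (8_000, 80),
}

total_visits = sum(v for v, _ in segments.values())
total_conversions = sum(c for _, c in segments.values())

# The single blended rate hides a 4.5x spread between segments
print(f"blended: {total_conversions / total_visits:.2%}")
for segment, (visits, conversions) in segments.items():
    print(f"{segment}: {conversions / visits:.2%}")
```

With these numbers the blended rate reads 2.40%, while the underlying segments range from 1.00% to 4.50%. Reading only the blended figure, you would miss both the weakest segment and the strongest one.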

Traffic quality is one of the most underexamined variables in a CRO audit. If a significant portion of your paid traffic is low-intent or poorly targeted, no amount of on-site optimisation will fix your conversion rate. Unbounce has written well about how poor traffic quality undermines CRO efforts, and it is a point worth taking seriously before you spend weeks optimising landing pages for an audience that was never going to convert.

Beyond funnel drop-off, look at page-level performance. Time on page, scroll depth, and exit rate by page type give you a rough sense of where engagement is collapsing. These are not precise diagnostics on their own, but they point you toward the pages worth investigating further in the behavioural layer.

What Does the Behavioural Layer of a CRO Audit Reveal?

Analytics tells you that visitors are leaving a page. Behavioural data starts to tell you what they were doing before they left. This is where tools like heatmaps and session recordings earn their place in the audit process.

Heatmaps showing scroll, move, and click behaviour are particularly useful for identifying friction on high-traffic pages. A click heatmap might show you that visitors are clicking on an image or a piece of text that is not a link, suggesting they expected it to be interactive. A scroll map might show you that the majority of visitors are not reaching your primary call to action, because it sits below a section of content that is killing momentum.

Session recordings are slower to analyse but often more revealing. Watching real users move through a checkout flow or a lead generation form shows you hesitation patterns, form abandonment behaviour, and navigation confusion that no metric can capture directly. I have reviewed session recordings for clients who were convinced their checkout was clean, only to find a consistent pattern of users pausing at a specific field, backing out, and leaving. The field label was ambiguous. A two-word copy change fixed a problem that had been invisible in the analytics for months.

Bounce rate analysis belongs in this layer too, though it requires careful interpretation. A high bounce rate on a blog post is often irrelevant. A high bounce rate on a product page or a paid landing page is a signal worth investigating. Moz’s explanation of bounce rate is a useful reference for understanding what the metric does and does not tell you before you build conclusions around it.

How Does Customer Research Fit Into a CRO Audit?

This is the layer most teams skip, and it is often the one that produces the most actionable insight. Quantitative data tells you that something is broken. Customer research tells you why.

Exit surveys, on-site polls, and post-purchase surveys are the most practical tools here. A single well-placed exit survey asking visitors why they did not complete a purchase will surface objections, confusion points, and missing information that you would never find by looking at analytics alone. The answers are often uncomfortable, which is probably why teams avoid asking the question.

Customer interviews and sales team debriefs are worth including if the conversion you are optimising involves any degree of considered decision-making. The objections that come up repeatedly in sales conversations are almost always the same objections that are killing conversion on your website. The difference is that your sales team handles them in real time, and your website does not.

Review mining is another technique that tends to get overlooked. Reading through your own reviews and your competitors’ reviews, particularly the negative ones, surfaces the language customers use to describe their problems and the standards they are measuring you against. That language belongs in your copy, your headlines, and your objection-handling content.

What Are the Most Common Findings in a CRO Audit?

After running and reviewing audits across dozens of businesses in different sectors, certain patterns appear with enough regularity to be worth naming. Not because they are universal, but because they are common enough that if you have not looked for them, you probably should.

The first is a mismatch between ad messaging and landing page content. A visitor clicks an ad that promises a specific benefit, arrives on a page that talks about something adjacent, and leaves because the connection was not made clearly enough. This is a straightforward problem to identify by auditing your top paid traffic landing pages against the ads driving traffic to them. The relationship between traffic intent and on-page relevance matters more than most teams acknowledge when they are building campaigns.

The second is form friction. Forms that ask for more information than the conversion step requires, forms with unclear field labels, forms that do not explain why certain information is needed. These are fixable problems that rarely require a test to validate. They require someone to look at the form from the perspective of a first-time visitor with no context and no existing trust in the brand.

The third is trust signal gaps. Particularly on ecommerce sites, visitors making a first purchase are running a rapid, largely unconscious risk assessment. They are looking for signals that the business is legitimate, that the transaction is secure, and that they can get their money back if something goes wrong. Hotjar’s work on Shopify conversion optimisation covers this well in the ecommerce context, and the principles apply broadly. If your audit finds that visitors are spending time near the checkout but not completing, missing or weak trust signals are one of the first things to investigate.

The fourth, and probably the most structurally important, is that the conversion problem is not on the page people think it is. Teams spend months optimising checkout pages when the real problem is the product page. They redesign the homepage when the issue is in the email sequence driving traffic to it. The audit is supposed to establish where the problem actually lives, not confirm the assumption the team arrived with.

How Should You Prioritise the Output of a CRO Audit?

The output of a CRO audit should be a prioritised list of hypotheses, not a list of things to fix. The distinction matters. A fix implies certainty. A hypothesis implies that you have a well-reasoned belief about what is causing a problem and what change might address it, and that you are going to test it.

Prioritisation frameworks vary, but the most useful ones weigh three things: the potential impact of the change if the hypothesis is correct, the confidence you have in the hypothesis based on the evidence behind it, and the effort required to test it. A high-impact, high-confidence, low-effort test should go to the top of the list. A low-impact, low-confidence, high-effort test belongs at the bottom, or off the list entirely.
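One common way to formalise that weighing is an ICE-style score: impact multiplied by confidence, divided by effort. A minimal sketch; the backlog entries and the 1-10 scales are hypothetical:

```python
# Hypothetical hypothesis backlog, each scored 1-10 on three dimensions
backlog = [
    {"hypothesis": "clarify ambiguous checkout field label", "impact": 6, "confidence": 8, "effort": 1},
    {"hypothesis": "add trust signals near checkout CTA",    "impact": 8, "confidence": 7, "effort": 2},
    {"hypothesis": "redesign homepage hero",                 "impact": 5, "confidence": 3, "effort": 9},
]

def ice_score(item):
    # Higher impact and confidence push a test up; higher effort pulls it down
    return item["impact"] * item["confidence"] / item["effort"]

for item in sorted(backlog, key=ice_score, reverse=True):
    print(f"{ice_score(item):5.1f}  {item['hypothesis']}")
```

The exact formula matters less than the discipline: the low-impact, low-confidence, high-effort item falls to the bottom of the list automatically, which is where it belongs.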

One thing I have learned from running programmes where the commercial stakes were high is that teams tend to over-index on effort as a reason to defer tests. The question is not whether a test is easy to build. The question is whether solving the problem it addresses would materially affect revenue. If the answer is yes, effort is a logistics problem, not a reason to deprioritise.

When I was building out the performance practice at a previous agency, we introduced a simple rule: every hypothesis on the test backlog had to have a commercial value estimate attached to it. Not a precise number, but a rough order of magnitude. What would a 10% improvement in conversion on this page be worth in revenue terms? That single discipline changed how the team thought about prioritisation, and it changed how clients engaged with the programme. It moved the conversation from “what are we testing this month” to “what is the most valuable problem we can solve next.”
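The rough order-of-magnitude estimate is simple arithmetic. A sketch with hypothetical page economics; the session volume, conversion rate, and order value are invented for illustration:

```python
# Hypothetical economics for a single page
monthly_sessions = 40_000
conversion_rate = 0.022        # current rate on this page
average_order_value = 85.0     # revenue per conversion

def monthly_uplift_value(relative_uplift):
    # Extra conversions from a relative improvement, priced at average order value
    extra_conversions = monthly_sessions * conversion_rate * relative_uplift
    return extra_conversions * average_order_value

# What would a 10% relative improvement in conversion be worth?
print(f"~{monthly_uplift_value(0.10):,.0f} per month")
```

With these figures the answer is roughly 7,480 in monthly revenue. A number that rough is still enough to rank the hypothesis against everything else on the backlog.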

What Technical Elements Should a CRO Audit Include?

Technical issues are often treated as a separate workstream from CRO, but they belong in the audit because they directly affect conversion. Page speed is the most obvious example. A landing page that takes four seconds to load on mobile is a conversion problem before it is a technical problem.

The technical audit should cover page load performance across device types, mobile usability, form functionality and error handling, and tracking accuracy. That last one is more important than most people realise. If your conversion tracking is misconfigured, your analytics data is wrong, and every decision you make on the basis of that data is built on a flawed foundation.

I walked into a client engagement once where the team was convinced their checkout conversion rate was around 2%, which they accepted as an industry norm. When we audited the tracking, we found that a significant number of confirmed purchases were not being recorded as conversions because of a tag firing error. The actual rate was higher, which changed both the diagnosis and the priority list. Measurement integrity is not a footnote in a CRO audit. It is a prerequisite.
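The distortion is easy to illustrate. A sketch with hypothetical figures; the counts are invented, the mechanism is the point:

```python
# Hypothetical figures: analytics vs the order system as source of truth
checkout_sessions = 10_000
tracked_purchases = 200      # what the misfiring tag recorded
actual_purchases = 260       # confirmed orders in the back office

reported_rate = tracked_purchases / checkout_sessions
actual_rate = actual_purchases / checkout_sessions
share_missed = 1 - tracked_purchases / actual_purchases

print(f"reported: {reported_rate:.1%}  actual: {actual_rate:.1%}")
print(f"share of purchases missing from tracking: {share_missed:.0%}")
```

Reconciling tracked conversions against the order system is a one-afternoon check that validates, or invalidates, every other number in the audit.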

The core principles of conversion rate optimisation outlined by Search Engine Land make a similar point about measurement as a foundation, not an afterthought. It is a principle that has held up for years because it reflects how the work actually operates.

How Often Should You Run a CRO Audit?

A full CRO audit is not a quarterly exercise. It is a significant piece of diagnostic work that typically takes two to four weeks to do properly, depending on the complexity of the site and the volume of traffic available for analysis. Running it too frequently is a sign that the first one did not produce a clear enough action list to keep the programme moving.

A more practical model is a full audit at the start of a programme, followed by lighter-touch reviews at regular intervals, perhaps every six months, to assess whether the original hypotheses have been addressed, what new data has emerged, and whether the priority list needs updating. Major site changes, new traffic sources, or significant shifts in conversion rate should all trigger a targeted review even if a scheduled audit is not due.

For ecommerce businesses, the principles of ecommerce conversion optimisation from Mailchimp are worth reviewing as a reference point for what a recurring audit process should be monitoring. The fundamentals are consistent across platforms, even if the implementation details vary.

The audit is not the programme. It informs the programme. The distinction matters because teams sometimes confuse the diagnostic work with the optimisation work and end up in a cycle of auditing without ever committing to a sustained testing cadence. The audit tells you what to test. The programme is where the value is actually created.

What Makes a CRO Audit Worth the Investment?

This is the question that does not get asked often enough, probably because the people running the audit have a vested interest in the answer. But it is a legitimate commercial question, and the honest answer is: it depends on what you do with it.

An audit that produces a 40-page report that gets reviewed once and filed is not worth the investment. An audit that produces a prioritised hypothesis list that drives a testing programme for the next twelve months, and that testing programme improves conversion by a measurable amount on pages with meaningful traffic volume, is worth considerably more than it cost.

The audit creates value only when it changes behaviour. That means it needs to produce recommendations that are specific enough to act on, prioritised clearly enough that the team knows where to start, and framed commercially enough that stakeholders understand why they should care.

I have sat in enough board meetings to know that “we improved the UX of the checkout page” does not land the same way as “we identified a friction point in the checkout flow that was costing approximately X in monthly revenue, and we have a test scheduled to address it.” The second version is what a good audit makes possible. It connects the diagnostic work to the commercial outcome, which is the only reason the audit should exist.

If you are building or reviewing a conversion optimisation programme and want a broader framework for thinking about the discipline, the conversion optimisation section of The Marketing Juice covers the full range of topics, from audit methodology to testing frameworks to measurement.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is a CRO audit?
A CRO audit is a structured diagnostic review of a website or funnel that identifies where and why visitors are not converting. It typically combines analytics data, behavioural signals such as heatmaps and session recordings, and qualitative customer research to produce a prioritised list of conversion problems worth testing.
How long does a CRO audit take?
A thorough CRO audit typically takes two to four weeks, depending on the size and complexity of the site, the volume of traffic available for analysis, and how many data sources are being reviewed. Lighter-touch audits focused on a specific funnel stage or page type can be completed more quickly, but tend to produce narrower findings.
What tools are used in a CRO audit?
The most common tools include a web analytics platform for funnel and page-level data, a behavioural analytics tool for heatmaps and session recordings, and a survey or polling tool for on-site customer research. Tracking audit tools are also useful for verifying that conversion events are being recorded accurately before drawing conclusions from the data.
What is the difference between a CRO audit and A/B testing?
A CRO audit is diagnostic work that identifies conversion problems and produces hypotheses about what is causing them. A/B testing is the method used to validate those hypotheses by running controlled experiments. The audit should come first and inform which tests are worth running. Running tests without an audit often means testing solutions to problems that have not been properly defined.
How often should you run a CRO audit?
A full CRO audit is typically conducted at the start of an optimisation programme, with lighter reviews every six months or so to update the hypothesis backlog. Targeted reviews should also be triggered by significant site changes, new traffic sources, or unexplained shifts in conversion rate. Running a full audit too frequently usually means the previous one did not produce a clear enough action list.
