CRO Blog: What to Read, Test, and Think About

A CRO blog should do one thing well: help you make better decisions about your conversion programme. Not sell you a methodology, not celebrate testing for its own sake, and not pretend that every A/B test is a breakthrough. The best conversion content is commercially grounded, sceptical where scepticism is warranted, and honest about what the data actually tells you.

This is a running resource for marketers who want to think more clearly about conversion optimisation, not just do more of it.

Key Takeaways

  • Most CRO content teaches you what to test, not how to decide what’s worth testing in the first place.
  • A testing programme without a clear commercial hypothesis is just organised activity, not optimisation.
  • Behavioural data tools like heatmaps and session recordings are diagnostic, not prescriptive. They show you what happened, not why.
  • The highest-value CRO work happens before a test is designed, in the research and prioritisation phase.
  • CRO is a thinking discipline first. The tools and frameworks are only as good as the judgment applied to them.

Why Most CRO Content Misses the Point

I’ve spent a long time around performance marketing, and one thing I’ve noticed is that CRO content tends to cluster around tactics. Button colours. Headline formulas. Form field counts. Social proof placement. These things matter at the margin, but they’re rarely what separates a conversion programme that drives commercial growth from one that generates interesting test results and not much else.

The harder questions, the ones most CRO blogs avoid, are about thinking. What are you actually trying to optimise for? How do you know your test is measuring the right thing? What happens when your winning variant lifts conversions but tanks retention? These are business questions dressed in CRO clothing, and most content in this space isn’t equipped to answer them.

When I was running agencies, I watched teams run hundreds of tests across client accounts. The ones that generated real commercial value had something in common: they started with a sharp hypothesis rooted in actual user behaviour, not a list of “best practices” someone had found in a case study. The ones that generated noise started with the test itself, not the question the test was meant to answer.

If you want to go deeper on the fundamentals before continuing, the conversion optimisation hub covers the full landscape, from audit frameworks to testing strategy to commercial measurement.

What Good CRO Reading Actually Looks Like

There’s no shortage of CRO content online. The problem is the signal-to-noise ratio. A lot of what gets published is case study theatre: a company ran a test, the test won, so you should do the same. The conclusion is almost always “test this thing.” What’s missing is the reasoning chain that got them to that test in the first place, and the honest account of what happened three months later.

Good CRO reading does a few things. It explains the diagnostic process, not just the intervention. It’s honest about statistical limitations and sample sizes. It distinguishes between a test that won in the short term and a change that improved the business. And it’s clear about context: what worked for a high-volume ecommerce site may be completely irrelevant to a B2B SaaS product with a 60-day sales cycle.

Unbounce has published some genuinely useful material on what conversion optimisation actually involves in practice, including the uncomfortable truth that most teams are under-resourced for the testing cadence they claim to run. That’s worth reading, not because it tells you what to test, but because it’s honest about the operational reality of CRO work.

Moz has also done useful work on common CRO misconceptions, particularly around what conversion rate actually measures and why optimising for it in isolation can mislead you. If you haven’t watched that, it’s a good calibration exercise.

The Diagnostic Layer Most Teams Skip

Before you run a test, you need to understand what’s actually happening on your site. Not what you think is happening. Not what the aggregate conversion rate suggests. What users are doing, where they’re stopping, and what the gap is between their intent and your page’s ability to serve it.

This is where behavioural data tools earn their place. Heatmaps, scroll maps, and session recordings are not a substitute for quantitative analysis, but they add a layer of texture that click-through rates and bounce rates can’t give you. They show you friction in a way that numbers alone don’t.

Hotjar’s documentation on scroll, move, and click heatmaps is a reasonable primer on how to interpret this kind of data. The important thing to remember is that these tools are diagnostic, not prescriptive. A heatmap showing that users aren’t scrolling past the fold doesn’t tell you why. It tells you where to start asking questions.

I’ve seen teams get very excited about heatmap data and then immediately jump to a redesign. The scroll map showed users weren’t reaching the CTA, so the CTA moved above the fold. Test ran. Conversion rate went up slightly. Everyone was pleased. Nobody asked whether the users who were reaching the original CTA were actually higher-intent, and whether moving it up was attracting lower-quality conversions. That question matters a great deal if you’re paying for traffic.

Bounce rate is another metric that gets misread constantly. Moz’s breakdown of what bounce rate actually means is worth bookmarking, because it’s one of those metrics that looks simple and isn’t. A high bounce rate on a blog post might mean the content answered the question perfectly. A high bounce rate on a product page is a different problem entirely. Context is everything.
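
To make the context point concrete, here’s a toy sketch in Python. The session data is entirely invented; the point is that the same calculation reads very differently once you segment by the kind of page people landed on:

```python
# A toy illustration of why bounce rate needs context: the same metric,
# segmented by landing page type. All data is invented for illustration.
sessions = [
    {"landing_page_type": "blog", "pages_viewed": 1},
    {"landing_page_type": "blog", "pages_viewed": 1},
    {"landing_page_type": "blog", "pages_viewed": 3},
    {"landing_page_type": "product", "pages_viewed": 1},
    {"landing_page_type": "product", "pages_viewed": 4},
    {"landing_page_type": "product", "pages_viewed": 2},
]

def bounce_rate(rows):
    """Share of sessions that viewed a single page and left."""
    return sum(1 for r in rows if r["pages_viewed"] == 1) / len(rows)

for page_type in ("blog", "product"):
    subset = [r for r in sessions if r["landing_page_type"] == page_type]
    print(page_type, f"{bounce_rate(subset):.0%}")
# blog 67% may be fine (the post answered the question);
# product 33% may still be a problem worth diagnosing.
```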

Landing Pages: Where CRO Gets Practical

Landing pages are the sharpest test of CRO thinking because they have one job. There’s no navigation to hide behind, no content strategy to blame, no UX complexity to point at. A landing page either converts or it doesn’t, and the reasons are usually findable if you’re willing to look honestly.

Hotjar’s overview of landing page conversion optimisation covers the core diagnostic questions well: message match, load speed, form friction, trust signals, and value proposition clarity. These aren’t new ideas, but the discipline of checking them systematically, rather than assuming they’re fine, is where most teams fall short.

Message match is the one I see fail most often. An ad makes a specific promise. The landing page delivers a generic product pitch. The user arrived with a specific intent, the page addressed a broader one, and the conversion didn’t happen. This isn’t a design problem. It’s a thinking problem. The team that built the page and the team that wrote the ad weren’t talking to each other, or they were, and nobody was asking whether the user experience was coherent end-to-end.

Copyblogger published an interesting piece years ago on landing page entries judged by live multivariate testing, which is useful not because the specific results are transferable (they’re not) but because it illustrates how differently expert opinion and actual user behaviour can diverge. What experts think will win and what users actually respond to are often not the same thing. That’s the whole argument for testing over assumption.

Ecommerce CRO: Volume Changes the Calculus

Ecommerce is where CRO gets the most attention, partly because the feedback loops are faster and partly because the commercial impact of conversion rate improvements is more immediately visible. A 0.5% lift in conversion rate on a site doing significant volume is real money. That makes it easier to justify investment, and easier to measure whether the investment is working.
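
To put a number on “real money”, here’s a back-of-envelope sketch, reading the lift as an absolute half-point and using entirely hypothetical traffic and order-value figures:

```python
# Back-of-envelope revenue impact of a half-point (absolute) lift in
# conversion rate. All figures are hypothetical.
monthly_sessions = 500_000
baseline_cr = 0.020       # 2.0% conversion rate
lifted_cr = 0.025         # 2.5% after the lift
avg_order_value = 60.00   # £

extra_orders = monthly_sessions * (lifted_cr - baseline_cr)
extra_revenue = extra_orders * avg_order_value
print(f"{extra_orders:,.0f} extra orders, £{extra_revenue:,.0f} extra per month")
# -> 2,500 extra orders, £150,000 extra per month
```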

Mailchimp’s resource on ecommerce conversion rate optimisation is a reasonable starting point for teams new to this space. It covers the standard levers: product page clarity, checkout friction, cart abandonment, and trust signals. None of it is surprising, but the discipline of addressing each systematically is where most ecommerce teams underinvest.

The more interesting ecommerce CRO challenge is the one that doesn’t get written about as much: what happens when you optimise for conversion rate but not for customer quality? I’ve worked with ecommerce clients who ran aggressive promotional tests, drove conversion rates up meaningfully, and then watched average order value and repeat purchase rates decline. The test won. The business didn’t. That’s a CRO programme that’s optimising for the wrong thing, and it’s more common than anyone in the industry likes to admit.
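
A minimal sketch of that failure mode, with invented figures: expected revenue per visitor, once you count order value and repeat purchases, can fall even while conversion rate rises.

```python
# A sketch of a conversion "win" that loses money once customer quality
# is counted. All figures are invented for illustration.
def revenue_per_visitor(cr: float, aov: float, repeat_orders: float) -> float:
    """Expected revenue per visitor: first order plus average repeats."""
    return cr * aov * (1 + repeat_orders)

control = revenue_per_visitor(cr=0.020, aov=60.00, repeat_orders=1.2)
variant = revenue_per_visitor(cr=0.026, aov=48.00, repeat_orders=0.6)  # promo-heavy

print(f"control £{control:.2f} vs variant £{variant:.2f} per visitor")
# control £2.64 vs variant £2.00 — a 30% conversion lift, and less revenue.
```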

Volume also changes your testing options. A high-traffic ecommerce site can run a clean A/B test to statistical significance in days. A lower-volume B2B site might need weeks or months for the same test, and by then the market context may have shifted. This is why testing cadence needs to be calibrated to your actual traffic, not to the cadence you’d like to run. Ambition is fine. Underpowered tests are not.
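
If you want to sanity-check your own cadence, the standard two-proportion sample-size formula gives a rough answer. Below is a minimal sketch using Python’s standard library; the traffic figures are hypothetical:

```python
# A minimal sketch of an A/B test power check using the standard
# two-proportion sample-size formula. Traffic figures are hypothetical.
from statistics import NormalDist

def visitors_per_variant(baseline_cr: float, absolute_lift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in each arm to detect the lift at the given
    significance level and statistical power."""
    p1, p2 = baseline_cr, baseline_cr + absolute_lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_beta = NormalDist().inv_cdf(power)            # ~0.84
    n = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / absolute_lift ** 2
    return int(n) + 1

n = visitors_per_variant(baseline_cr=0.02, absolute_lift=0.005)
print(f"{n:,} visitors per variant")                      # ~13,800
print(f"{2 * n / 5_000:.0f} days at 5,000 visitors/day")  # ~6 days
print(f"{2 * n / 500:.0f} days at 500 visitors/day")      # ~55 days
```

The same test that a high-traffic site clears in under a week takes a lower-volume site the better part of two months, which is the whole argument for calibrating cadence to traffic.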

How to Think About CRO Hiring

One of the more practical questions in CRO is who should be doing it. In-house specialist, agency, freelancer, or a hybrid? The answer depends on your volume, your testing maturity, and whether you have the internal capability to act on recommendations quickly.

Unbounce has published a useful piece on how to hire for CRO, drawing on input from practitioners across the industry. The consensus is roughly what you’d expect: look for analytical rigour, hypothesis-driven thinking, and someone who understands that a test result is the beginning of an insight, not the end of one.

My view, based on running agencies and building performance teams, is that CRO is one of the disciplines where thinking quality matters more than tool familiarity. The tools are learnable. The ability to form a sharp hypothesis, design a clean test, and interpret results without confirmation bias is harder to develop and much harder to hire for. When I was building out performance teams, I’d always rather have someone who could think clearly about a problem and was new to the tools than someone who knew every feature of every platform but defaulted to best-practice checklists.

The agency model for CRO has its own complications. Agencies are incentivised to run tests, not necessarily to ask whether testing is the right intervention at this stage of your programme. If your site has fundamental structural problems, no amount of A/B testing will fix them. You need someone, internal or external, who is willing to say that, even when it’s not what the client wants to hear.

The Thinking Behind the Testing

Process is useful. I’ve built enough processes across enough agencies to know that structure matters, especially when you’re managing multiple clients, multiple programmes, and teams at different levels of seniority. A good testing process means tests get documented, results get recorded, and learnings accumulate rather than evaporating when someone leaves the team.

But process should never replace thinking. I’ve seen CRO programmes that were beautifully structured and commercially useless. They had prioritisation frameworks, hypothesis templates, testing calendars, and results dashboards. They also had no clear answer to the question: what are we actually trying to learn, and how will this test help us learn it?

The best CRO practitioners I’ve worked with treat each test as a question, not an intervention. They’re not trying to win a test. They’re trying to understand something about their users that they didn’t understand before. When a test loses, that’s information. When a test wins but the team can’t explain why, that’s a problem, because unexplained wins don’t generalise and they don’t compound.

This is the discipline that separates a CRO programme from a testing programme. Testing is a tactic. CRO, done properly, is a systematic effort to understand what creates value for your users and your business, and to use that understanding to make better decisions over time. The testing is just the mechanism for generating evidence.

There’s more on building that kind of programme, and on the commercial frameworks that make CRO work worth doing, across the conversion optimisation section of The Marketing Juice. If you’re building or rebuilding a CRO capability, that’s a reasonable place to orient yourself.

What This Blog Covers and Why

The CRO content on The Marketing Juice is written for marketers who are past the introductory stage. You know what A/B testing is. You’ve probably run some tests. You may be managing a programme or evaluating whether to invest in one. What you’re looking for is sharper thinking, not another explainer on statistical significance.

The topics covered here reflect the questions I’ve seen come up repeatedly across client conversations, agency work, and the kind of honest post-mortems that happen when a programme isn’t delivering what it promised. Why do so many CRO programmes underdeliver? How do you connect CRO work to commercial outcomes rather than just conversion metrics? What does a genuinely useful audit look like? How do you build a testing programme that generates insight rather than just results?

These aren’t abstract questions. They’re the ones that determine whether a CRO investment pays back or becomes an expensive exercise in organised activity. I’ve been on both sides of that equation, and the difference is almost always in the quality of thinking at the start, not the quality of the tools used to execute.

If there’s a topic you’d like covered that isn’t here yet, the test for inclusion is simple: if it’s a question a serious marketer would ask, and if the honest answer requires more than a checklist, it belongs here.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What should a CRO blog cover to be genuinely useful?
A useful CRO blog goes beyond tactic lists and addresses the thinking behind testing: how to form a hypothesis, how to prioritise what to test, how to interpret results honestly, and how to connect conversion work to commercial outcomes. Tactical content has its place, but the most valuable material helps you make better decisions, not just run more tests.
How do you know if your CRO programme is generating real value or just activity?
The clearest signal is whether your testing programme is producing learnings that inform future decisions, not just winning variants that get implemented. If your team can explain why a test won and apply that understanding elsewhere, the programme is generating insight. If tests win or lose without a clear explanation, you’re accumulating results without building knowledge.
What is the difference between a testing programme and a CRO programme?
A testing programme runs experiments. A CRO programme uses experiments to systematically understand what creates value for users and the business, and applies that understanding to make better decisions over time. The distinction matters because a testing programme can be busy and commercially useless at the same time. CRO, done properly, is a thinking discipline that uses testing as one of several tools.
How should ecommerce teams approach conversion optimisation differently from B2B teams?
Ecommerce teams typically have the traffic volume to run clean tests quickly and can measure impact directly in revenue terms. The risk is optimising for conversion rate without accounting for customer quality, average order value, or repeat purchase behaviour. B2B teams face the opposite challenge: lower traffic volumes mean longer test cycles and a greater reliance on qualitative research to inform hypotheses before testing begins.
What behavioural data tools are most useful for CRO diagnostics?
Heatmaps, scroll maps, and session recordings are the most commonly used behavioural tools in CRO. They’re useful for identifying friction points and understanding where users stop engaging, but they’re diagnostic rather than prescriptive. They show you what happened on a page, not why it happened. The most effective use of these tools is as a starting point for forming hypotheses, not as a direct input to design decisions.
