The LockPickingLawyer Phrase That Every Analyst Should Know

The LockPickingLawyer is a YouTube channel where a man opens locks. That is the entire premise. And in almost every video, at some point, he says the same thing: “Nothing on one, nothing on two, slight click on three, nothing on four.” He is narrating what he is feeling, in real time, so that viewers understand what the data means, not just what it is. That phrase, repeated across hundreds of videos, is one of the clearest models I have seen for how analysts should communicate findings to marketing teams.

Most marketing analytics fails not because the data is wrong, but because the narration is missing. Numbers get reported. Dashboards get built. Metrics get presented. But nobody explains what is clicking on three and what is producing nothing on four, and so the people who need to make decisions are left guessing.

Key Takeaways

  • Raw metrics without narration are not insights. They are unread signals waiting for someone to interpret them.
  • The LockPickingLawyer model works because it combines real-time observation with plain-language explanation. That is exactly what good analytics communication looks like.
  • Most marketing dashboards report what happened. Fewer explain why. Almost none suggest what to do differently.
  • GA4 and similar tools surface data points. Turning those data points into decisions requires a human layer that most teams underinvest in.
  • The analyst who can narrate findings clearly is worth more to a business than the analyst who can build the most complex attribution model nobody reads.

What Does a Lock Picker Have to Do With Marketing Analytics?

Bear with me here, because this is not a metaphor for its own sake.

When I was running an agency and we were pitching a new client, one of the first things I would do was pull their GA data, or whatever analytics platform they were on, and look at what was actually being used. Not what was being tracked. Not what was in the dashboard. What was being acted on. Nine times out of ten, the answer was: almost nothing. There were reports. There were exports. There were monthly decks with charts. But when I asked the marketing director what they had changed in the last quarter based on something they saw in the data, the room went quiet.

The data was there. The narration was not.

The LockPickingLawyer does something deceptively simple. He tells you what each piece of feedback means as he receives it. “Nothing on one” means that pin is not set, not binding, not useful right now. “Slight click on three” means something is happening there, something worth attention. He does not present you with a spreadsheet of pin positions at the end. He walks you through the process in a way that makes the data legible in real time.

That is the gap in most marketing analytics setups. The data is collected. The events are tracked. The conversions are logged. But nobody is standing next to the decision-maker saying “nothing on paid social this week, slight click on organic search, strong signal on email to lapsed customers.” If you are interested in how analytics fits into a broader performance marketing picture, the Marketing Analytics hub covers that territory in depth.

Why Most Dashboards Are Noise Dressed Up as Signal

I have seen dashboards with 47 metrics on them. I have built some of them myself, in earlier parts of my career, when I thought comprehensiveness was a virtue. It is not. Comprehensiveness is a way of avoiding the hard question, which is: what does this actually mean for what we do next?

There is a version of analytics theatre that is extremely common in mid-size businesses and agencies alike. It goes like this. Someone sets up GA4, or a paid tool, and connects it to a Looker Studio or a Tableau dashboard. The dashboard gets shared with the client or the leadership team. Everyone nods. The dashboard becomes a fixture in the monthly reporting meeting. And then, slowly, it becomes wallpaper. People stop reading it carefully. They scan for the numbers that are obviously bad and sigh, or they look for the numbers that are obviously good and feel briefly reassured. The middle, which is where most of the actual insight lives, gets ignored.

This is not a technology problem. GA4 is a capable tool. The issue is that technology surfaces data. It does not interpret it. And interpretation requires someone who understands both the data and the business context well enough to say: this number matters, this one does not, and here is why.

When I was at iProspect and we were scaling the team from around 20 people to closer to 100, one of the things I pushed hard on was not just hiring people who could pull reports, but hiring people who could explain what the reports meant in plain English to a client who had a business to run. That is a rarer skill than it sounds. Most analysts are trained to be thorough. Fewer are trained to be clear.

The Three Layers of Analytics Communication Most Teams Skip

If you take the LockPickingLawyer model seriously, it suggests that useful analytics communication has three layers, and most teams only do one of them.

The first layer is observation. This is what happened. Sessions went up. Conversions went down. Cost per acquisition moved. This is the layer that almost every analytics setup handles reasonably well. The numbers exist. They can be read.

The second layer is interpretation. This is what it means. Sessions went up because of a spike in branded search following a PR mention. Conversions went down because a landing page change broke the mobile checkout flow. Cost per acquisition moved because a competitor entered the auction. This layer requires context. It requires someone who knows what else was happening in the business during the period in question. It requires joining dots that a dashboard cannot join automatically.

The third layer is implication. This is what we should do differently. If branded search spiked after a PR mention, what does that tell us about the value of earned media relative to paid? If a landing page change broke mobile checkout, what does that tell us about our QA process? If a competitor entered the auction and drove up CPAs, what does that tell us about our reliance on a single channel?

Most reporting stops at layer one. Some gets to layer two. Very few reach layer three consistently. And layer three is the only one that changes anything.

The reason I find the LockPickingLawyer framing useful is that he does all three, simultaneously, in real time. “Nothing on one” is observation. “That pin is not binding” is interpretation. “So I am going to move on and come back to it” is implication. He does this so naturally that most viewers do not notice they are receiving three layers of analysis at once. That is what good narration looks like.
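One lightweight way to make the three layers stick is to treat them as a required shape for any reported finding, so a report literally cannot be filed with observation alone. Here is a minimal sketch of that idea; the class and field names are illustrative, not a reference to any real tool:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """One narrated finding: all three layers, or it is not done."""
    observation: str      # layer one: what happened
    interpretation: str   # layer two: what it means, given context
    implication: str      # layer three: what we should do differently

    def narrate(self) -> str:
        """Render the finding as a single plain-English narration."""
        return (f"Observed: {self.observation} "
                f"Because: {self.interpretation} "
                f"Therefore: {self.implication}")


f = Finding(
    observation="Mobile conversion rate fell 40% week over week.",
    interpretation="Tuesday's landing page release broke the mobile checkout flow.",
    implication="Roll back the release and add a checkout smoke test to QA.",
)
print(f.narrate())
```

The point is not the code; it is that the structure refuses to let an analyst stop at layer one.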

How GA4 Makes This Problem Worse Before It Makes It Better

GA4 is a more powerful tool than Universal Analytics was. It is also, for many users, a more confusing one. The event-based model is more flexible. The exploration reports are genuinely useful once you know how to use them. The integration with Google Ads and BigQuery opens up analytical possibilities that were harder to access before.

But flexibility without structure is just complexity. And complexity, in the hands of a team that has not built the narration layer, produces more noise, not more signal.

One of the most common issues I see in GA4 setups is duplicate conversion tracking. Events get created multiple times, or the same action gets counted as a conversion in both GA4 and Google Ads, and suddenly the numbers do not add up and nobody can explain why. Moz has a solid breakdown of how duplicate conversions happen in GA4 and how to catch them before they distort your reporting. That kind of data hygiene issue is exactly the sort of thing that makes the narration layer harder. If you cannot trust the numbers, you cannot interpret them confidently.
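As a sketch of what that hygiene check can look like in practice, here is a small Python routine that flags purchase events sharing a transaction ID. The field names are illustrative, simplified from the shape of GA4's event data rather than its actual export schema:

```python
from collections import Counter


def find_duplicate_conversions(events):
    """Flag conversion events that share a transaction ID.

    `events` is a list of dicts with 'event_name' and 'transaction_id'
    keys -- an illustrative shape, not GA4's exact export schema.
    Returns {transaction_id: count} for any ID seen more than once.
    """
    ids = Counter(
        e["transaction_id"]
        for e in events
        if e["event_name"] == "purchase" and e.get("transaction_id")
    )
    return {tid: n for tid, n in ids.items() if n > 1}


events = [
    {"event_name": "purchase", "transaction_id": "T100"},
    {"event_name": "purchase", "transaction_id": "T100"},  # double-fired tag
    {"event_name": "purchase", "transaction_id": "T200"},
    {"event_name": "page_view", "transaction_id": None},
]

print(find_duplicate_conversions(events))  # {'T100': 2}
```

A check like this, run routinely, catches the double-fired tag before it inflates the monthly numbers and erodes trust in everything else on the report.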

There is also the question of what to do with GA4 data once you have it. For teams handling serious data volumes, exporting GA4 data to BigQuery is worth understanding. It removes the sampling limitations that can distort analysis at scale and gives analysts more control over how they cut the data. But again, the tool is only as useful as the person operating it and the process around them.

I spent time early in my career teaching myself to code because I could not get budget to hire someone who already knew how. That experience taught me something I still believe: the constraint forces clarity. When you have to build something yourself from scratch, you think very carefully about what it actually needs to do. Most analytics setups suffer from the opposite problem. The tool is already there, already tracking, already producing data, and nobody has asked the foundational question: what decision are we trying to make, and what data do we need to make it?

What Good Narration Actually Looks Like in Practice

I want to be concrete about this, because “narrate your data” is the kind of advice that sounds obvious and then gets ignored because nobody explains what it means in practice.

Good narration starts before the report is written. It starts with a question. Not “what happened last month?” but “did the campaign we ran in the second week of the month change anything, and if so, what?” That question shapes what you look at, what you ignore, and what you highlight. It is the difference between an analyst who pulls everything and an analyst who pulls what matters.

Early in my agency career, I ran a paid search campaign for a music festival at lastminute.com. It was a relatively simple campaign by today’s standards. But within roughly a day, we were looking at six figures of revenue. The reason I remember it clearly is not the number. It is that we were watching it in real time and narrating it to each other. “That keyword is converting at twice the rate of this one. Move budget.” “That ad copy is outperforming the control. Pause the control.” That real-time narration, between people who understood both the data and the business objective, is what made the campaign work. Nobody waited for a monthly report.

Good narration also means being explicit about what the data cannot tell you. If sessions went up but you cannot attribute it to a specific source because your UTM tagging broke, say that. If conversions look strong but you know there is a duplicate tracking issue that inflates the number, flag it. Honest approximation is more useful than false precision, and the teams I have worked with that built trust with their clients were almost always the ones who were willing to say “we think this is what happened, and here is why we are not completely certain.”

For teams thinking about what data-driven marketing actually requires to function, the narration layer is not optional. It is the part that turns data into decisions. Without it, you have a very expensive observation system.

The Metrics That Get Reported Versus the Metrics That Get Used

There is a gap in most marketing teams between the metrics that appear in reports and the metrics that actually influence decisions. It is worth being honest about this gap, because closing it is where most of the value in analytics sits.

Reported metrics tend to be the ones that are easy to pull and easy to present. Sessions, pageviews, click-through rates, impressions. These are fine as context. They are not fine as the primary basis for strategic decisions.

The metrics that actually get used, in the teams I have seen operate well, tend to be more specific and more tied to a business outcome. Not “traffic” but “traffic from the segment most likely to convert.” Not “email open rate” but “revenue per email sent to lapsed customers.” Not “cost per click” but “cost per acquisition by channel and by product category.” Understanding which marketing metrics connect to revenue rather than just activity is a prerequisite for useful reporting.
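To make the contrast concrete, here is a short sketch that computes cost per acquisition by channel from raw spend and conversion rows. The data shape is invented for illustration; the design choice worth noting is that a channel with spend and zero conversions returns None rather than being hidden, because "nothing on this channel" is itself a signal worth narrating:

```python
from collections import defaultdict


def cpa_by_channel(rows):
    """Cost per acquisition per channel from spend/conversion rows.

    Each row: {'channel': str, 'spend': float, 'conversions': int}.
    Channels with spend but no conversions return None instead of
    disappearing -- a channel producing nothing is a finding, not a gap.
    """
    spend = defaultdict(float)
    conversions = defaultdict(int)
    for r in rows:
        spend[r["channel"]] += r["spend"]
        conversions[r["channel"]] += r["conversions"]
    return {
        ch: (round(spend[ch] / conversions[ch], 2) if conversions[ch] else None)
        for ch in spend
    }


rows = [
    {"channel": "paid_search", "spend": 1200.0, "conversions": 40},
    {"channel": "paid_social", "spend": 800.0, "conversions": 0},
    {"channel": "email", "spend": 150.0, "conversions": 30},
]
print(cpa_by_channel(rows))
# {'paid_search': 30.0, 'paid_social': None, 'email': 5.0}
```

In the LockPickingLawyer framing, that output already narrates itself: nothing on paid social, slight click on paid search, strong signal on email.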

The LockPickingLawyer does not narrate every pin in the lock with equal weight. He focuses on the ones that are doing something. “Slight click on three” gets attention because it is a signal. “Nothing on one” gets acknowledged and moved past. That triage is what good analysts do. They know which numbers are signal and which are noise, and they spend their narration time on the signal.

For content-focused teams, Buffer’s breakdown of content marketing metrics is worth reading not because it tells you which metrics to use, but because it forces the question of what each metric is actually measuring and whether that thing connects to a business outcome you care about.

I have judged the Effie Awards, which are specifically about marketing effectiveness, and one of the things that stands out when you read the entries is how clearly the best campaigns connect activity to outcome. They do not report on reach and then separately report on sales. They show the link. They narrate the connection. That is the standard worth aiming for in day-to-day analytics, even if the audience is a weekly internal meeting rather than an awards panel.

Why Keyword Data Deserves More Narration Than It Gets

One area where the narration gap is particularly costly is keyword analysis. GA4 famously obscures keyword data for organic search, which means teams have to supplement it with Search Console or third-party tools to get a complete picture. But even when the data is available, it tends to get reported in a way that strips out most of the useful context.

A keyword report that shows you which terms are driving traffic is a layer-one report. A keyword report that shows you which terms are driving traffic from people who are likely to convert, segmented by intent, and compared against what competitors are ranking for, is getting toward layer two. A keyword report that tells you which gaps represent the highest-value opportunity given your current domain authority and content resources is layer three.
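The move from layer one to layer two can be sketched in a few lines. The intent rules below are deliberately naive placeholders (real classification needs far more signal than substring matching); the point is the shape of the output, which annotates and ranks rather than just listing:

```python
# Naive keyword substrings used as stand-in intent signals for this sketch.
TRANSACTIONAL = ("buy", "price", "discount", "near me")
INFORMATIONAL = ("how", "what", "why", "guide")


def triage_keywords(rows):
    """Move a flat keyword list toward a layer-two view.

    Each row: {'keyword': str, 'clicks': int, 'conversions': int}.
    Returns rows annotated with intent and conversion rate, sorted so
    the strongest signals surface first.
    """
    out = []
    for r in rows:
        kw = r["keyword"].lower()
        if any(t in kw for t in TRANSACTIONAL):
            intent = "transactional"
        elif any(t in kw for t in INFORMATIONAL):
            intent = "informational"
        else:
            intent = "unclassified"
        cvr = r["conversions"] / r["clicks"] if r["clicks"] else 0.0
        out.append({**r, "intent": intent, "cvr": round(cvr, 3)})
    return sorted(out, key=lambda r: r["cvr"], reverse=True)


rows = [
    {"keyword": "buy festival tickets", "clicks": 200, "conversions": 18},
    {"keyword": "how do music festivals work", "clicks": 500, "conversions": 2},
    {"keyword": "festival lineup 2024", "clicks": 300, "conversions": 6},
]
for r in triage_keywords(rows):
    print(r["keyword"], r["intent"], r["cvr"])
```

Even this crude version changes the conversation: the report now leads with the terms that are clicking, not the terms with the most traffic.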

Understanding how to get keyword data out of Google Analytics is a practical starting point, but the narration work happens after the data is pulled, not during the extraction. The extraction is just observation. The interpretation and implication layers require someone who understands the business well enough to know what a keyword signal actually means for the content and commercial strategy.

If you want to go deeper on how analytics fits into performance marketing more broadly, the Marketing Analytics section of The Marketing Juice covers measurement frameworks, GA4 specifics, and the commercial context that makes data useful rather than decorative.

The Analyst Who Narrates Is the One Who Gets Listened To

There is a career observation buried in all of this that is worth making explicit. The analysts and marketing managers who get listened to in business settings are almost never the ones with the most sophisticated models. They are the ones who can explain what the data means in plain English, connect it to a decision, and be honest about what they do not know.

I have sat in hundreds of client meetings over two decades. The presentations that changed things were not the ones with the most data. They were the ones where someone said, clearly and specifically: “This channel is producing nothing. This one has a slight click. Here is what I think we should do about it, and here is what I am not certain about.”

That is the LockPickingLawyer phrase applied to marketing. It is not glamorous. It does not require a proprietary framework or a new technology stack. It requires the discipline to be clear, the confidence to interpret rather than just report, and the honesty to say when the data is ambiguous.

Most analytics setups have too much data and not enough narration. The fix is not more data. The fix is better questions, cleaner focus, and someone willing to stand next to the numbers and explain what they mean for the business.

Nothing on one. Nothing on two. Slight click on three. That is the whole job.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the LockPickingLawyer phrase and why does it matter for marketing analytics?
The LockPickingLawyer is a YouTube channel where the host narrates his lock-picking process in real time, describing what each pin is doing as he works. The phrase “nothing on one, nothing on two, slight click on three” is a model for how analysts should communicate data: not just reporting what happened, but narrating what each signal means and what it implies for the next decision. Most marketing analytics stops at observation. The LockPickingLawyer model pushes toward interpretation and implication, which is where the actual value sits.
Why do most marketing dashboards fail to drive decisions?
Most dashboards are built for comprehensiveness rather than clarity. They report what happened without explaining why or what to do differently. The result is a reporting artefact that gets scanned for obvious problems and then ignored. Dashboards that drive decisions tend to be built around a specific question, focus on a small number of metrics that connect to business outcomes, and include a narration layer that interprets the data rather than just displaying it.
What are the three layers of analytics communication?
The three layers are observation, interpretation, and implication. Observation is what happened. Interpretation is what it means, given the business context. Implication is what should change as a result. Most analytics setups handle observation reasonably well. Fewer reach interpretation consistently. Almost none reach implication as a regular output. The teams that get the most value from their analytics invest in all three layers, not just the first.
How does GA4 affect the narration layer in analytics?
GA4 is a more flexible and powerful tool than its predecessor, but flexibility without structure adds complexity. Common issues like duplicate conversion tracking, obscured organic keyword data, and the event-based model’s learning curve can all undermine confidence in the numbers before interpretation even begins. Getting the data hygiene right, including checking for duplicate conversions and understanding what each event actually measures, is a prerequisite for useful narration. Without clean data, even the best analyst cannot narrate with confidence.
What is the difference between reported metrics and used metrics in marketing?
Reported metrics are the ones that are easy to pull and easy to present: sessions, impressions, click-through rates. Used metrics are the ones that actually inform decisions: cost per acquisition by channel, revenue per email sent to a specific segment, conversion rate by traffic source and product category. The gap between these two sets is where most analytics value is lost. Closing the gap requires starting with a decision rather than a data pull, and building reporting around what you need to know rather than what is easy to show.
