Cookieless Targeting: What Works Now
Cookieless targeting means reaching audiences without relying on third-party cookies, the small tracking files that advertisers have used for decades to follow users across the web. With browsers restricting or blocking third-party cookies and regulatory pressure tightening around the world, marketers need alternative methods to identify, segment, and reach audiences at scale.
The shift is not theoretical. It is already affecting campaign performance, attribution models, and how teams think about data ownership. The marketers who are handling it well are not waiting for a single replacement technology to emerge. They are building a stack of complementary approaches that work together.
Key Takeaways
- Third-party cookies are already restricted in Firefox and Safari, and Chrome’s deprecation plans, though delayed, are still in motion. The dependency is a liability regardless of timeline.
- First-party data is the most durable asset in cookieless targeting, but most brands are not collecting it with enough intentionality to make it useful at scale.
- Contextual targeting has matured significantly and is now a legitimate primary channel, not a fallback for brands that cannot afford behavioural data.
- Privacy-preserving technologies like the Privacy Sandbox and clean rooms are not plug-and-play solutions. They require technical investment and realistic expectations about what they can and cannot do.
- Attribution in a cookieless world requires a shift from deterministic precision to probabilistic approximation, and most measurement frameworks are not ready for that shift yet.
In This Article
- Why the Cookie Problem Is Bigger Than Most Teams Think
- What First-Party Data Actually Means in Practice
- Contextual Targeting Has Grown Up
- Cohort-Based and Privacy-Preserving Technologies
- Identity Solutions and the Walled Garden Reality
- Measurement Without Cookies
- How to Prioritise When Everything Feels Urgent
Why the Cookie Problem Is Bigger Than Most Teams Think
When I was managing paid search at scale, attribution felt like a solved problem. You had a click, a cookie, a conversion. The chain was clean. Running campaigns across 30 industries taught me how much of that confidence was borrowed from a technical infrastructure that was never designed to last. Third-party cookies were a workaround, not an architecture. The industry just treated them like architecture for 25 years.
The practical consequences are already visible. Audience match rates are down across major DSPs. Retargeting pools are smaller than they were three years ago. Attribution windows are producing numbers that do not reconcile with actual revenue. These are not edge cases. They are the new baseline for any team still running a cookie-dependent stack.
The regulatory layer adds another dimension. GDPR fundamentally changed how consent works in Europe, and similar frameworks have followed in California, Brazil, India, and elsewhere. Even if Chrome’s deprecation timeline shifts again, the regulatory direction is not ambiguous. Consent requirements alone are enough to make third-party cookie dependency a structural risk, regardless of what any single browser decides.
If your marketing operations are still built around the assumption that you can track users across the open web without their explicit consent, that assumption is already broken in most jurisdictions. The question is not whether to change. It is how fast and in which direction.
For a broader view of how this fits into the way modern marketing teams are structured and run, the Marketing Operations hub covers the operational layer that holds everything together, from data infrastructure to team design.
What First-Party Data Actually Means in Practice
First-party data is data you collect directly from your own customers and prospects, with their knowledge and consent. Email addresses, purchase history, on-site behaviour, CRM records, loyalty programme data. You own it. Nobody can deprecate it. And it is, by definition, more accurate than anything inferred from cross-site tracking.
The problem is that most brands have first-party data in theory but not in practice. They have it sitting in disconnected systems, collected inconsistently, with gaps in consent documentation and no coherent identity resolution layer connecting it all. I have seen this repeatedly when stepping into agency relationships with established brands. The data exists. The infrastructure to use it does not.
Building a usable first-party data asset requires three things working together. First, a clear value exchange: you need a reason for people to share their data with you, whether that is personalisation, exclusive content, early access, or something else that is genuinely worth the trade. Second, a consistent collection mechanism: forms, login walls, preference centres, and progressive profiling that build the record over time rather than trying to capture everything at once. Third, an identity resolution layer that connects data points across touchpoints so you are building profiles, not just lists.
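To make the identity resolution idea concrete, here is a minimal sketch in Python of what that layer does at its simplest: normalise and hash a consented email address, then merge records from different touchpoints under that key. All names and data are invented for illustration; real resolution layers handle multiple identifier types, consent flags, and fuzzier matching.

```python
import hashlib

def normalize_email(email: str) -> str:
    """Basic normalisation before hashing: trim whitespace, lowercase.
    (Identity vendors apply stricter, provider-specific rules.)"""
    return email.strip().lower()

def hashed_id(email: str) -> str:
    """A stable, pseudonymous key derived from a consented email."""
    return hashlib.sha256(normalize_email(email).encode()).hexdigest()

def resolve_profiles(records):
    """Merge touchpoint records (CRM rows, web events, loyalty data)
    into unified profiles keyed on the hashed email."""
    profiles = {}
    for record in records:
        key = hashed_id(record["email"])
        profile = profiles.setdefault(key, {"sources": set(), "attributes": {}})
        profile["sources"].add(record["source"])
        profile["attributes"].update(record.get("attributes", {}))
    return profiles

# Hypothetical example: the same person appears in the CRM and in web analytics
records = [
    {"email": "Jane.Doe@example.com", "source": "crm",
     "attributes": {"lifetime_value": 420}},
    {"email": " jane.doe@example.com", "source": "web",
     "attributes": {"last_page": "/pricing"}},
]
profiles = resolve_profiles(records)
# Both records collapse into a single profile with two sources
```

The point of the sketch is the join, not the hashing: without a shared key across systems, you have lists; with one, you have profiles.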
None of this is fast. The brands that are in the strongest position today started investing in this three or four years ago. If you are starting now, that is not a reason to delay further. It is a reason to move with more urgency, not less.
Contextual Targeting Has Grown Up
Contextual targeting, matching ads to the content of the page rather than the profile of the user, was the dominant model before behavioural advertising took over. It fell out of fashion not because it stopped working but because behavioural targeting appeared to work better, at least by the metrics the industry chose to measure.
The version of contextual targeting that exists now is substantially more sophisticated than keyword matching against page content. Modern contextual platforms use natural language processing to understand the semantic meaning of content, not just its surface keywords. They can identify emotional tone, topic clusters, and audience intent signals from the content itself, without needing to know anything about the individual reading it.
For brand safety, this is also a significant improvement. Keyword blocklists were always a blunt instrument. Blocking the word “shooting” to avoid gun-related content also blocked sports coverage and film reviews. Semantic contextual targeting can distinguish between those contexts in ways that keyword matching never could.
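A toy example makes the bluntness concrete. The snippet below (illustrative Python, invented headlines) shows how a keyword blocklist flags perfectly safe contexts that merely contain the word; a semantic classifier would need the surrounding meaning, not the word alone, to decide.

```python
# A one-word blocklist, as used in naive brand-safety filtering
BLOCKLIST = {"shooting"}

def keyword_blocked(text: str) -> bool:
    """Blunt instrument: flag any page containing a blocked word,
    regardless of context."""
    words = {w.strip(".,!?:").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# Both of these are safe contexts, yet both get blocked
headlines = [
    "Striker's late shooting drill wins the cup",      # sports coverage
    "Film review: a masterclass in night shooting",    # cinematography
]
```

Semantic contextual platforms solve this by classifying the whole passage, so the sports story and the film review land in their actual topic categories rather than a weapons category.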
I am not suggesting contextual targeting replaces everything. But I have watched too many media plans dismiss it as a fallback option without testing it properly. In several campaigns I have overseen across travel and retail verticals, contextual placements performed comparably to behavioural on brand metrics and, in some cases, better on cost efficiency. The assumption that behavioural is always superior has not been rigorously tested by most teams. It has just been assumed.
Cohort-Based and Privacy-Preserving Technologies
Google’s Privacy Sandbox introduced the concept of cohort-based targeting: grouping users by shared interests or behaviours without exposing individual identities to advertisers. The idea is that you can reach people who are interested in, say, travel insurance, without knowing exactly who those people are as individuals.
The implementation has been complicated. The original FLoC proposal was replaced by the Topics API after significant criticism from privacy advocates and publishers. The Topics API assigns users to interest categories based on their browsing history, processed entirely on-device, and shares only a limited number of topics with advertisers at any given time. Advertisers get a signal. They do not get a profile.
The honest assessment is that these technologies are not yet mature enough to carry the full weight of audience targeting for most advertisers. The signal is coarser than what third-party cookies provided. Match rates are lower. The ecosystem of DSPs and publishers that support these APIs is still developing. That does not mean you should ignore them. It means you should test them with realistic expectations and build operational familiarity now, before you need them at scale.
Data clean rooms are a separate but related development. They allow two parties, typically an advertiser and a publisher or platform, to match their datasets in a privacy-preserving environment without either party exposing raw data to the other. Retail media networks have been among the early adopters, using clean rooms to give brands access to purchase data for targeting and measurement without handing over customer records. For large advertisers with strong first-party data, clean rooms represent a genuine capability. For smaller advertisers, the technical and commercial barriers are still significant.
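The matching primitive behind a clean room can be illustrated in a few lines of Python. Both parties hash their own identifiers, and the match is computed on the hashes; in a real clean room this happens inside a controlled environment, and only aggregate outputs (match rates, measurement results) are released, never the matched set itself. The data below is invented.

```python
import hashlib

def hash_ids(emails):
    """Each party hashes its own identifiers before matching,
    so raw customer records are never exchanged."""
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest()
            for e in emails}

# Hypothetical inputs from each side of the match
advertiser_crm = ["ana@example.com", "ben@example.com", "cara@example.com"]
retailer_buyers = ["Ben@example.com", "dev@example.com"]

overlap = hash_ids(advertiser_crm) & hash_ids(retailer_buyers)
match_rate = len(overlap) / len(advertiser_crm)
# One shared customer, reported only as an aggregate match rate
```

Everything else a clean room vendor sells, access controls, aggregation thresholds, query auditing, exists to stop either party walking away with more than that aggregate.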
Identity Solutions and the Walled Garden Reality
Universal ID solutions, such as Unified ID 2.0, ID5, and LiveRamp’s RampID, attempt to create a persistent, privacy-compliant identifier based on authenticated user data, typically a hashed email address. The idea is that when a user logs in somewhere and consents to data use, that identifier can be passed across participating publishers and platforms, enabling targeting and measurement without third-party cookies.

These solutions have real traction in certain environments, particularly among premium publishers with high login rates. The limitation is coverage. Universal IDs only work where users are authenticated, and across the open web, authentication rates are not high enough to make this a comprehensive solution. You end up with a patchwork: strong signal where users are logged in, no signal everywhere else.
The walled gardens, Google, Meta, Amazon, are largely unaffected by cookie deprecation within their own ecosystems because they operate on first-party login data. This is one of the structural advantages that cookie deprecation actually reinforces. If your targeting and measurement relies heavily on these platforms, the operational impact of cookie deprecation is lower. If you are running campaigns across the open web through programmatic channels, the impact is considerably higher.
This is not an argument to consolidate everything into walled gardens. Dependence on any single platform is its own risk, and the cost of that dependence tends to rise over time as the platforms extract more value from their position. But it is worth being clear-eyed about where the signal is actually reliable and where you are already working with approximation.
Measurement Without Cookies
Attribution is where the cookieless transition hits hardest. The last-click, cookie-based attribution model that most performance marketing teams have used for years is already producing unreliable results in many environments. Safari’s Intelligent Tracking Prevention has been blocking cross-site cookies since 2017. Any campaign running across Safari users has had degraded attribution for years, whether teams noticed it or not.
When I judged the Effie Awards, one of the things that struck me was how few entries could demonstrate causal impact with any rigour. Most relied on correlation and assumed attribution chains. That was before cookie deprecation accelerated the problem. The measurement gap between what teams report and what is actually happening has been widening for years, and most organisations have not updated their frameworks to reflect that.
The alternatives that are gaining ground are media mix modelling, incrementality testing, and aggregated measurement approaches. Media mix modelling uses statistical analysis of historical spend and outcome data to estimate the contribution of different channels. It does not require individual-level tracking. It requires good data and honest interpretation. Incrementality testing, running controlled experiments where you hold back a portion of your audience from seeing an ad and measure the difference in outcomes, is the most direct way to measure causal impact. Both approaches require more patience and statistical sophistication than last-click attribution, but they produce results that are actually defensible.
The shift is from deterministic measurement, where you can trace every conversion to a specific touchpoint, to probabilistic approximation, where you are making informed estimates based on patterns. That is uncomfortable for teams and clients who have been sold on the precision of digital attribution. But it is more honest about what measurement can actually tell you.
Marketing operations as a discipline is increasingly about building the infrastructure that makes this kind of measurement possible. The Marketing Operations hub covers how teams are approaching data infrastructure, measurement frameworks, and the operational decisions that determine whether marketing can actually be held accountable for business outcomes.
How to Prioritise When Everything Feels Urgent
The cookieless transition involves enough moving parts that it is easy to end up in a state of permanent evaluation, always assessing options and never committing to any of them. I have seen this happen in agencies and in-house teams alike. The pace of change in the technology layer becomes an excuse for inaction in the operational layer.
The practical sequencing that I would recommend starts with an audit of where you are actually dependent on third-party cookies right now. Not in theory. In your actual live campaigns, your attribution setup, your retargeting audiences, your lookalike models. Map the dependency before you try to solve it.
From there, the highest-return investment for most organisations is first-party data infrastructure. Not because it solves every problem, but because it compounds. Every customer interaction that captures consented, usable data makes the next campaign slightly more effective. The brands that have been building this consistently for several years are not just better positioned for cookie deprecation. They have a fundamentally better understanding of their customers.
Contextual targeting is worth testing in parallel, not as a replacement for everything else but as a channel that can perform well without audience data and that is worth understanding in your specific category. How your brand team is structured will affect how easily you can integrate contextual into your planning process, since it requires closer collaboration between media and content than behavioural targeting typically does.
Measurement reform is the hardest conversation to have internally because it requires telling stakeholders that the numbers they have been seeing may not mean what they thought. But it is the most important one. Running campaigns without reliable measurement is not a sustainable position. Understanding how users actually behave on your own properties is a starting point for building measurement that does not depend on cross-site tracking.
The teams that are handling this well are not the ones with the most sophisticated technology. They are the ones that have been honest about their dependencies, made deliberate choices about where to invest, and built enough internal capability to evaluate vendors without being sold solutions they do not need. That combination of clarity and scepticism is harder to build than any technology stack, but it is more durable.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
