The Forrester Wave for Marketing Automation: What It Measures

The Forrester Wave for marketing automation is a vendor evaluation framework that scores platforms across current offering, strategy, and market presence. It is one of the most referenced analyst reports in enterprise martech buying decisions, and it shapes shortlists, procurement conversations, and board-level technology discussions whether marketers realise it or not.

But a Wave report is a snapshot. It reflects how vendors performed against a specific set of criteria at a specific point in time. Knowing how to read it, and more importantly what it cannot tell you, is what separates a good technology decision from an expensive one.

Key Takeaways

  • The Forrester Wave scores vendors on current offering, strategy, and market presence, not on fit for your specific business model or team maturity.
  • Leaders in the Wave are not automatically the right choice. Mid-tier platforms frequently outperform Leaders in real-world deployments for mid-market organisations.
  • Forrester’s criteria weight enterprise capability heavily, which means smaller or faster-growing businesses are often evaluating the wrong axis entirely.
  • The most useful way to use the Wave is as a starting point for your shortlist, not as a final answer. Your use case, integration requirements, and team capability matter more than analyst positioning.
  • Wave reports are updated periodically, and the gap between publication and your buying decision can be 12 to 18 months. Platform capabilities shift faster than the report cycle.

What the Forrester Wave Actually Evaluates

Forrester evaluates marketing automation platforms across three broad dimensions. Current offering covers the depth and breadth of features available today. Strategy looks at product roadmap, vision, and commercial direction. Market presence considers revenue, customer count, and ecosystem scale.

Within current offering, Forrester typically assesses capabilities like lead management, campaign orchestration, analytics and reporting, AI-driven features, native integrations, and ease of use. These criteria are weighted, and the weightings shift between Wave iterations, which is one reason a vendor can drop in the rankings without changing its product significantly.
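That re-weighting effect is easy to make concrete with a small sketch. The vendors, criterion scores, and weights below are invented for illustration only, not taken from any Wave report: the same two vendors swap rank order purely because the weighting scheme shifts between iterations, with no change to either product.

```python
# Illustrative only: hypothetical vendors, scores, and weights, not real Wave data.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of criterion scores; weights are normalised to sum to 1."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

vendors = {
    "Vendor A": {"lead_mgmt": 4.5, "ai_features": 2.5, "ease_of_use": 4.0},
    "Vendor B": {"lead_mgmt": 3.5, "ai_features": 4.5, "ease_of_use": 3.0},
}

# An earlier iteration weighting lead management heavily, and a later one
# weighting AI features heavily (both hypothetical).
weights_earlier = {"lead_mgmt": 0.5, "ai_features": 0.2, "ease_of_use": 0.3}
weights_later = {"lead_mgmt": 0.3, "ai_features": 0.5, "ease_of_use": 0.2}

for label, weights in [("earlier weights", weights_earlier),
                       ("later weights", weights_later)]:
    ranked = sorted(vendors,
                    key=lambda v: weighted_score(vendors[v], weights),
                    reverse=True)
    # Earlier weights rank Vendor A first; later weights rank Vendor B first.
    print(label, ranked)
```

Nothing about either vendor changed between the two runs; only the weights did. That is the mechanism behind a vendor "dropping" in the rankings without shipping a worse product.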

The methodology is rigorous by analyst standards. Vendors submit detailed responses, Forrester conducts briefings and demos, and customer reference checks are part of the process. That said, vendors invest considerable resource in their Wave submissions. The largest platforms have dedicated analyst relations teams whose job is to present their capabilities in the best possible light. That is not a criticism of Forrester. It is just the commercial reality of how these evaluations work.

If you want a broader foundation for understanding how marketing automation platforms are structured and what they are designed to do, the Marketing Automation hub at The Marketing Juice covers the category from first principles through to vendor selection and deployment.

Who Sits in the Leaders Category and Why It Matters Less Than You Think

The Leaders category in recent Forrester Wave evaluations for marketing automation has consistently included the major enterprise platforms: Adobe Marketo Engage, Salesforce Marketing Cloud Account Engagement (formerly Pardot), Oracle Eloqua, and HubSpot Marketing Hub at the enterprise tier. SAP Emarsys and Braze have featured prominently in evaluations focused on B2C and cross-channel engagement.

Being a Leader means a platform scored highly across Forrester’s weighted criteria set. It does not mean it will work well for your organisation. I have seen this play out more times than I care to count. When I was running an agency and helping clients through technology selection, the instinct was always to reach for the Leader. It felt safe. It was defensible in a board presentation. But defensible and right are not the same thing.

One client, a mid-sized B2B technology company, went through a full procurement process and landed on a recognised Wave Leader. Eighteen months later, they had a platform they were using at roughly 20% of its capability, a CRM integration that had never fully worked, and a marketing team that had spent more time in implementation than in market. The platform was not the problem. The fit was.

The Strong Performers and Contenders categories in the Wave often contain platforms that are genuinely excellent for specific use cases. Platforms like ActiveCampaign, Klaviyo, and Iterable sit in these tiers in various evaluations, and for e-commerce, SMB, or lifecycle-focused marketing teams, they frequently outperform the Leaders in practical deployment. Wistia’s overview of Marketo’s automation capabilities gives a useful sense of what enterprise-tier feature depth actually looks like in practice, which helps calibrate whether that level of complexity is something your team can absorb.

How Forrester’s Criteria Are Weighted and What That Means for Your Shortlist

Forrester publishes its evaluation criteria and weightings as part of each Wave report. This is worth reading carefully before you use the report to build a shortlist. The criteria weightings tell you what Forrester considers important for the category as a whole, and those weightings reflect enterprise buying patterns more than mid-market or growth-stage ones.

Enterprise marketing automation evaluations tend to weight heavily on account-based marketing capability, complex multi-touch attribution, deep CRM integration, and scalability across large contact databases. If you are a 50-person company with a 100,000-contact database running relatively linear nurture sequences, those criteria are largely irrelevant to your buying decision. You are reading a report optimised for a different buyer.

This is not a flaw in the Forrester methodology. It is a feature of how analyst reports work. They are designed to serve the largest segment of their audience, which in enterprise software is typically the Global 2000. The report is still useful. You just need to apply a filter before you use it.

A practical approach is to take the Wave’s current offering scores and map them against the specific capabilities you actually need. If AI-driven lead scoring is a core requirement, weight it heavily in your own evaluation. If you have no immediate need for predictive analytics, discount it. Build your own weighted scorecard using Forrester’s criteria as a starting point, not as a final answer. Forrester’s own commentary on automation use cases gives useful context on how they think about the category beyond the Wave itself.
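As a sketch of what that buyer-side scorecard might look like, the snippet below uses entirely hypothetical criteria, weights, and vendor scores. Note the two buyer-added criteria, implementation timeline and pricing transparency, which the Wave does not score at all:

```python
# Hypothetical buyer-side weighted scorecard. All criteria names, weights,
# and vendor ratings below are illustrative assumptions, not Forrester figures.

criteria_weights = {
    "ai_lead_scoring": 0.30,          # core requirement, so weighted up
    "crm_integration": 0.25,
    "ease_of_use": 0.20,
    "implementation_timeline": 0.15,  # not scored in the Wave; added by the buyer
    "pricing_transparency": 0.10,     # likewise buyer-specific
}

def scorecard(vendor_scores: dict[str, float]) -> float:
    """Score a vendor (each criterion rated 0-5) against the buyer's own weights."""
    return sum(vendor_scores[c] * w for c, w in criteria_weights.items())

shortlist = {
    "Wave Leader": {
        "ai_lead_scoring": 4.5, "crm_integration": 4.5, "ease_of_use": 3.0,
        "implementation_timeline": 2.0, "pricing_transparency": 2.5,
    },
    "Strong Performer": {
        "ai_lead_scoring": 4.0, "crm_integration": 3.5, "ease_of_use": 4.5,
        "implementation_timeline": 4.0, "pricing_transparency": 4.0,
    },
}

for name, scores in sorted(shortlist.items(),
                           key=lambda kv: scorecard(kv[1]), reverse=True):
    print(f"{name}: {scorecard(scores):.2f}")
```

With these invented numbers, the Strong Performer edges out the Wave Leader once implementation and commercial criteria enter the weighting, which is exactly the kind of result a quadrant-style graphic cannot surface.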

The Gap Between Wave Publication and Your Buying Decision

Forrester publishes its marketing automation Wave reports on a roughly 18- to 24-month cycle. The evaluation process itself takes several months before publication. By the time you are reading a Wave report and using it to inform a buying decision, the data underpinning it could be 12 to 18 months old.

In a category moving as fast as marketing automation, that gap matters. AI-driven features in particular have changed significantly across most major platforms in the past two years. HubSpot’s documentation on AI in marketing automation reflects how quickly native AI capabilities are being embedded into platforms that previously relied on third-party integrations for that functionality. A platform that scored poorly on AI capability in a Wave published 18 months ago may look very different today.

The right way to handle this is to use the Wave as a starting framework and then supplement it with current product demos, recent G2 or Gartner Peer Insights reviews, and direct conversations with vendors about what has shipped since the last evaluation. Analyst reports are a lens, not a live feed.

I spent a period judging the Effie Awards, where the question was always whether the work drove real business outcomes rather than just looking impressive on a submission form. The same discipline applies here. A strong Wave positioning is an output of a vendor’s ability to perform well in an analyst evaluation process. What you need to know is whether the platform drives outcomes for your specific marketing operation.

What the Wave Does Not Measure

The Forrester Wave does not measure implementation quality, onboarding experience, or the real cost of getting a platform to production. These are often the factors that determine whether a marketing automation investment succeeds or fails.

It does not measure how well a platform integrates with your specific CRM, data warehouse, or CDP. Integration capability is scored in aggregate terms, not against your particular stack. A platform that integrates cleanly with Salesforce may require significant custom development to connect with a legacy ERP system that half your business runs on.

It does not measure vendor support quality at the tier you are buying. Enterprise support agreements and SMB support agreements from the same vendor can be substantially different experiences. Reference checks with customers at your contract level, not just enterprise reference accounts, are essential.

And it does not measure the commercial terms you will actually be offered. Pricing in marketing automation is notoriously opaque. List prices bear little resemblance to contracted prices for most buyers. The Wave tells you nothing about negotiating leverage, contract flexibility, or the total cost of ownership once professional services, training, and add-on modules are included. This Unbounce discussion on the limits of marketing automation captures some of the practical frustrations that do not appear in analyst evaluations.

How to Use the Forrester Wave Intelligently in a Buying Process

The most effective way to use the Forrester Wave in a platform selection process is as a structured starting point. It gives you a defensible shortlist rationale, a set of evaluation criteria you can adapt, and a benchmark for where vendors sit relative to each other on specific capability dimensions.

Start by reading the full report, not just the graphic. The narrative sections contain detail on where each vendor is strong and where Forrester sees gaps. A vendor that scores highly overall may have a specific weakness in the capability that matters most to you. That information is in the text, not in the vendor's position on the graphic.

Build your own evaluation matrix. Take Forrester’s criteria, weight them according to your actual requirements, and score vendors against your specific use cases. Include criteria the Wave does not cover: implementation timeline, support model, pricing transparency, and integration with your existing stack.

Run structured demos against a fixed scenario. Give every vendor the same brief: a specific campaign workflow, a defined lead scoring requirement, and a reporting output you need to produce. Evaluate them against the same task, not against their own demo scripts. I have been through enough platform selections to know that vendor-led demos are optimised to show what works, not what does not. The structured scenario approach surfaces the gaps.

Early in my career, when I was learning to build things myself rather than waiting for budget approval, I developed a habit of testing platforms under realistic conditions rather than ideal ones. The same instinct applies to technology evaluation. Put the platform under the conditions it will actually face, not the conditions the vendor has prepared for. Mailchimp’s documentation on automation flows is a useful reference for understanding what a simpler platform can deliver before you commit to enterprise-tier complexity.

Check references at your scale. Ask vendors for three customer references at a similar company size, similar use case, and similar integration environment. Ask those references specifically about implementation timeline, what did not work as expected, and what they would do differently. You will learn more from those conversations than from any analyst report.

The Vendors Worth Watching Outside the Leaders Category

Strong Performers and Contenders in the Forrester Wave are not consolation prizes. They are often the right answer for a significant proportion of buyers who are not enterprise-scale or who have specific use-case requirements that the Leaders do not serve well.

In B2C and e-commerce contexts, platforms like Klaviyo and Iterable have built genuinely strong capabilities around behavioural triggers, personalisation at scale, and revenue attribution that map directly to how e-commerce marketing teams actually work. They are not trying to be Eloqua. They are trying to be excellent at a specific set of problems, and for those problems they frequently are.

In the B2B mid-market, platforms like ActiveCampaign and Ortto offer a combination of automation depth and usability that enterprise platforms struggle to match at their price point. The trade-off is usually in reporting sophistication and enterprise integration capability, but for a marketing team of five to fifteen people running a defined set of nurture and lifecycle programmes, that trade-off is often worth making.

MarketingProfs’ foundational piece on what marketing automation is and why it matters is a useful reminder that the category was built around a relatively simple set of problems: automating repetitive marketing tasks and improving lead management. Not every organisation needs a platform that has evolved far beyond those origins.

Video engagement tracking is an increasingly relevant capability as more marketing teams build content programmes that include video. Vidyard’s integration with marketing automation platforms illustrates how the ecosystem around the core platforms is expanding, and how capability gaps in a Wave Leader can sometimes be filled by best-of-breed point solutions rather than by switching platforms entirely.

What a Wave Report Cannot Replace

No analyst report can replace an honest internal assessment of your marketing operation’s maturity and your team’s capacity to implement and run a new platform. This is the variable that most technology buying processes underweight, and it is the one that most often determines whether an investment succeeds.

When I was growing an agency from 20 to 100 people, technology decisions that looked straightforward at 20 people became genuinely complex at 100. The platforms that worked well at smaller scale needed replacing, and the replacement decisions were as much about team capability and process maturity as they were about feature sets. A Wave Leader that requires a dedicated marketing operations resource to run effectively is not the right choice for a team that does not have that resource and has no plan to hire one.

The Forrester Wave is a useful tool. It reflects genuine analytical rigour applied to a complex vendor landscape. But it is one input into a decision that needs to be grounded in your specific commercial context, your team’s capability, and your integration environment. Use it as a lens, apply your own filter, and make the decision based on what will actually work for your organisation rather than what looks best in a procurement presentation.

For a wider view of how marketing automation platforms fit into a broader technology and strategy context, the Marketing Automation hub covers everything from platform selection through to measurement and optimisation.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

What is the Forrester Wave for marketing automation?
The Forrester Wave for marketing automation is an analyst evaluation report that scores marketing automation platforms across three dimensions: current offering, strategy, and market presence. Vendors are placed into Leaders, Strong Performers, Contenders, or Challengers categories based on their weighted scores. The report is used by enterprise buyers as a reference point during platform selection and procurement processes.
How often does Forrester update its marketing automation Wave report?
Forrester typically updates its marketing automation Wave report on an 18- to 24-month cycle. The evaluation process itself takes several months before publication, which means the underlying data can be 12 to 18 months old by the time a buyer uses the report. Given how quickly platform capabilities are evolving, particularly around AI features, it is worth supplementing the Wave with current product demos and recent peer reviews.
Should I only consider Leaders in the Forrester Wave when selecting a marketing automation platform?
No. Leaders in the Forrester Wave are scored highly against criteria that weight enterprise capability heavily. For mid-market, SMB, or use-case-specific requirements, Strong Performers and Contenders often represent a better fit. Platforms like Klaviyo, ActiveCampaign, and Iterable sit outside the Leaders category in various evaluations but outperform Leaders in specific deployment contexts. The Wave is a starting point for a shortlist, not a final answer.
What does the Forrester Wave not measure in marketing automation platforms?
The Forrester Wave does not measure implementation quality, onboarding experience, real-world integration performance with your specific tech stack, vendor support quality at your contract tier, or total cost of ownership including professional services and add-on modules. These are often the factors that determine whether a platform investment succeeds or fails, and they require separate due diligence through reference checks, structured demos, and direct commercial negotiation.
How should I use the Forrester Wave in a marketing automation buying process?
Use the Forrester Wave to build a defensible initial shortlist and to understand the evaluation criteria that matter across the category. Read the full report narrative, not just the Wave graphic. Then build your own weighted scorecard based on your specific requirements, run structured demos against a fixed scenario, and conduct reference checks with customers at your scale and use case. The Wave is one input into the decision, not the decision itself.