Martech Selection: How to Buy Software You’ll Use

Martech selection is the process of evaluating, shortlisting, and choosing marketing technology that fits your team’s actual needs, not the needs of the vendor’s sales deck. Done well, it saves money, reduces complexity, and gives your team tools they use every day. Done badly, it produces shelfware: software that costs a fortune, gets licensed enthusiastically, and quietly dies in a browser tab nobody opens.

Most martech buying decisions go wrong before the first demo is booked. The failure isn’t in the evaluation process. It’s in the framing of the problem the tool is supposed to solve.

Key Takeaways

  • Most martech failures are caused by buying software before defining the business problem it needs to solve.
  • The total cost of a martech tool includes implementation, integration, training, and ongoing management, not just the licence fee.
  • Vendor demos are designed to impress, not to reveal limitations. Structure your evaluation to expose the gaps, not the highlights.
  • Adoption is the only metric that matters after go-live. A tool your team doesn’t use has a 0% ROI regardless of its feature set.
  • The best martech stack is the smallest one that covers your actual requirements. Complexity compounds over time.

Why Most Martech Decisions Go Wrong

I’ve sat in enough martech review meetings to recognise the pattern. Someone senior comes back from a conference or a peer conversation and mentions a platform they’ve heard good things about. The team starts a demo cycle. The vendor puts on a polished show. The features look impressive. Someone says “we could really use that” and before anyone has asked what problem it actually solves, procurement is involved and a contract is being negotiated.

The problem isn’t that the software is bad. Often it’s genuinely capable. The problem is that the buying process started with a solution rather than a problem. And when you start with a solution, you spend the entire evaluation confirming your choice rather than stress-testing it.

Martech decisions also suffer from a specific type of organisational pressure: the fear of being behind. Nobody wants to be the team that didn’t adopt the platform everyone else is using. That fear drives a lot of purchases that have nothing to do with commercial need and everything to do with competitive anxiety. I’ve seen organisations spend six figures on platforms that duplicated tools they already had, simply because the new one was getting more conference airtime.

The discipline required in martech selection isn’t technical. It’s commercial. It’s the ability to ask “what business outcome does this enable that we can’t achieve today?” and hold that question steady through a sales process specifically designed to make you forget it.

For a broader view of how technology decisions fit into the wider discipline of running a marketing function, the Marketing Operations hub covers the structural and operational context that makes these decisions land well or badly.

What Does a Martech Audit Tell You Before You Buy Anything?

Before you evaluate a single new tool, you need an honest picture of what you already have. This sounds obvious. In practice, most marketing teams don’t have one.

A martech audit is a structured inventory of every tool your team currently pays for, uses, or has access to. It maps what each tool does, who uses it, how frequently, what it costs, and whether it integrates with anything else. The results are usually uncomfortable. You’ll find tools nobody uses. You’ll find tools doing the same job. You’ll find integrations that were set up once and haven’t been maintained. You’ll find licences that auto-renewed for three years because nobody was watching the invoice.
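An audit like this is usually a spreadsheet, but the logic behind it is simple enough to sketch. The snippet below is a hypothetical illustration (the tool names, costs, and thresholds are invented): an inventory of tool records, plus two checks that surface the uncomfortable findings described above, under-used licences and duplicated categories.

```python
from collections import Counter

# Hypothetical audit inventory; every tool, cost, and figure here is made up.
tools = [
    {"name": "EmailPro",   "category": "email",     "annual_cost": 12_000,
     "active_users": 9,  "licensed_seats": 10, "integrated": True},
    {"name": "BlastMail",  "category": "email",     "annual_cost": 8_000,
     "active_users": 1,  "licensed_seats": 15, "integrated": False},
    {"name": "ChartThing", "category": "reporting", "annual_cost": 4_000,
     "active_users": 0,  "licensed_seats": 5,  "integrated": False},
]

# Flag tools where fewer than 30% of licensed seats are actually in use.
underused = [t["name"] for t in tools
             if t["active_users"] / t["licensed_seats"] < 0.3]

# Flag categories covered by more than one tool (likely duplication).
duplicated = [cat for cat, n in Counter(t["category"] for t in tools).items()
              if n > 1]

print("Underused tools:", underused)          # → ['BlastMail', 'ChartThing']
print("Duplicated categories:", duplicated)   # → ['email']
```

The 30% threshold is arbitrary; the point is that once the inventory exists as structured data, the uncomfortable questions fall out of it mechanically.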

When I was running an agency and we grew from around 20 people to closer to 100, the martech sprawl was significant. Tools that made sense for a small team stopped making sense at scale. Tools that different departments had bought independently created data silos that cost us far more to unpick than the tools themselves had ever cost. The audit wasn’t a nice-to-have. It was the only way to make rational decisions about what came next.

The audit also forces a conversation about utilisation. A 15% adoption rate inside your team isn’t a tool problem; it’s an onboarding or fit problem, and buying something new won’t fix it. Understanding why existing tools aren’t being used is often more instructive than any vendor demo.

The three pillars of marketing operations that MarketingProfs identified (people, process, and platforms) are a useful frame here. Technology is the last of the three, not the first. If the people and process foundations aren’t solid, new platforms just inherit the dysfunction.

How Do You Define Requirements Without Being Captured by Features?

Requirements definition is where most martech selection processes collapse. Teams either write requirements that are too vague (“we need something that improves our email marketing”) or they write requirements that are essentially a vendor’s feature list copied from a website they visited last week.

Good requirements start with use cases, not features. A use case describes a specific thing a specific person in your team needs to do, the current friction in doing it, and the measurable outcome that would improve if the friction were removed. That’s it. No feature speculation. No “wouldn’t it be nice if.” Just: who, what, why, and what better looks like.

From use cases, you can derive functional requirements. From functional requirements, you can evaluate whether a given tool meets them. This sequence matters. If you reverse it and start from features, you end up with a requirements document that any half-decent vendor can satisfy on paper, which makes shortlisting almost meaningless.

There’s a category of requirements that teams consistently underweight: integration requirements. Every tool you add needs to connect to your existing stack. The question isn’t whether the integration exists. It’s whether the integration is native or third-party, how much data it passes, how often it syncs, and what happens when it breaks. I’ve watched organisations buy platforms with impressive feature sets that turned out to be effectively isolated from the rest of their stack. The tool worked. The data didn’t flow. The use cases that justified the purchase never materialised.

Security and data handling requirements also need to be in scope from the start. GDPR and data privacy obligations affect how martech vendors can process and store your customer data. A tool that doesn’t meet your compliance requirements isn’t a candidate, regardless of its other merits. Building this into your requirements upfront saves you from getting attached to a platform that your legal team will reject at the contract stage.

How Should You Structure the Vendor Evaluation Process?

Vendor evaluations have a structural problem: they’re designed by vendors to showcase strengths and obscure weaknesses. A standard demo will show you the cleanest version of the product, with prepared data, experienced operators, and a narrative arc that ends with everything working perfectly. That is not what your team will experience on day 91 of the contract.

The way to counter this is to own the evaluation structure rather than accepting the one the vendor proposes. Specifically, this means three things.

First, give vendors your data, not theirs. Ask them to demonstrate the platform using a sample of your actual data, your actual use cases, and your actual team members operating the interface. This immediately surfaces usability issues, data compatibility problems, and workflow gaps that would never appear in a prepared demo environment.

Second, ask about failure modes. Ask what happens when the integration breaks. Ask what the support response time is for a production incident. Ask for a reference from a client who had a difficult implementation and speak to them directly. Vendors who are confident in their product will answer these questions. Vendors who aren’t will deflect.

Third, run a structured proof of concept for any shortlisted tool above a certain cost threshold. A proof of concept isn’t a pilot. It’s a time-boxed test against specific, pre-agreed success criteria. If the tool meets the criteria, it advances. If it doesn’t, it doesn’t. The criteria are set before the test begins, not after you’ve already decided you like the platform.

Team structure affects how evaluations run. How your marketing team is organised determines who needs to be involved in the evaluation, who will be the primary users, and who owns the tool post-implementation. Getting the wrong people in the room during evaluation is a reliable way to buy something that works for one function and creates problems for everyone else.

What Does Total Cost of Ownership Actually Include?

The licence fee is the smallest part of what a martech tool costs. This is one of the most consistently underestimated aspects of martech selection, and it’s where the budget surprises tend to arrive.

Total cost of ownership for a martech platform includes: the licence fee, implementation and configuration costs, integration development costs, data migration costs, training costs, ongoing management time, the cost of any adjacent tools required to make it work, and the cost of replacing it when you eventually move on. For enterprise platforms, the implementation cost alone can exceed the first year’s licence fee by a significant multiple.

Early in my career, I taught myself to code because my MD wouldn’t approve budget for a new website. That experience of working with real constraints shaped how I think about technology investment. The question was never “is this the best tool?” It was “what’s the minimum viable capability that solves the actual problem?” That instinct has served me well in martech decisions ever since. The best tool for your team is rarely the most feature-rich one. It’s the one that fits your budget, your team’s capability, and your actual use cases without requiring a six-month implementation to become functional.

Budget pressure in marketing is real and persistent. Marketing budgets are under scrutiny across most organisations, and martech spend is increasingly being asked to justify itself in commercial terms. Buying a platform because it’s impressive and then discovering the total cost is three times what was budgeted is not a career-enhancing outcome.

Build a total cost of ownership model before you shortlist. Not after. If a platform’s realistic total cost puts it outside your budget, it shouldn’t be in the evaluation at all. Falling in love with a platform you can’t actually afford is a waste of everyone’s time, including the vendor’s.
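A TCO model doesn’t need to be sophisticated; it just needs to sum every cost category over a realistic horizon, not the licence fee alone. The sketch below is purely illustrative (all line items and figures are hypothetical placeholders, not benchmarks), but it shows the shape of the calculation:

```python
# Hypothetical three-year TCO model for one shortlisted platform.
# Every figure is illustrative; substitute your own quotes and estimates.

annual_costs = {
    "licence_fee": 30_000,
    "ongoing_management": 12_000,   # internal admin time, valued at salary cost
    "adjacent_tools": 5_000,        # connectors, middleware, add-ons
}

one_off_costs = {
    "implementation": 40_000,
    "integration_development": 15_000,
    "data_migration": 8_000,
    "training": 6_000,
    "exit_and_replacement": 10_000, # provision for eventually moving off it
}

years = 3
total_cost = sum(one_off_costs.values()) + years * sum(annual_costs.values())
licence_only = years * annual_costs["licence_fee"]

print(f"3-year licence fees:  £{licence_only:,}")   # £90,000
print(f"3-year total cost:    £{total_cost:,}")     # £220,000
print(f"Licence share of TCO: {licence_only / total_cost:.0%}")
```

Even with made-up numbers, the licence fee comes out at well under half of the total, which is exactly the distortion the model exists to make visible before you shortlist.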

How Do You Manage Adoption After Go-Live?

Adoption is the metric that almost nobody tracks, and it’s the only one that tells you whether the investment was worthwhile. A tool with 20% team adoption has an effective ROI close to zero, regardless of what it cost or what it’s theoretically capable of.
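One way to make that concrete is a simple back-of-envelope calculation; the formula and figures below are illustrative assumptions, not a standard model. If value only accrues from the people who actually use the tool, effective ROI scales with adoption, and at low adoption the fixed cost drags it to zero or below:

```python
def effective_roi(annual_cost: float, value_per_active_user: float,
                  team_size: int, adoption_rate: float) -> float:
    """Illustrative ROI: value only accrues from users who actually adopted."""
    realised_value = value_per_active_user * team_size * adoption_rate
    return (realised_value - annual_cost) / annual_cost

# Hypothetical tool: £50k/year, worth £5k per active user, 20-person team.
print(f"{effective_roi(50_000, 5_000, 20, 1.00):+.0%}")  # full adoption → +100%
print(f"{effective_roi(50_000, 5_000, 20, 0.20):+.0%}")  # 20% adoption → -60%
```

With these assumptions, 20% adoption doesn’t just dilute the return, it inverts it: the tool costs more than the value it realises.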

Poor adoption has predictable causes. The tool was bought without involving the people who would use it. The implementation was rushed and the team never got proper training. The workflow the tool requires doesn’t match how the team actually works. The tool was positioned as a management initiative rather than something that makes people’s working lives easier. Any of these will kill adoption. Most failed martech implementations involve at least two of them simultaneously.

The teams I’ve seen get adoption right treat go-live as the beginning of the process, not the end. They identify internal champions who genuinely believe in the tool and give them the time and support to become experts. They run structured onboarding rather than pointing people at documentation. They track usage data and follow up with low-adoption users to understand why, rather than assuming the problem will resolve itself. And they’re willing to acknowledge when a tool isn’t working and make a decision accordingly, rather than continuing to pay for something that’s generating no value.

I’ve seen this play out at scale. At iProspect, as we grew the team and the client base, the temptation was always to add more tools to solve emerging problems. The discipline was in resisting that instinct and asking whether the problem was actually a tool gap or an adoption gap. More often than not, it was the latter. The stack didn’t need more components. It needed more of the team actually using what was already there.

There’s also a governance question that sits alongside adoption. Who owns the tool? Who is responsible for keeping it configured correctly, managing user access, handling vendor renewals, and staying current with new features? Without a named owner, martech tools drift. They get misconfigured. They fall out of compliance. They auto-renew without anyone reviewing whether they’re still needed. Ownership needs to be assigned at the point of purchase, not worked out later.

When Should You Walk Away From a Tool You’ve Already Bought?

This is the question nobody wants to ask, because it implies the original decision was wrong. But sunk cost is not a reason to continue paying for something that isn’t working. The question isn’t whether you made the right call twelve months ago. It’s whether continuing to invest in this tool is the right call today.

There are clear signals that a tool has run its course. Adoption has plateaued below a useful threshold and hasn’t responded to intervention. The vendor has been acquired and the product roadmap has stalled or changed direction. Your team’s requirements have evolved and the tool no longer covers your primary use cases. The integration landscape has changed and the tool is now isolated from the rest of your stack. The total cost of maintaining the tool exceeds the value it delivers.

Walking away requires a structured exit as much as a structured entry. Data portability needs to be confirmed before you give notice. Replacement capability needs to be in place or in progress before the current tool is switched off. Contracts need to be reviewed for notice periods and data deletion obligations. And the team needs to be informed and prepared, not surprised.

The martech industry is not static. Platforms that were category leaders five years ago have been overtaken, acquired, or made redundant by adjacent tools that expanded their scope. Treating your stack as a permanent fixture rather than a set of decisions that need periodic review is how organisations end up paying for infrastructure that no longer serves them.

Good martech selection isn’t a one-time event. It’s an ongoing discipline, and it sits within the broader operational work of running a marketing function that performs consistently. If you’re building or refining that function, the Marketing Operations section of The Marketing Juice covers the full range of decisions that sit behind effective marketing delivery.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

How many tools should a typical marketing team’s martech stack include?
There is no universal number. The right stack size depends on team size, budget, and the complexity of your marketing activity. The principle that holds across most organisations is that smaller stacks with high adoption outperform larger stacks with low adoption. If you’re adding a new tool, the question to ask first is whether an existing tool could cover the requirement with better configuration or training.
What is the most common reason martech implementations fail?
Poor adoption is the most common cause of martech failure, and it usually traces back to one of two root causes: the tool was selected without meaningful input from the people who would use it, or the implementation was treated as complete at go-live rather than as the start of an ongoing change management process. Technical implementation problems are less common than organisational ones.
How should data privacy requirements affect martech selection?
Data privacy requirements should be treated as non-negotiable filters at the shortlisting stage, not as contractual details to be resolved later. Any platform that processes customer data needs to be assessed for GDPR compliance, data residency, and data deletion capabilities before it enters serious evaluation. Discovering a compliance gap after you’ve committed to a platform is an expensive and avoidable problem.
What is the difference between a martech pilot and a proof of concept?
A pilot is a live deployment at reduced scale, typically used to test operational performance before full rollout. A proof of concept is a structured test against pre-agreed success criteria, used to validate whether a tool meets your requirements before you commit to purchasing it. For significant martech investments, a proof of concept should come before a pilot, not instead of one.
How often should a marketing team review its martech stack?
An annual review is a reasonable minimum. The review should assess adoption rates across all tools, total cost against delivered value, integration health, vendor stability, and whether the current stack still covers your primary use cases. Tools that fail the review should be exited cleanly rather than retained out of inertia. Martech contracts with auto-renewal clauses make this discipline especially important.
