Financial Services Market Research: What the Data Won’t Tell You
Financial services market research is the discipline of gathering and analysing information about customers, competitors, and market conditions within banking, insurance, wealth management, and related sectors. Done well, it reduces the cost of being wrong. Done poorly, it produces expensive reports that confirm what leadership already believed and gather dust in a shared drive.
The sector has genuine complexity that makes research harder than it looks. Regulatory constraints shape what you can say and to whom. Trust is the primary currency, and it is slow to build and fast to lose. Products are often abstract, emotionally loaded, and bought infrequently. Standard research frameworks built for FMCG or retail do not translate cleanly, and the firms that treat them as if they do tend to learn that lesson expensively.
Key Takeaways
- Financial services research fails most often not because of bad data, but because the questions were framed to validate rather than interrogate.
- Regulatory and compliance constraints in financial services are not just legal hurdles; they are research variables that shape what customers actually want and expect.
- Stated preference and revealed preference diverge sharply in financial services, where social desirability bias and financial anxiety distort survey responses significantly.
- Competitive intelligence in financial services requires looking beyond direct competitors to adjacent players, including fintechs and embedded finance providers, that are redefining category expectations.
- The most commercially useful research in this sector connects insight directly to a specific business decision, not to a general understanding of the market.
In This Article
- Why Financial Services Research Is a Different Problem
- The Stated vs Revealed Preference Problem
- How Regulatory Context Shapes Research Design
- Competitive Intelligence in a Category Being Redefined
- Segmentation That Actually Holds Up
- The Conversion Question Research Usually Misses
- What Good Research Governance Looks Like
Why Financial Services Research Is a Different Problem
I have worked across roughly 30 industries during my career, and financial services is consistently one of the most research-hungry and simultaneously research-resistant sectors I have encountered. The appetite for data is real. The willingness to act on findings that challenge existing assumptions is considerably rarer.
Part of this is structural. Financial services firms are heavily regulated, which creates a culture of caution that extends well beyond compliance into marketing and strategy. When the cost of being wrong in a regulated context includes fines, sanctions, and reputational damage, organisations naturally become risk-averse. Research becomes a tool for risk reduction rather than opportunity identification, and that orientation shapes everything from the questions asked to the way findings are presented to leadership.
There is also a product complexity problem. A mortgage, a pension, or a commercial insurance policy is not something most customers understand in detail. They are buying outcomes and reassurance, not features. This creates a persistent gap between what customers say they want in a research context and what actually drives their decisions. A customer in a focus group will tell you they want transparency and low fees. The same customer will choose a product based on brand familiarity and the confidence of the person who sold it to them. Both things are true. Research that captures only one of them is incomplete.
If you want a broader frame for how market research fits into commercial planning, the Market Research and Competitive Intel hub covers the discipline across sectors and methodologies. What follows is specific to financial services, where several of the standard assumptions need revisiting.
The Stated vs Revealed Preference Problem
Ask someone what they value in a current account and they will tell you: low fees, good customer service, a decent mobile app. Ask them which account they actually have and why they opened it, and you will often get a very different story. They opened it because their employer required it, or their parents banked there, or the branch was near their university. The stated preferences are real preferences, but they are not the preferences that drove the decision.
This gap between stated and revealed preference is a problem in most categories. In financial services, it is particularly pronounced for two reasons. First, financial decisions carry emotional weight that people are reluctant to expose in research settings. Admitting that you chose a mortgage provider because you felt intimidated by the other options is not something most people will say in a survey or a focus group. Second, many financial decisions are made under conditions of low knowledge and high anxiety, which means the actual drivers are often unconscious or difficult to articulate.
The practical implication is that quantitative surveys asking customers to rate and rank product attributes will systematically overweight rational, socially acceptable preferences and underweight the emotional and contextual factors that actually matter. This does not mean surveys are useless. It means they need to be designed with this bias in mind, and their findings need to be triangulated against behavioural data wherever possible.
Behavioural data in financial services is rich and often underused. Application completion rates, drop-off points in digital journeys, call centre transcripts, complaint patterns, and product usage data all tell you what customers actually do. When I was working with a financial services client whose new product was underperforming against forecast, the survey data suggested the problem was price. The application funnel data told a different story: customers were dropping out at the income verification step, not the pricing page. The research had pointed us at the wrong problem. The behavioural data pointed us at the right one.
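The funnel reading described above can be reduced to a simple calculation. As a minimal sketch, the snippet below attributes each drop-off to the step customers abandoned and flags the largest one; all step names and counts are invented for illustration, and real figures would come from digital journey analytics.

```python
# Hypothetical application-funnel counts: how many customers reached each step.
# Step names and numbers are invented for this example.
funnel = [
    ("landing", 10_000),
    ("application_start", 4_500),
    ("income_verification", 4_100),
    ("pricing_review", 1_400),
    ("submission", 1_300),
]

def drop_off_rates(steps):
    """Share of customers lost during each step (i.e. after reaching it)."""
    return [
        (prev_name, round((prev_n - n) / prev_n, 3))
        for (prev_name, prev_n), (_, n) in zip(steps, steps[1:])
    ]

# The largest drop-off marks the step to investigate first.
worst_step, worst_rate = max(drop_off_rates(funnel), key=lambda r: r[1])
```

In the invented numbers above, the biggest loss occurs during income verification, not at pricing review, which mirrors the mismatch between survey findings and behavioural data in the example.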
How Regulatory Context Shapes Research Design
Regulation is not just a constraint on what you can say in a financial services marketing context. It is a variable that shapes customer expectations, competitive dynamics, and the meaning of the data you collect.
Consider the impact of Consumer Duty in the UK, which requires firms to demonstrate that their products and services deliver good outcomes for customers. This is not just a compliance requirement. It changes what customers expect from financial services firms, what they notice when those expectations are not met, and how they talk about their experiences. Research conducted before Consumer Duty was implemented will produce different baseline data than research conducted after it, even if the questions are identical. The regulatory environment has shifted the frame.
Similarly, open banking regulations have created new data-sharing possibilities that change both the competitive landscape and the research possibilities available to firms. The ability to access transaction data with customer consent opens up behavioural research approaches that were not previously viable at scale. Firms that understand this are building research capabilities that go well beyond traditional survey and focus group methodologies.
The compliance function also shapes research design in ways that are sometimes counterproductive. I have seen research briefs where the legal and compliance team had reviewed the discussion guide before fieldwork and removed several of the most interesting questions on the grounds that the answers might create regulatory risk. The intent is understandable. The effect is research that cannot surface the insights that matter most. Getting compliance and research to work together rather than in sequence is a structural challenge worth solving early.
Competitive Intelligence in a Category Being Redefined
Traditional competitive analysis in financial services focused on a defined set of known competitors: the major banks, the established insurers, the large asset managers. The competitive set was stable and the comparison dimensions were relatively clear: rates, fees, product range, branch network, brand reputation.
That stability has gone. The relevant competitive set for most financial services firms now includes players who were not in the category five years ago, operating with different cost structures, different regulatory obligations in some cases, and very different customer expectations. A high street bank competing for current accounts is not just competing with other high street banks. It is competing with digital challengers who have set a new baseline for mobile experience, and with embedded finance providers who are inserting financial products into non-financial contexts where the incumbent was not even looking.
BCG has written usefully about the challenge of moving into new markets and the strategic orientation required to compete across different speeds of change. The financial services competitive landscape is a version of this problem: established players operating on one set of timelines and cost structures, new entrants operating on another, with customers increasingly making comparisons across both.
Competitive intelligence in this environment needs to track three things simultaneously. First, direct competitors and their product, pricing, and positioning moves. Second, the adjacent players who are setting new customer expectations even if they are not yet direct substitutes. Third, the customer expectation data that shows where the category benchmark is moving, regardless of who is moving it. Firms that only track direct competitors are measuring their position on a map that is becoming less accurate over time.
Segmentation That Actually Holds Up
Financial services has a long tradition of demographic segmentation: age, income, life stage, occupation. These variables are not useless. They correlate with product needs and financial behaviours in ways that are commercially relevant. But they are a starting point, not an endpoint, and firms that treat demographic segments as the primary organising principle for their research and marketing tend to produce work that is accurate on average and wrong about most individuals.
The more useful segmentation dimensions in financial services are attitudinal and behavioural. How does a customer relate to financial risk? What is their level of financial confidence and literacy? How much do they trust financial institutions as a category, and what has shaped that trust? These dimensions cut across demographic groups in ways that matter for product design, communication, and service model.
A 45-year-old with a household income of £80,000 who has high financial anxiety and low institutional trust requires a fundamentally different approach from a 45-year-old with the same income who is financially confident and actively engaged with their portfolio. Demographic segmentation puts them in the same box. Attitudinal segmentation separates them correctly.
Building attitudinal segments requires qualitative depth work to identify the relevant dimensions, followed by quantitative research to size and profile the segments, followed by the harder work of connecting segment membership to data signals that can be used operationally. That last step is where most segmentation projects stall. A segment is only commercially useful if you can identify who is in it and reach them with something relevant. Research that produces elegant segments with no operational pathway is a strategy document, not a business tool.
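To make that last operational step concrete, here is a minimal sketch of a rule that translates attitudinal scores into an assignable segment label. The segment names, thresholds, and score fields are all invented for illustration; in practice the scores would come from validated survey items or behavioural proxies, and the rules from the quantitative profiling work.

```python
from dataclasses import dataclass

@dataclass
class CustomerSignals:
    # Invented attitudinal scores, each scaled 0-1 for the example.
    risk_comfort: float          # appetite for financial risk
    confidence: float            # self-reported financial confidence
    institutional_trust: float   # trust in financial institutions

def assign_segment(c: CustomerSignals) -> str:
    """Map attitudinal scores to one of three illustrative segments."""
    if c.confidence < 0.4 and c.institutional_trust < 0.4:
        return "anxious_sceptic"        # needs reassurance-led communication
    if c.confidence >= 0.7 and c.risk_comfort >= 0.6:
        return "engaged_self_directed"  # responds to detail and control
    return "pragmatic_delegator"        # values guidance and simplicity
```

A rule like this is what turns a segmentation from a strategy document into a business tool: the two 45-year-olds from the earlier example land in different segments even though their demographics are identical.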
The Conversion Question Research Usually Misses
Most financial services market research is oriented toward awareness, perception, and preference. These are legitimate things to measure. They are not, however, the full picture of what drives commercial outcomes.
The conversion question, specifically what moves a customer from consideration to action, is frequently under-researched in financial services. This is partly because conversion is harder to study than preference. It requires understanding a decision process that often unfolds over weeks or months, involves multiple touchpoints and influencers, and is frequently triggered by a life event rather than a marketing stimulus.
When I was at lastminute.com, the speed of the conversion signal was almost immediate. You ran a campaign, you saw revenue move. Financial services rarely works like that, and the slower feedback loop creates a temptation to focus research on the things that are easier to measure rather than the things that actually matter. Understanding conversion as a discipline matters in every sector, but in financial services the complexity of the customer experience makes it especially important to research the decision process, not just the decision outcome.
The most useful conversion research I have seen in financial services combines three things: qualitative interviews with recent purchasers and recent decliners, analysis of digital experience data to identify where customers progress and where they exit, and longitudinal tracking that captures the full decision timeline rather than a single moment. Each of these alone is partial. Together they give you a credible account of what is actually happening.
What Good Research Governance Looks Like
Financial services firms spend significant amounts on market research and frequently get less value from it than they should. In my experience, the problem is rarely the quality of the fieldwork. It is the governance around how research is commissioned, used, and connected to decisions.
Research commissioned without a clear decision it is meant to inform tends to produce findings that are interesting but not actionable. I have a simple test I apply when reviewing a research brief: if the findings came back and showed the exact opposite of what we expect, would we do something different? If the answer is no, the research is not connected to a real decision and the budget would be better spent elsewhere.
Good research governance also requires honest engagement with findings that are uncomfortable. Financial services organisations, like most large organisations, have a tendency to commission research that confirms strategic direction rather than tests it. When findings challenge the direction, there is a well-worn path through which they get reinterpreted, contextualised, or simply not acted on. I have sat in enough debrief meetings to recognise the moment when a room decides collectively to not quite hear what the research is telling them.
The firms that get the most value from research are the ones where leadership has a genuine tolerance for findings that challenge their assumptions. That is a cultural condition, not a methodological one, and it cannot be solved by better research design alone. It requires the people commissioning research to be honest about what they are willing to do with the answers before they ask the questions.
There is more on building research processes that connect to real commercial decisions across the Market Research and Competitive Intel section of the site, covering both methodology and the organisational conditions that make research useful rather than decorative.
About the Author
Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.
