Discussions about the contribution of marketing research to the broader business are as old as the discipline itself. The sentiment in these discussions is usually the same: its impact is limited; research professionals should deliver more ‘added value’, and they should act more like sparring partners to decision-makers. Interestingly, it’s not even much of a discussion: most seem to agree on what needs to be done. But despite all the words spent on this topic, there is little industry-level action to be observed.
The reason is, I believe, simple: the research industry is trying to fix the wrong problem. The issue above is often considered a communication problem: insights should be delivered more visually, placed more firmly in the business context, and, above all, made more ‘actionable’. I couldn’t agree more, but there is one step before delivery: using the data to its full potential. This phase is, I believe, strongly underestimated by researchers and marketers alike, at both agencies and clients.
Zooming in vs zooming out
Let me illustrate with an example. Imagine your average cross-table or graph. What is the first thing you analyse? When you don’t know upfront what to expect, odds are you will rely on simple heuristics; that’s how the human brain works. In this situation, the heuristic is likely to be a focus on high versus low. Many research reports are structured this way: the numbers that are (significantly) higher or lower are flagged or visualised, with a plausible explanation provided.
I call this approach zooming in – individual data points are compared, and when they differ, we try to understand why. When they are similar, we move on. Zooming in can deliver relevant views, but if you don’t get beyond this, its value is limited. There is usually more to learn when you start with what I call zooming out. It means you hover above the data and try to see the bigger picture. With zooming out you ask questions like:
- Is this what I would expect?
- How do the differences relate to the similarities?
- What are the relationships between the main findings?
- What are the patterns among groups, brands, or products?
Answers to these questions lead to more fundamental learnings than just focusing on what goes up or down.
Is this normal?
I’ll use a real-life case to make this concrete. In a study for a subscription brand—let’s call it brand A—multiple brands are compared on some loyalty indicators. These indicators are lower for brand A than for brands B and C, the two brands A considers its most important competitors. The study also shows that customers of brand A switch more often to brand B or C than the other way around. Based on several characteristics of brands B and C, brand A is advised on how it could get its loyalty indicators on par and fix the balance in switching behaviour. For the client, it all sounds logical, and the suggestions are put into practice.
Is brand A going to be more successful? Well, surely not based on the advice above, as it contains two substantial interpretation mistakes. First, you can only make sense of brand indicators if you consider brand size. In the case of loyalty indicators: bigger brands have (somewhat) more loyal customers than smaller brands. This marketing law, called Double Jeopardy, has been documented for over 60 years, yet I see this type of fundamental knowledge only sporadically applied in marketing research.
In the case above, brand A was smaller than brands B and C, and the indicators were exactly what you would expect for a brand with that market share. Brand A didn’t need any fixing; it performed in line with expectations. In other words, all the numbers were ‘normal’. With a one-sided zooming-in mindset, you are bound to draw the wrong conclusions. There are a couple more marketing laws like this one. If you are interested, (re)read How Brands Grow; these laws should be basic knowledge for any marketer and researcher.
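To make the ‘is this normal?’ check concrete: Ehrenberg’s classic formulation of Double Jeopardy is that a brand’s loyalty measure w (e.g. purchase frequency) and its penetration b roughly satisfy w(1 − b) ≈ constant across brands in a category, so smaller brands mechanically show somewhat lower loyalty. The sketch below illustrates the benchmarking step; the brand names, penetrations, and loyalty figures are entirely hypothetical, invented only to mirror the case above.

```python
# Hypothetical sketch of a Double Jeopardy benchmark check.
# Assumption: Ehrenberg's regularity w * (1 - b) ~ constant (w0) holds
# across the category, so expected loyalty follows from penetration alone.

W0 = 1.4  # hypothetical category constant, estimated from all brands


def expected_loyalty(penetration: float, w0: float = W0) -> float:
    """Expected loyalty (purchase frequency) for a brand with the given
    penetration, assuming w * (1 - b) = w0 holds across the category."""
    return w0 / (1.0 - penetration)


# Hypothetical brands: (name, penetration b, observed loyalty w)
brands = [
    ("Brand A", 0.10, 1.55),
    ("Brand B", 0.30, 2.00),
    ("Brand C", 0.25, 1.90),
]

for name, b, observed in brands:
    norm = expected_loyalty(b)
    # Flag only deviations beyond a (hypothetical) 5% tolerance band.
    verdict = "in line with its size" if abs(observed - norm) / norm < 0.05 else "deviates"
    print(f"{name}: observed {observed:.2f} vs expected {norm:.2f} -> {verdict}")
```

Run this way, the smaller brand A’s lower loyalty score turns out to match its size-adjusted benchmark: the zoomed-out conclusion is ‘normal’, where a zoomed-in comparison would have flagged it as a problem.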
The second interpretation mistake in this example is that the characteristics of brands B and C tell you nothing about the potential of brand A. Even though every researcher and marketer has heard, once or twice, that you can’t infer a causal relationship from mere correlation, it remains one of the most persistent flaws in the marketing (research) industry.
This case is just an illustration of a bigger underlying issue. Most of marketing research is more production than reflection. It’s quantity over quality. But its value is not measured by the number of comparisons or multivariate analyses. Its value should be measured by how it facilitates smart decision making. If the industry doesn’t take the time to apply the knowledge that has been built and documented over the last decades, it will mislead its clients on a structural basis.
Faster, cheaper, better?
To turn the tide, I believe this discipline should change its perspective. Often, research is considered a cost, not an investment. Calculations are in terms of hours, not value. This culture fits well with the zooming-in approach: for the most part, you can leave it to tools and junior personnel. It’s efficient, but not effective. The zooming-out approach requires knowledge and reflection. You can’t program it or outsource it to a country with lower wages. It’s effective, but less efficient.
Both agencies and clients should think about which model they want to follow. Of course, everyone wants faster, cheaper, and better. But uniting these three words is, in most cases, a challenge. Faster and cheaper facilitate efficiency, while better facilitates effectiveness. Wanting to get the most out of the data while expecting to get it delivered cheaper and faster is either naive or opportunistic.
Delivering ‘added value’ doesn’t start with better delivery, but with a better understanding of the data. If researchers and marketers want to make an impact on the business with their data, they need to learn to zoom out and recognize the patterns in that data. As marketing rebel Bob Hoffman put it: “data is just a pile of bricks until someone builds a house.” So, the (rhetorical) question is: what do you want to do? Stack some bricks or build a house?
Source: Research World