Forrester just released a new report called How Customer Experience Drives Business Growth, 2019. In a nutshell, the study claims to be able to calculate the financial value of improving the CX index by one point based on survey data. This is, as I will argue below, sheer bollocks.
It’s not my intention to specifically pick on Forrester. I want to use this study as an illustration to argue that the claims typically made in studies like this are unjustified at best and misleading at worst. Every day, bold claims are made based on ‘studies’ that demonstrate flawed research design and flawed logic.
There are at least four serious issues with observational studies like this.
1. Assuming causality
We are easily impressed by correlation and regression coefficients. We also easily accept the suggested direction of the relationship. In this case, better CX leads to higher loyalty. Could it be the other way around? It might be hard to imagine how higher loyalty leads to better CX. Be aware, however, that the measures used in this study are perceptions. It is not such a strange thought that people who say they ‘intend to stay with a brand, buy more from a brand, and recommend a brand’ have more positive perceptions of how the brand performs. And yes, that includes their perception of the CX. In other words, the relationship could run both ways.
More importantly, there are other variables that can explain the correlation between CX and loyalty. First and foremost, brand size: bigger brands are more likely to spend more money on CX simply because they can afford it. I’m not denying that CX has an impact on the brand (I believe it does), but measuring two variables and claiming a causal relationship between them is simply naive.
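The confounding problem is easy to demonstrate with synthetic data. In the sketch below (hypothetical numbers, not from the Forrester study), a latent ‘brand size’ variable drives both the CX index and loyalty, while there is no direct causal link between the two at all. The naive correlation still looks impressive; controlling for the confounder makes it vanish.

```python
# Sketch with made-up data: a confounder (brand size) drives both CX and
# loyalty; there is NO direct CX -> loyalty effect in this simulation.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

brand_size = rng.normal(0, 1, n)                   # latent confounder
cx_index = 0.7 * brand_size + rng.normal(0, 1, n)  # CX driven by size only
loyalty = 0.7 * brand_size + rng.normal(0, 1, n)   # loyalty driven by size only

# Naive correlation between CX and loyalty looks real enough to sell a report
r_naive = np.corrcoef(cx_index, loyalty)[0, 1]
print(f"CX-loyalty correlation (no causal link): {r_naive:.2f}")

# Partial correlation: remove the part explained by brand size, then correlate
cx_resid = cx_index - np.polyval(np.polyfit(brand_size, cx_index, 1), brand_size)
loy_resid = loyalty - np.polyval(np.polyfit(brand_size, loyalty, 1), brand_size)
r_partial = np.corrcoef(cx_resid, loy_resid)[0, 1]
print(f"After controlling for brand size: {r_partial:.2f}")
```

The naive correlation comes out around 0.3 while the partial correlation is close to zero, even though, by construction, improving CX here does nothing for loyalty.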
2. Size matters
Whether bigger brands can spend more on CX (or other efforts) or not, half a century of academic research shows that loyalty metrics can be predicted from brand size. Bigger brands consistently show (slightly) higher loyalty. This observation is called the Law of Double Jeopardy, first documented by Andrew Ehrenberg in the late 1950s and still found today in virtually every category.
This law implies, among other things, that you can only make sense of brand metrics (or relationships between brand metrics) if you take brand size into account. It puzzles me that, a decade after Professor Byron Sharp caused a big stir in the marketing community with How Brands Grow, most marketers are still unaware of the few fundamental laws marketing does have.
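The Double Jeopardy pattern can even emerge with zero brand attachment. In the toy simulation below (made-up market shares), every purchase is an independent draw weighted only by share, yet bigger brands mechanically show higher repeat rates, the classic Double Jeopardy signature.

```python
# Toy sketch (hypothetical shares): zero loyalty by construction, yet the
# 'repeat rate' loyalty metric still tracks brand size.
import numpy as np

rng = np.random.default_rng(7)
shares = np.array([0.50, 0.30, 0.15, 0.05])  # hypothetical market shares
n_buyers = 100_000

# Two purchase occasions, each an independent share-weighted draw
buy1 = rng.choice(4, size=n_buyers, p=shares)
buy2 = rng.choice(4, size=n_buyers, p=shares)

repeat_rates = []
for brand, share in enumerate(shares):
    # Among period-1 buyers of this brand, how many bought it again?
    repeat = np.mean(buy2[buy1 == brand] == brand)
    repeat_rates.append(repeat)
    print(f"brand share {share:.0%} -> repeat rate {repeat:.0%}")
```

The repeat rate simply converges to the brand’s share: the biggest brand looks roughly ten times ‘more loyal’ than the smallest, without a shred of genuine attachment anywhere in the model. That is exactly why loyalty metrics are uninterpretable without a brand-size benchmark.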
3. Intentions vs. behaviour
In marketing it is often assumed that claimed behaviour and intentions correlate strongly with actual behaviour, but the relationship is modest at best. In the CX & loyalty study it is more likely that the alleged loyalty metric is simply measuring the degree of having positive experiences with the brand. No surprise such a metric correlates with another brand perception metric, the CX index.
The formula ‘when X goes up 1 point, Y goes up .. points’ sounds even more impressive than a correlation or regression coefficient, but it is basically the same thing. To evaluate the reliability of the model, however, you must look under the hood. The (made up) exhibits below visualize the basic idea behind a model.
The figure on the left is a visual representation of the formula itself: if X goes up, so does Y. The figures in the middle and on the right show the raw data behind the model line. They show similar slopes, but you can easily see that the model in the middle is far better. Even this better model (correlation 0.7) is still noisy: many data points sit far above or below the model line (the estimate). Most models are like this: they provide estimates, but the level of noise remains substantial.
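The same point can be shown numerically. In the sketch below (synthetic data), two datasets share the same underlying slope but differ in noise: the fitted model lines, and hence the ‘X up 1 point, Y up so many points’ formula, come out nearly identical, while the reliability of the two models differs dramatically.

```python
# Sketch with synthetic data: identical slope, different noise. The headline
# formula looks the same; the model's reliability does not.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
slope, intercept = 0.5, 2.0

results = {}
for noise_sd, label in [(0.8, "tight"), (3.0, "noisy")]:
    y = intercept + slope * x + rng.normal(0, noise_sd, x.size)
    fit_slope = np.polyfit(x, y, 1)[0]    # the 'X up 1 -> Y up ...' number
    r = np.corrcoef(x, y)[0, 1]           # how well the line fits the cloud
    results[label] = (fit_slope, r)
    print(f"{label}: fitted slope {fit_slope:.2f}, correlation {r:.2f}")
```

Both fits recover a slope near 0.5, so both would support the same impressive-sounding headline, but only the correlation (which rarely makes it into the press release) reveals how far individual data points stray from the estimate.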
Marketers are often overly optimistic about the precision of their measures, but once you dig in you will discover that lots of data sources are quite noisy.
As a reader you need to be aware that, as with any model, ceteris paribus applies. In other words, the model holds only with all other things equal. But all things will not be equal. Most importantly, every brand works on CX (among other things). Therefore, even assuming that CX does create value, improving CX is not a guaranteed route to brand growth, but more likely a way of not falling behind your competitors. This is, of course, not a flaw in the study, but it is fundamental to understand for anyone who wants to implement a model.
A rule of thumb
Most of us agree that measuring stuff helps us understand the world. But even if our measures are reliable and valid, a lot can and does go wrong in the interpretation of those measures. Again, it’s not my intention to discredit any company or disqualify all survey-based or non-experimental studies. Not all studies suffer the same flaws, either. However, a lot of studies from consultancies, media agencies and research outfits show a wide gap between the rigour of the research design and the boldness of the claims derived from it.
My advice: the bolder the claim, the more sceptical you should be.
You don’t have to be a research expert to determine the credibility of a claim. Just don’t take anything for granted and ask questions. Here is a list of simple but fundamental questions that will help you separate the wheat from the chaff:
- Is the claim based on data? If no, be sceptical. If yes:
- Does the data come from a survey? If yes, be sceptical. Surveys are useful in many situations, but not for predicting brand growth.
- Does the source potentially benefit from the claim? If yes, be sceptical.
- Does the claim involve causality? If yes, was an experiment/RCT part of the research design? If no, be sceptical.
- Is the design of the study transparent? If no, be sceptical.
- Is the claim in line with other sources? If no, be sceptical.