
Why ‘fact based marketing’ leads marketers astray

In this day and age, we measure lots of things to facilitate decision making. In principle, this is better than not measuring; however, the translation from measured ‘facts’ to ‘smart decisions’ is far from self-evident. With three cases, I illustrate how a mere ‘fact based’ approach can lead marketers astray. At the end I explain why an ‘evidence based’ approach is much more effective.

Case 1: churn modelling in utility services | A utility service observes that the number of customers leaving each year has increased from 8% to 10% in three years. The marketing intelligence team builds a churn model: it predicts who is likely to leave and who isn’t. The model looks promising: it identifies more than half of the customers that leave in a 12-month period. The marketing team is convinced this model will help to manage loyalty: potential churners get an attractive offer to tempt them to stay. After running this programme for 18 months, however, the churn rate turns out to be slightly higher than before.

Now let’s put this case in perspective. First, the increased churn should be evaluated on a category level: did other utility brands show the same pattern? If so, the decreased loyalty of your customers may reflect a category issue. Perhaps new players entered the market and the cake was divided among more brands.

Very likely, the loyalty KPI is in line with the Double Jeopardy Law. This law shows that, with few exceptions, brands with lower market share have (1) far fewer buyers and (2) (slightly) lower brand loyalty. The implication of this law is vital: as a marketer, you don’t have a lot of influence on your loyalty KPIs. If you want to increase loyalty—without losing profit—you need to increase market share.

That is only the theoretical part. Now let’s have a look at the model itself. At first sight, it’s an impressive model: it identifies more than half of the customers that leave the brand. But that is only one side of the coin. Equally important is the number of customers the model flags as churners but who actually stay. Models like this usually produce a lot of such ‘false positives’. In this case it means you are throwing discounts at customers that weren’t planning to leave anyway. That’s a very costly intervention. In summary, you can only estimate the commercial effect of your model by understanding the consequences of both correct and incorrect classifications.
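To make this tangible, here is a minimal back-of-the-envelope sketch in Python. Only the 10% churn rate and the ‘more than half’ hit rate come from the case; the customer base, false positive rate, offer cost, save rate and margin are made-up assumptions purely for illustration.

```python
# Back-of-the-envelope check of a churn programme, counting both correct
# and incorrect classifications. All figures marked 'assumed' are illustrative.

customers       = 100_000   # size of the customer base (assumed)
churn_rate      = 0.10      # 10% leave per year (from the case)
recall          = 0.55      # model catches "more than half" of the churners
false_pos_rate  = 0.08      # share of loyal customers wrongly flagged (assumed)
offer_cost      = 40        # discount given to every flagged customer, EUR (assumed)
save_rate       = 0.30      # share of flagged churners the offer actually retains (assumed)
annual_margin   = 120       # yearly margin of a retained customer, EUR (assumed)

churners = customers * churn_rate
stayers  = customers - churners

true_positives  = churners * recall           # churners correctly flagged
false_positives = stayers * false_pos_rate    # loyal customers flagged anyway

flagged      = true_positives + false_positives
offer_spend  = flagged * offer_cost           # the discount goes to *all* flagged customers
saved_margin = true_positives * save_rate * annual_margin

print(f"Flagged: {flagged:,.0f}, of which {false_positives:,.0f} were never leaving")
print(f"Offer spend: {offer_spend:,.0f} EUR, saved margin: {saved_margin:,.0f} EUR")
print(f"Net commercial effect: {saved_margin - offer_spend:,.0f} EUR")
```

With these assumed figures the programme destroys value even though the model ‘finds more than half of the churners’, simply because the discounts flowing to false positives outweigh the margin saved.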

Case 2: NPD in dairy | More than half of new product launches fail at a dairy brand. As failures are expensive, management decides to take a data-driven approach to lift the success rate. They study the 5% most successful product introductions in their market—over 300 in total—from the last five years.

The company finds that the successful cases have some striking things in common. The large majority (84%) are light or reduced sugar/fat/calorie products. The company decides to focus their new product innovations on light products. After three years they conclude that the failure rate is the same. Managers are obviously disappointed and wonder why their insight hasn’t paid off.

At first sight the reasoning of the NPD team makes sense: look for common success factors. But you need to include failures too if you want to learn what drives success. Imagine that the percentage of light products is the same for successes and failures: that would show there was no benefit in being light.
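Here is a minimal sketch of the comparison the team skipped. The 300 successes and their 84% light share come from the case; the number of failures and their light share are assumptions chosen purely to illustrate the point.

```python
# Compare the share of 'light' products among successes AND failures.
# Success figures come from the case; failure figures are assumed for illustration.

successes_total, successes_light = 300, 252     # 84% light (from the case)
failures_total,  failures_light  = 5_700, 4_788 # assumed: failures are 84% light too

print(f"Light share among successes: {successes_light / successes_total:.0%}")
print(f"Light share among failures:  {failures_light / failures_total:.0%}")

# The more useful flip side: does a light product have better odds of success?
light_total    = successes_light + failures_light
nonlight_total = (successes_total - successes_light) + (failures_total - failures_light)

print(f"Success rate, light products:     {successes_light / light_total:.1%}")
print(f"Success rate, non-light products: {(successes_total - successes_light) / nonlight_total:.1%}")
```

If both success rates come out the same, ‘light’ simply reflects the category mix rather than a driver of success.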

Looking only at successes can lead to very misleading conclusions. Although this mistake is very well documented, many commercial managers and even trained scientists make this type of error. In fact, a number of well-known business books suffer from this bias (e.g. this one, this one and this one). In this case, the NPD study should have included all introductions to make an objective evaluation.

Case 3: Segmentation at a telco | From an extensive analysis of their customer database, a telco operator learns that there are two main drivers that predict customers’ commercial potential. The first is the number of apps they use on their phone. The second is the average spend outside the subscription fee. From these two dimensions the company distinguishes four distinct customer profiles. They compare the segments on 40 attitudinal statements. In total, 15 statistically significant effects are found and reported. These differences form the main input for a more targeted cross- and upsell programme. After a year, the company abandons the programme as it doesn’t show any difference compared to an untargeted approach.

What did they miss? The research team used the common approach of selecting statistically significant differences to identify material factors for action. Their mistake was to ignore natural statistical variation, or ‘noise’. With the usual 95% confidence criterion, noise alone gets flagged as a significant difference in roughly one out of every twenty tests.

Given that there were 40 statements to be tested and 4 segments to be compared with each other, 40 x 6* = 240 tests were performed. Chance alone will, on average, make about 240/20 = 12 of those tests come out statistically significant, which is very close to the number of differences they identified (15). As in the first case, false positives were ignored. Putting the ‘facts’ in perspective, the correct conclusion should have been ‘the differences in attitudes between the four segments are negligible’. The sketch after the footnote below walks through this arithmetic.

*Comparisons: A with B, A with C, A with D, B with C, B with D, C with D.
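A minimal sketch of the multiple-testing arithmetic. The 40 statements, 4 segments and 95% criterion come from the case; the Bonferroni correction at the end is just one possible safeguard, not what the team used.

```python
# Expected number of false positives when running many significance tests.
from itertools import combinations

statements = 40
segments   = ["A", "B", "C", "D"]
alpha      = 0.05                          # the usual 95% confidence criterion

pairs = list(combinations(segments, 2))    # A-B, A-C, A-D, B-C, B-D, C-D
tests = statements * len(pairs)            # 40 x 6 = 240 tests

print(f"Pairwise comparisons: {len(pairs)}, tests performed: {tests}")
print(f"Expected 'significant' results from noise alone: {tests * alpha:.0f}")

# One common safeguard (a choice, not the only option): Bonferroni correction,
# which tightens the per-test threshold to alpha divided by the number of tests.
print(f"Bonferroni-corrected threshold per test: {alpha / tests:.5f}")
```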

Making the shift | Three very different cases; what they have in common is that ‘the facts’ led to poor decisions. There is nothing wrong with the data. But biases and fallacies are overlooked, and scientific knowledge is ignored. It is this embedding of data and analysis in a wider context that marks the critical distinction between a ‘fact based’ and an ‘evidence based’ approach.

With a ‘fact based’ approach we tend to conceive of the world as a place that can be understood by measuring lots of stuff; we believe the facts speak for themselves. With an ‘evidence based’ approach we acknowledge the world is a complex place that can only be understood with a lot of effort, perseverance and a thorough study of patterns in behaviour.

An evidence based approach stimulates a continuous critical evaluation of inputs and outputs, cause and effect. It means being aware of the biases in your data and in your head, using the knowledge that’s out there while still being receptive to new information. It means constantly testing your assumptions and being aware that your data might be ‘hard’, but your processes and decisions along the way are always ‘soft’. It means acknowledging from time to time that the available data just doesn’t provide the answers.

You will not read it in the next ‘8 steps to success’ or ’12 things thought leaders have in common’ article, but in marketing intelligence it pays off to be critical, reflective, and modest.
