Some marketing activities are so ingrained in the system that any sceptical inquiry is considered sacrilegious. Segmentation is such an activity. So is tracking NPS. Anyone uttering a critical note about these self-evident topics should burn in hell. And, as you might have noticed recently in the heated debate between marketing heavyweights Byron Sharp and Mark Ritson, tracking brand perception belongs in this category too. Below is a modest attempt to cool off an overheated debate.
What’s the buzz tell me what’s a happenin’?
In a nutshell: Byron Sharp posted a blog to warn marketers about some common interpretation mistakes regarding brand perception data. Mark Ritson and Koen Pauwels reacted intensely (see here and here). Ritson claims that Sharp’s approach is “defensive to the point of being blinkered and rhetorical” and Pauwels classifies Sharp’s blog as “nonsense” and “pseudo-science”.
Superficially, the debate seems to be about whether or not it is valuable to track brand perceptions. Sharp is quite clear that it is: “…if you know how the markets sees your brand you can use this knowledge to craft your advertising (and other things like packaging) and then it will work more for you and is less likely to mistakenly work for competitors.”
Sharp’s problems with tracking brand perceptions are mostly about possible misinterpretations. And there are quite a few.
You’ll find what you look for
First, Sharp warns against overinterpreting small changes. Judging at what point a change becomes meaningful is hard: one must take factors such as sampling noise and advertising into account. This is complex stuff, as I’m sure any expert, academic or not, would confirm.
Moreover, marketers expect differentiation to be relevant, so that’s what they will look for. And thus they use tools that do exactly that, such as brand maps and significance testing. The problem with brand maps is that they give you no idea of the magnitude of the differences. The problem with significance tests is that they mistake noise for signal. These are well-documented problems, but easy to disregard, as most users are marketers with little training in statistics.
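To make the noise problem concrete, here is a minimal simulation (all numbers are invented for illustration: the sample size, the number of waves, and the brand’s “true” score are assumptions). The brand’s perception never actually changes, yet wave-to-wave significance tests can still flag “real” movements purely by chance:

```python
# Hypothetical sketch: a brand whose true perception score is constant
# can still show "statistically significant" changes between tracking waves.
import numpy as np

rng = np.random.default_rng(42)
true_share = 0.30   # assumed: 30% of consumers truly associate the attribute
n = 500             # assumed sample size per tracking wave
waves = 20

# Observed score per wave: pure sampling noise around a constant truth
samples = rng.binomial(n, true_share, size=waves) / n

false_alarms = 0
for i in range(1, waves):
    # Two-proportion z-test between consecutive waves (noise by construction)
    p_pool = (samples[i - 1] + samples[i]) / 2
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
    z = (samples[i] - samples[i - 1]) / se
    if abs(z) > 1.96:   # "significant at the 95% level"
        false_alarms += 1

print(f"waves flagged as 'real change': {false_alarms} of {waves - 1}")
```

By construction, roughly one in twenty such comparisons will come up “significant” even though nothing happened, and a tracker that tests many attributes every wave multiplies that risk.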
Regardless of the tool you use, the greater underlying problem is that if you only look at differences you get a very one-sided and distorted view of reality. Similarities are always far more dominant than differences, but you must take off your differentiation goggles to see them.
Worse, we are usually so obsessed with finding differences across time periods, customers, products and brands that we ignore the common patterns in our data. In brand perception data you will always see two such patterns. First, bigger brands get higher perception scores on all attributes.
The causality goes both ways, Sharp & Pauwels seem to agree on that one. Pauwels suggests in the study mentioned by Ritson that sometimes perception is a leading indicator and sales a lagging indicator. Sometimes it is the other way around. And sometimes it’s just not clear. So, given the current evidence about causality, it is safe to say, ‘we don’t know’.
We are more joie de vivre
Another pattern always hidden in brand perception data is that attributes that fit the category better attract higher scores for all brands. So, while you might think that your brand is more, say, ‘joie de vivre’ or ‘close to the consumer’ (I don’t make these up…), it is very likely that most consumers, in the context of most categories, have no idea what you are talking about. Unless you spend loads of advertising money on the attributes you deem relevant and keep it up for a long time. Even if that eventually moves the needle, you still don’t know the relationship with sales, as it will be confounded with other factors.
Perhaps the most important thing about interpreting brand perception data is this: if you take the brand effect (bigger brands get higher scores) and the category effect (some attributes get higher scores for all brands) into account, the remaining differences between brands and between attributes are mostly trivial. Most marketers are, however, not aware of these effects.
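The decomposition can be sketched in a few lines. The brand names, attributes and scores below are entirely invented for illustration; the point is only the mechanics: subtract the brand effect (row means) and the attribute effect (column means), and see how little is left over:

```python
# Illustrative decomposition: score = grand mean + brand effect
# + attribute effect + residual. All data below are made up.
import numpy as np

brands = ["Big", "Medium", "Small"]
attributes = ["trusted", "good value", "innovative", "joie de vivre"]

# Hypothetical % of respondents endorsing each attribute for each brand
scores = np.array([
    [58, 52, 41, 29],   # Big
    [42, 39, 24, 15],   # Medium
    [26, 24, 10,  1],   # Small
], dtype=float)

grand = scores.mean()
brand_effect = scores.mean(axis=1, keepdims=True) - grand   # row effects
attr_effect = scores.mean(axis=0, keepdims=True) - grand    # column effects
residual = scores - grand - brand_effect - attr_effect

print("grand mean:", round(grand, 1))
print("brand effects:", np.round(brand_effect.ravel(), 1))
print("attribute effects:", np.round(attr_effect.ravel(), 1))
print("residuals (what is left to interpret):")
print(np.round(residual, 1))
```

In this toy table the brand and attribute effects span tens of points, while the residuals, the part that could reflect genuine differentiation, are only a point or two. Real tracking data tend to look similar once you do this exercise.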
Complicated doesn’t make it true
Sharp also warns against fancy analyses that suggest accuracy and causality, while in fact the most basic assumptions for such analyses are seldom met. Regression analysis, for example, assumes that the variables on the right-hand side of the equation, the so-called ‘drivers’, are not strongly correlated. They always are. As a result, the impact scores (the regression coefficients) are highly unreliable. So statements like ‘if you increase this attribute by X, sales will go up by Y’ are sheer fantasy. I don’t expect marketers to be statistical experts, but in this case a simple lookup on Wikipedia will do.
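A short simulation shows what correlated drivers do to those impact scores. Everything here is simulated, not real tracking data; the setup simply makes two drivers almost perfectly correlated and refits the regression on fresh samples:

```python
# Sketch of the multicollinearity problem: with two almost identical
# 'drivers', the individual coefficients swing wildly from sample to
# sample, even though their combined effect is estimated precisely.
import numpy as np

rng = np.random.default_rng(0)
n = 300          # assumed respondents per sample
resamples = 200

coefs = []
for _ in range(resamples):
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.05, size=n)   # correlation with x1 ~ 0.999
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)  # true impacts: 1 and 1
    X = np.column_stack([np.ones(n), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs.append(beta[1:])                     # keep the two 'impact scores'

coefs = np.array(coefs)
print("mean of each coefficient:", np.round(coefs.mean(axis=0), 2))
print("std dev across resamples:", np.round(coefs.std(axis=0), 2))
print("std dev of their sum:", round(coefs.sum(axis=1).std(), 2))
```

The averages hover around the true values, but any single study’s coefficients can easily come out as 2.5 and −0.5, or the reverse. Only the sum of the two is stable, which is exactly why ‘increase driver X by one point and sales rise by Y’ is not a claim the data can support.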
People have lives
Finally, Sharp warns that marketers assume brand perceptions are used to evaluate brands, while they are largely used for identification. All too often we assume that people have all kinds of opinions and feelings about our brands, but most people do not think much about or have feelings for most brands; they have actual lives to worry about.
I would challenge every marketer to, for once, ignore the usual 5-point scale and just ask an open question: “what comes to mind if you think about <your brand>?” I have done it several times and my clients didn’t particularly like it, but they usually thought it was ‘painfully insightful’…
So, my view on this is clear. But before you choose to be in favour of or against a certain viewpoint, above all, I would advise you to be sceptical. Always. Don’t take my word for it. Or Sharp’s or Ritson’s, for that matter. Check your assumptions. Crunch data. Don’t only look at differences because that’s what you expect to find. Look for common patterns, look for similarities. Discuss opposing views. Then make up your mind.