
The Yin & Yang of data driven decision making

We have made exceptional progress in analytics over the last decade. I observe, however, a serious risk: too much trust in data and models. An ode to the soft side of marketing intelligence.

More than data crunching

Developments in big data, machine learning, and AI are impressive. Predictions are better than they were 10 years ago. Bigger and more complex data sources are now being employed. Developments in data visualisation have contributed to understanding complex information. The list goes on.

With all these developments in data science, however, making smart decisions still requires a lot more than data crunching. In most situations in which you try to understand data, you still need to sit down and contemplate, make sense of it all, try to explain conflicting pieces of information or just have a couple of chats with customers or stakeholders for contextual understanding. For every marketing intelligence professional, it is essential to develop these soft skills.

Opposite forces

Some rely too much on ‘intelligence’ and undervalue common sense, existing knowledge, and the importance of context. Others rely too much on ‘insight’ and undervalue methodology, validation, and the law of large numbers. Balancing the two approaches means taking the best of both worlds.

This balancing act is like Yin & Yang: the two forces are opposite in quality and nature, but one cannot exist without the other. Shadow needs light; there is no down without up, no inside without outside. Likewise, data is worthless without theory, models don’t work without an understanding of their context, and numbers need creativity to produce something better, new, or remarkable. Figure 1 illustrates these opposing forces in the marketing intelligence context.

Figure 1: The Yin & Yang of marketing intelligence

Perhaps something multivariate?

Let me share some examples. A while ago I was approached by a client (a consumer service) who asked me to have a look at their NPS report. They had the feeling more analysis was needed (‘perhaps something multivariate?’). They had observed that some regions had lower NPS scores than others, but they couldn’t figure out why. I had a look at the report and the data and concluded they had already squeezed the data dry; I couldn’t think of any analysis that would add to understanding their main question. A little browsing through the verbatims, however, quickly gave me an impression of what the main issue was.

It turned out to be under-staffing: most comments were about last-minute cancellations, longer waiting times than usual, and sloppy, rushed service. The region managers quickly confirmed this insight. On top of that, the lower NPS scores neatly lined up with periods of under-staffing.

The reason the real issue didn’t come out of the analysis was, in hindsight, quite simple: there were no questions about staffing. But it came back again and again in the comments of unhappy customers. As happens frequently in surveys, the open-ended questions (the ‘soft’ data) are regarded as inferior to the closed questions (the ‘hard’ data). But quite often, the ‘why’ (context) behind the ‘what’ (numbers) is hidden in the soft data. Yet another tool or analysis will not reveal these gems.
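For illustration only, here is a minimal sketch of that idea in Python. It assumes the verbatims sit in a CSV export with a comment column (the file name and column name are hypothetical); even a crude term count like this can surface a theme such as under-staffing that no closed question ever asked about.

```python
import csv
from collections import Counter

# Words too common to be informative; extend as needed.
STOPWORDS = {"the", "a", "an", "and", "was", "were", "it",
             "to", "of", "i", "my", "very", "that", "this"}

counts = Counter()
with open("nps_verbatims.csv", newline="", encoding="utf-8") as f:  # hypothetical export
    for row in csv.DictReader(f):
        for word in row["comment"].lower().split():
            word = word.strip(".,!?\"'()")
            if len(word) > 3 and word not in STOPWORDS:
                counts[word] += 1

# Frequent terms such as "cancelled", "waiting", or "rushed" would point
# straight at a staffing problem the closed questions never covered.
for term, n in counts.most_common(20):
    print(f"{n:4d}  {term}")
```

In practice you would follow the term count by actually reading a sample of the comments; the count only tells you where to look.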

Separating the wheat from the chaff

Once I was involved in a project in which we had behavioural data from over 1,000 people who had indicated they wanted to switch energy providers. These individuals were part of a panel and had agreed to have a piece of software installed that tracked all their internet behaviour (URLs + Google searches).

A couple of months of internet data from one thousand consumers is an enormous pile of data. Most of it, however, has nothing to do with searching for a new energy provider. So, how do you determine what is relevant and what is not? An algorithm could be trained to separate the wheat from the chaff, but it must be fed by a human first: someone must tell the system what is related to the topic and what is not. It’s doable and potentially valuable, but the point is: it will always be a very subjective process.
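To make that concrete, here is a minimal sketch of the human-in-the-loop step, assuming a hand-labelled sample of page visits and using scikit-learn as one possible toolkit. The file names, column names, and the 0.5 cut-off are all hypothetical choices, not a definitive recipe.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A human first labels a sample: 1 = related to switching energy
# providers, 0 = everything else. This labelling is where the
# subjectivity lives; the model can only echo those judgements.
labelled = pd.read_csv("labelled_sample.csv")   # hypothetical: columns 'text', 'relevant'
clickstream = pd.read_csv("clickstream.csv")    # hypothetical: column 'text' (URL + search query)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(labelled["text"], labelled["relevant"])

# Score the full clickstream and keep only the plausibly relevant records.
clickstream["score"] = model.predict_proba(clickstream["text"])[:, 1]
wheat = clickstream[clickstream["score"] > 0.5]
print(f"Kept {len(wheat)} of {len(clickstream)} records as potentially relevant.")
```

Note that the cut-off, the features, and above all the training labels embody human judgement; a different analyst could legitimately end up with a different ‘relevant’ set.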

Algorithms are only Yang

There are plenty more examples, but I hope these illustrate the need for balance in your marketing intelligence efforts. Especially in this day and age, it is tempting to put your trust in data and algorithms. Algorithms can do some things much better than humans ever could. But humans can do some things that algorithms just can’t. And that’s not likely to change for quite some time.
