Evidence-biased marketing
When you test two brand positionings in qualitative research and see a clear preference for one, you have that most prized and elusive marketing commodity: evidence. You have something to take to the board to back up your recommendation, even if it implies the kind of internal changes that make HR and operations people shiver.
Much the same applies if your chosen research methodology is quantitative – in fact, from the evidence point of view, given the metric-focused make-up of the typical boardroom, it’s generally more persuasive: something about sample size and the ability to quote a crisply precise “68.3% agreement with statement 2.ii”.
Things get somewhat trickier when you have followed best practice and tested both qualitatively and quantitatively, only to see one methodology contradict the other. Qual favours positioning A. Quant calls it for B. What kind of evidence are you looking at then?
Well, you have evidence, were it needed, for something you’ve long known but conveniently set to one side: research is not truth. We hanker after certainty in our messy, intuitive discipline, and that encourages us to see it where none exists. A polarised result is therefore a painful but helpful reminder of the limits of asking busy people to give abnormal consideration to trivial brand constructs in order to help us make corporate decisions.
Beyond that, you’re almost certainly looking at evidence of the mutability of human mood, as carefully recruited respondents give conflicting responses according to the research situation. Over tea and biscuits on a sofa, with a moderator calmly unveiling the concepts, a more expansive route might find favour. In the clipped, binary setting of the online questionnaire, completed with a dozen distractions going on in the background, a more transactional mindset holds sway.
Or perhaps equivocation is a signal that neither positioning was particularly amazing, and that consumer indifference trickled either side of the methodological watershed – much as in the US presidential election, where one candidate took the popular vote, the other took more states, and what people were really saying was: “America, is this the best you can do?”
In which case, you have some tough decisions to make. Fine-tune? Use the findings for development, not winner-picking? Go all out for something completely new? All of which will take precious time and imply, no doubt, further research.
Before you go down that arduous path, you might decide to do some forensics with the findings and look for evidence of bias. And you’ll probably find it, since research is rarely innocent of it.
Perhaps the qualitative moderators were too liberal with their interpretations on one of the routes; or perhaps, looking again at the stimulus, you see that one was more lucidly brought to life than the other. On the quant side you might probe more deeply into how the questions were framed, since this is a notorious cause of response variability (see panel).
There is another kind of bias you should consider: your own. The evidence for its existence will be visceral – the elation you felt, the little shiver of excitement, when the route you always favoured made it through in, let’s say, the qual. And the corresponding disappointment when that advantage was nullified by the quant.
Hold on to that bias: it could point the way ahead, especially if shared by the wider marketing team. Here’s why. There is a monumental journey between raw research statements and a positioning made real in everything from call-holding message to product design. The zeal of your team to translate it in the most inspired, imaginative way will be the biggest factor in eventual success.
That still leaves the board to convince, of course, and there is probably no other way through that than a deep breath and candour: “Our recommendation is for positioning route A. It is bold, trenchant and focused on the future. Although the research was balanced, with the more cautious route B performing just as well, we note that there were no gross negatives either way, and no reason not to raise our sights and go for what we believe in.
“You have my commitment that we, as a marketing team, will do everything to ensure that our recommended positioning translates into lasting competitive advantage – and if you need evidence for that, then look into my eyes.”
Even the most cautious HR people, the steeliest operations directors, the most metric-focused board members have been known to be persuaded by that kind of unequivocal passion. And so, come to that, have consumers.
The framing effect

This cognitive bias results in people giving very different answers to the same underlying question, according to the way it is put.
In a classic 1941 poll, 62% of people disagreed with allowing “public condemnation of democracy”, but only 46% agreed that it was right to “forbid public condemnation of democracy”.
In 1988, US academics Irwin Levin and Gary Gaeth showed that evaluations are more positive for ground beef described as “75% lean” than as “25% fat”, and that people are more prepared to consider a medical procedure with a 75% survival rate than one with a 25% mortality rate.
A recent US academic study into ‘framing effects in surveys’ concluded that ‘writing good survey questions may seem deceptively simple. A vast body of survey design research suggests that even slight variations in wording, response options, and question order can affect responses.’