The Never-Ending Battle Against Nonsense.

I recently came across something called Brandolini’s Law.  It was first stated in 2013 by Alberto Brandolini, an Italian software engineer, and it says that the amount of energy needed to refute bullsh*t is an order of magnitude greater than the energy required to produce it.  It’s also called the Bullsh*t Asymmetry Principle.  Mr. Brandolini is hardly the first person to notice this.  Winston Churchill is often credited with having observed that “a lie gets halfway around the world before the truth has a chance to get its pants on.”  (In a beautiful irony, it seems that this attribution itself is BS.  The remark was probably made by Cordell Hull, FDR’s Secretary of State, but people continue to credit it to Churchill.)
If Brandolini is correct that accurate information is at a fundamental disadvantage to nonsense, this has significant implications for market research.  I often find that one of my most important priorities when conducting research is to avoid jumping to conclusions.  I know my clients struggle with this as well.  It’s unsurprising that this is a challenge.  We are pattern seekers by nature, so we look for explanations when presented with data.  Humans are also naturally uncomfortable with not understanding something, and will sometimes prefer a bad explanation to no explanation.  From an evolutionary standpoint, these traits have clear value.  In the modern world, however, they don’t always serve us well.  Exacerbating the problem is the fact that we’re usually under pressure to deliver findings and implications as quickly as possible—often on the spot.
With market research, rushing to judgement can work against you—particularly with qualitative approaches, which allow us to watch findings accumulate over time.  If you’ve ever sat in a focus group back room or observed an online bulletin board, you’ve had the opportunity to see data being created in real time.  Not only is it important to resist drawing conclusions before all the information is in, but, once you do have the data, it’s wise to give yourself some time to mull things over – psychologists call this ‘consolidation’ – before forming opinions.  Doing so too hastily can lead to poorly-thought-out implications and unsound recommendations.  And once these flawed ideas are articulated, they can spread like wildfire, and abandoning or revising them after the fact is nearly impossible – hence Signore Brandolini’s observation.
So, here are some practices I follow to avoid this problem:
  • I make a point of distinguishing very clearly in my own mind between the tasks of determining what I have heard and considering what I think it means.
  • While I’m conducting research, and for at least a few days afterward, I restrict myself to the first task, and hold off on the second. I strongly encourage my clients to do the same.
  • I’ll often schedule a debrief call with clients a few days after the research, the express purpose of which is to allow ourselves to engage in the second task.
  • I consciously give myself permission to change my mind about things in the days following the research.
  • I also strongly encourage all members of the research team to disagree with each other and me. As George Patton used to say, “If everyone is thinking alike, then somebody isn’t thinking.”
One final point. It’s important to bear in mind when looking at quantitative research results that the information you’re reviewing is incomplete.  As we all know, quant gives you a lot of ‘what,’ but not much ‘why.’  It’s qualitative that will provide the story behind the numbers.  So, before you start drawing conclusions based on quantitative data, try to work some qualitative information into your analysis.
I’d love to know your thoughts on this topic.  Feel free to email me or leave a comment on the blog.

Paging Secretary McNamara— Please Retrieve Your Fallacy At The Lost And Found.

Recently, I was reviewing data on keyword searches conducted on Google regarding qualitative research.  Here’s what I learned – a lot of people don’t seem to understand what qualitative research is, how it differs from quantitative, and how it can add value.  This is consistent with my own recent experience.  Many long-standing clients who are experienced research professionals increasingly find themselves defending the necessity of qualitative to their colleagues in marketing, top management, finance and corporate purchasing.  They’re constantly forced to respond to remarks like, “what can this stuff tell me that the numbers can’t?”
Whenever I hear about this, I think about the McNamara Fallacy –  named for Robert McNamara, Secretary of Defense for John F. Kennedy and Lyndon Johnson, and architect of Johnson’s escalation of the Vietnam War.  This phenomenon was described by pollster Daniel Yankelovich – a man who made his living quantifying things – in the early 1970s.  It describes a progression of thinking that starts reasonably, and ends up in near total dysfunction.
Step 1: Measure what can be easily measured.  Nothing wrong here—we should be quantifying what we can.
Step 2: Conduct an analysis that is either based entirely on what can be measured, or that assigns estimated or arbitrary values to those things that can’t be.  Nothing inherently wrong here either, as long as you remember the limitations of such an analysis.  But it’s also risky, as it can easily lead to…
Step 3: Decide that what you can’t easily measure is unimportant.  Yankelovich says, “This is blindness,” and it will take you right to…
Step 4: Conclude that what you can’t easily quantify doesn’t exist.  Yankelovich calls this “suicide”.
This fallacy is named for McNamara because of his approach in Vietnam, in which he decided that objective, quantitative measures – most notably body counts – would be a proxy for success.  McNamara’s focus on these metrics told him the US was winning the war.  But by 1968, it was clear that the numbers were telling a misleading story.
The lessons here:
  • Numbers provide essential information. However, by themselves, they only tell us what can be quantified – and the fact that something can’t be quantified doesn’t mean it’s non-existent, or that it isn’t important.
  • The numbers themselves must be questioned. Had Bob McNamara taken a closer look at the body count figures he was receiving – using tools that dig beneath the numbers and help them tell a more accurate, more enriching story – he might have interpreted them very differently.  It’s important to remember that every data point represents something: a person, an event, a memory, a perception, and so on.  If we are to truly understand the numbers in aggregate, and make good decisions as a result of that understanding, it is imperative that we spend time looking at the things those numbers represent.  This is where qualitative tools – such as conversation, observation and creative exercises – can add so much value.
  • It’s valuable to remind ourselves periodically why we use metrics in the first place. We do this to simplify—to make something understandable that might otherwise be too complex to grasp.  However, we must be very careful in our selection of metrics, as every measure contains assumptions about causality.  McNamara and his staff assumed – obviously incorrectly – that there was a causal link between success and killing more soldiers than the enemy.
Today, it seems we may have forgotten the cautionary tale of Robert McNamara.  With so much data available to us – and often of such high quality – we can forget that the power of data lies in the stories it tells and the humanity it describes.  And it’s qualitative tools that help us find those stories and that humanity.  Often, qualitative and quantitative are presented to us as an either/or decision.  But this is a false choice.  The two must work together to uncover the truth.  And so we – as those responsible for interpreting data for the purpose of informing decisions – must always remember to dig deeply into that data and the assumptions that underlie it by creating research approaches that meld numbers and stories.

The Electrodes Are Coming!

I recently attended NeuroU 2019, and it was a fascinating two days during which I immersed myself in the world of neuromarketing.  One key thing became abundantly clear: biometric data is about to become a thing, and marketers had better get ready for it.  Here are two key takeaways I think you’ll find interesting.

Biometrics Can Valuably Augment the Data We Already Collect

When we take the types of data typically provided by marketing research—survey responses, syndicated data and qualitative learnings—and combine them with sources that measure physiological response to research stimuli, we can add valuable insight to our findings.  These physiological metrics can document respondent attention and engagement.

For example, suppose we show qualitative research participants a visual stimulus such as a print ad, a webpage, a package mockup or a retail shelf set.  In addition to discussing the stimulus, we could augment the findings with some eye tracking, which would tell us what people actually looked at, when, and for how long.  “Hold on,” you say, “eye tracking has been around for decades; what’s new and different about that?”  Now we can also add in measures like heart rate, pupil dilation and galvanic skin response, so we can determine which elements correlate with a physiological response.  This tells us which elements were actually engaging, and which merely elicited attention but no real interest.
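
To make this concrete, here is a minimal sketch of what that fusion might look like on the analysis side.  It uses hypothetical exports – invented element names, fixation timestamps and a galvanic skin response trace, not any vendor’s actual file format – and simply flags which elements drew both a long look and a physiological lift:

```python
import pandas as pd

# Hypothetical exports: per-element fixations from an eye tracker, and a
# galvanic skin response (GSR) trace sampled every 0.5 s over the same exposure.
fixations = pd.DataFrame({
    "element": ["headline", "hero_image", "logo", "body_copy"],
    "start_s": [0.5, 2.0, 6.5, 8.0],
    "end_s":   [1.8, 6.0, 7.5, 11.0],
})
gsr = pd.DataFrame({
    "time_s": [i * 0.5 for i in range(24)],   # 0.0 .. 11.5 seconds
    "microsiemens": [2.1, 2.1, 2.2, 2.2, 2.9, 3.4, 3.6, 3.5, 3.2, 2.8,
                     2.5, 2.4, 2.3, 2.3, 2.2, 2.2, 2.3, 2.2, 2.1, 2.1,
                     2.1, 2.0, 2.0, 2.0],
})

baseline = gsr["microsiemens"].median()

def gsr_lift(row):
    # Mean GSR above baseline while the participant was fixating on this element.
    window = gsr[(gsr["time_s"] >= row["start_s"]) & (gsr["time_s"] <= row["end_s"])]
    return window["microsiemens"].mean() - baseline

fixations["dwell_s"] = fixations["end_s"] - fixations["start_s"]
fixations["gsr_lift"] = fixations.apply(gsr_lift, axis=1)

# Long dwell with little lift = attention without engagement.
print(fixations.sort_values("gsr_lift", ascending=False))
```

The same join would work with heart rate or pupil dilation in place of GSR; the point is simply that attention and engagement become separable once the physiological trace is lined up with where the eyes were.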

Perhaps the stimuli are more dynamic – a TV commercial, say, or shoppers exploring a retail environment.  We can now measure EEG response continuously during the exposure period, or gather facial coding data.  Both can provide significant insight into the nature of an individual’s emotional responses to a stimulus.  When we combine this information with traditional quantitative measures (such as recall and persuasion) and insights gathered during qualitative discussion, we can substantially increase our understanding of how consumers are responding to messages and environments.
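
For dynamic stimuli, the same idea plays out over time.  The sketch below (again using invented scene markers and emotion scores rather than any real facial-coding platform’s output) shows how a continuous, per-second “joy” signal might be summarized scene by scene, so it can sit alongside recall and persuasion scores and the qualitative discussion:

```python
import statistics

# Hypothetical per-second facial-coding output: probability of "joy" for one
# viewer watching a 30-second commercial.
joy_trace = [0.05, 0.06, 0.05, 0.10, 0.22, 0.35, 0.41, 0.38, 0.30, 0.21,
             0.15, 0.12, 0.10, 0.09, 0.08, 0.07, 0.07, 0.06, 0.12, 0.25,
             0.44, 0.52, 0.55, 0.50, 0.42, 0.31, 0.20, 0.14, 0.10, 0.08]

# Scene boundaries (in seconds) taken from the spot's storyboard.
scenes = {
    "problem_setup":  (0, 10),
    "product_demo":   (10, 18),
    "payoff_and_tag": (18, 30),
}

# Average the emotional signal within each scene so it can be read alongside
# recall, persuasion and what respondents said about those same scenes.
scene_scores = {name: statistics.mean(joy_trace[start:end])
                for name, (start, end) in scenes.items()}

for name, score in sorted(scene_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:15s} mean joy = {score:.2f}")
```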

The Hardware and Software Are Pretty Much Ready for Mainstream Usage

While biometric data was always interesting in theory, significant logistical challenges made it impractical for typical applications. The software was not user-friendly, the hardware clunky and temperamental, and costs usually prohibitive.  Over the past few years, suppliers have devoted significant resources to address these challenges, and now offer turnkey hardware and software suites that can provide reliable data at an extremely reasonable cost.

The upshot—it might be time to start dipping your toe into this end of the pool.  Used appropriately, biometric data has the potential to be a major problem-solver for researchers.

On the Value of Humility.

My grandfather once asked me, “Who do you want to be—the guy who has all the answers, or the guy who has all the questions?”  When I said I wanted to be the guy with the answers, he laughed and said, “In that case, you’re never going to learn a damned thing.”  At the time, I doubt I understood his point.  And it wouldn’t be the last time somebody tried to make it to me.
Before getting into qualitative, I spent several years in CPG brand management, working in several marketing groups over that period. Culturally, they all had one thing in common: admitting you didn’t know something about your brand or consumer was very risky.  It was seen as evidence that you weren’t fully immersed in your business.  Most of the consumer research in which I was involved was highly confirmatory—we were simply looking to verify what we thought we knew to be true.  At one point, a moderator we were working with was admonished not to ask exploratory questions and to just stick to the discussion guide he’d been given. He asked, “If you guys know so much about your consumer and what she wants, why are you doing research at all?”  The sarcasm in his tone was unmistakable.
Shortly after getting into qualitative, I was conversing with another moderator, the late Jan Beehner-Chandler.  She made one remark I’ve never forgotten … “you can’t tell some clients and researchers anything.  They think they already know everything, so they won’t listen, and so most of the things they could learn from research go right by them.”
Eventually I got the message. Thinking you have all the answers is antithetical to insightful research, because the most important element in uncovering new information is humility.  Without humility – the overt acknowledgement of one’s own shortcomings and ignorance – there can be no curiosity, no ability to question and research and learn.  So, whenever I embark upon a new study, I always begin from a place of humility.  I list out:
  • What I believe to be true beyond reasonable doubt
  • What might be true but I don’t know for sure
  • What I know that I don’t know
  • Some thoughts on what I don’t know that I don’t know.
I use this to inform my initial conversations with my client, and to develop research objectives, discussion guides and research stimuli.  And all of this comes from consciously acknowledging that there’s stuff I don’t know.

So You Ask: What’s a Hybrid?

Here’s a question I get asked a lot … “Do you prefer doing online or face-to-face qualitative research?”  The answer I always give … “Yes, I like them both.”  And here’s a closely-related question that I also often get … “What’s the right approach for a specific research study, online or in-person?”  And I’ll frequently give the same answer … “Yes, let’s do both.”  For in fact, the best approach is often to use both.  In the past, our qualitative research toolbox was pretty limited. We had focus groups of various sizes, in-depth interviews (IDIs), and maybe some telephone interviews.  Now we have a dizzying array of tools available to us, and often the best way to get the most bang out of a research buck is to combine them.
The guiding principle here is that online (OL) and face-to-face (F2F) research tools have very different strengths and weaknesses.  So, by putting them together, we can create an approach that yields far more insight than any of its individual components can alone.
  • F2F research approaches offer a high level of engagement. I don’t care what anybody says, no online approach can offer the same level of deep, personal connection that in-person research can.  Having everybody in the same physical location also allows a high degree of flexibility.  Because of all this, F2F is where new and unexpected insights are most likely to come to light.
  • OL is highly time and travel efficient. It can also be a big problem solver for low-incidence recruits, as it allows you to recruit from a national sample.  What’s more, it’s an efficient way to get a lot of the preliminaries out of the way (introductions, basic attitudes and practices, etc.), and it’s also a good way to test a large number of ingoing ideas or hypotheses and discard the weaker ones.
You can approach ‘hybrid’ in two ways. Online can serve as a precursor to F2F research; by creating relationships with research participants online, you can set yourself up for maximum-quality face-to-face interactions.  Because you’ve already established rapport with participants, you’re positioned to have an especially candid, productive discussion. The reverse can also be true: you can start a study with an in-person phase during which you develop some initial hypotheses, and then test those hypotheses in a variety of ways very efficiently online.
So, with all this said, the question we should always be asking ourselves is … “How do we best combine OL and F2F tools to provide the richest, most insightful research?”