Things Look Clearer with the Right Lens.

The Most Important Research Tool You’re Probably Not Using—Redux!
Recently, I worked on a research study involving patients with familial hypercholesterolemia (FH), a genetic disorder characterized by extremely high cholesterol that is generally treated with statin drugs. One key topic we struggled with was non-compliance—patients not taking their statins. Our most useful tool in finally understanding this issue turned out to be mindset models—perceptual lenses. One surprising finding: two previously undiscovered user segments emerged, each clearly defined by a cluster of mindsets.
After reviewing the transcripts and screening data, we noticed additional behavioral and demographic similarities among the people in each segment. So, by listening for mindsets, we were able to identify two distinct – and possibly very important – segments for further study, and significantly improved our understanding of compliance. This speaks to the power of mindsets as a market research tool, and that’s why I’m reposting one of my most viewed newsletters – somewhat edited and shortened:
The Most Important Research Tool You’re Probably Not Using.
Of all the posts I’ve written, the ones that get the greatest response are those on mindset models. This isn’t surprising—I like these tools for the same reasons as everybody else. They’re powerful, intuitive, quick to learn and easy to use. And yet, when I talk to marketers and fellow researchers about them, I often get a blank stare. So here’s why everybody should know about mindset models.
A lot of analytical tools are extremely complex. Examples include the Fogg Behavior Model, the Utility Trade-off Model and the Keller Brand Equity Model. I make frequent use of all of these—they’re invaluable. However, they take a long time to internalize, and longer still to master.
Not so with mindset models. They can be described in a few minutes and, once explained, you’ll immediately know how to use them. It’s easy to build an extensive set of them; I’ve identified over 30 mindset models that I use regularly.
So, what do I mean by ‘mindset’? My definition: a mindset is a system of perceptions and attitudes formed by circumstances, intentions, experiences and needs. Put simply, a mindset is how somebody perceives or relates to a situation. When you understand somebody’s mindset, you’re seeing the world from their point of view. That’s a powerful thing to be able to do, particularly if you’re struggling to understand your data, or if you want to look at your data with a fresh set of eyes. A good metaphor for mindsets is lenses—when you look through the right one, everything comes into focus.
In addition to the other mindsets I’ve written about, here’s another one: possibility versus feasibility. Individuals who have a possibility mindset see situations in terms of what could be, paying little regard to what’s realistic. Those with a feasibility mindset focus on whether or not something can be done, and how.
Why is this important? Because, if you’re assessing new product ideas, mindset will profoundly influence interest level. For instance, early adopters of new technologies tend to have a mindset of possibility—they get excited about the promise of a new idea, and don’t get hung up on why it might not work. So, when communicating with early adopters, marketers should account for this. On the other hand, later adopters tend to have a mindset of feasibility, focusing on potential problems and shortcomings. That’s why they choose to wait to try, why marketers should probably not focus on selling new technologies to these consumers, and why communications aimed at them might do well to focus on a product’s proven track record.
It’s also important to be aware of these mindsets when conducting ideation or co-creation sessions—participants who can’t leave behind a feasibility mindset are unlikely to be able to contribute. So, if you’re recruiting consumers for such an event, a few attitudinal screening questions can greatly increase your chances of success. It’s also a good idea to include exercises to foster possibility thinking.
Making use of mindset models is easy—just get into the habit of listening for the role mindsets are playing in perceptions, attitudes and behaviors. If you’re having difficulty understanding people’s opinions, think about what their mindset might be. Remember that every mindset has its own logic, and if you can identify the operating mindset, points of view will start to make more sense. I have a checklist of questions I routinely ask myself to help focus on mindsets:
  • What mindsets are evident?
  • How are they shaping perceptions?
  • What behaviors are they driving?
  • What can they tell us about segments?
  • What are the implications for branding and tactics?
Refer to this list when conducting research, reviewing data, developing brand strategies or tactics, and when arguing with your significant other. After a while, you’ll find you’re in the habit and won’t even need to think about it. Then you’ll be a Master of Mindsets!
For more information on specific mindsets, click these links to the TMRA website:
Original Post:  https://thomasmrich.com/2020/01/09/the-most-important-research-tool-youre-probably-not-using/
Scarcity:  https://thomasmrich.com/2019/09/25/why-being-poor-is-so-expensive/
Maximizing & Satisficing: https://thomasmrich.com/2019/08/21/when-good-enough-is-good-enough-2/
Evolutionary Psychology: https://thomasmrich.com/2019/10/28/on-the-benefits-of-generosity/
Morality:  https://thomasmrich.com/2020/10/09/the-elephant-in-the-room/
Loneliness:  https://thomasmrich.com/2020/07/13/remember-eleanor-rigby/

Here’s Looking at Me, Kid.

How we see ourselves is central to how we decide among alternatives.
Last summer I wrote a post about my Utility Tradeoff Model. Since then, quite a few of you have asked me to say more about the various tradeoff currencies. In particular, people really want more information about self-esteem. This makes sense, as qualitative research is indispensable to understanding self-image issues, and has been since its beginning. Remember the Betty Crocker cake mix story in that Utility Tradeoff Model piece? This is often where qualitative researchers earn their pay.
I’ve identified nine specific factors that seem to drive self-esteem and influence decision-making, and have trained myself to listen for them when conducting research.
Knowledge and skills. The things we know and can do are fundamental to how we see ourselves. If you know how to bake a cake from scratch, buying a cake mix could compromise your ability to activate and demonstrate that skill.
Possessions and wealth. I know, it’s shallow. But nobody is immune to seeing themselves through the lens of their stuff. If you just built an expensive new kitchen, you’re going to want to have people over for dinner instead of eating in a restaurant. Maybe you’ll even bake a cake.
Affiliation. We all want to be like Mike. The people, places and brands we associate ourselves with matter a great deal. There’s a reason GEICO pays that adorable little lizard so much money every year.
Altruism. Acts of sacrifice tend to make us feel good about ourselves. That’s why we’re so eager to pay a premium for eco-friendly products, and why nonprofits routinely publicize their donor lists. We also love to suffer for our children, despite how much we complain about it (and how ungrateful they are).
Nurturing. Much like sacrifice, the desire to feed, protect and care for those who depend upon us is hard-wired into our emotional makeup and profoundly influences our self-image—you can’t fight 60 million years of primate evolution. The impulse to nurture is as powerful as any human beings have. That’s why so many brands promise that using their products will make you a better parent.
Morality. Our moral code defines us as much as anything, and the morality mindset is probably the dominant lens through which we see the world and ourselves. For more about morality, read this recent blog post.
Control. We all want to exert influence, whether over ourselves, our circumstances or our environment. That’s one of the fundamental appeals of smart devices. It’s also why so many consumers prefer to buy multiple, separate OTC cough and cold products rather than purchasing a single multi-symptom remedy.
Uniqueness/individuality. We all want to feel special, despite our crushing ordinariness. We particularly want to believe our children are unique. This is often at the heart of the appeal of home décor services, fashion consultation services, tutoring services and highly customizable products.
Ability to add value. Just about every premium haircare brand offers a full line of products to meet all of its consumers’ needs. However, only a very small percentage of a brand’s consumers use that one brand’s products to cover all their haircare needs. And why do jarred pasta sauce users insist on doctoring the product instead of serving it as-is? Because, by creating a customized product or array of products, users become part of the creation of the product’s benefits, thus generating additional value over and above the value provided by the products themselves. This act fosters considerable pride. If you’ve ever heard enthusiastic cooks describe the process by which they curated their collection of knives, you’ll know what I mean.
Obviously, this is my self-esteem checklist, based upon my own experiences as a researcher and a human. I’ve found it to be an indispensable tool, particularly when analyzing research results. However, I’d love to get the perspective of others on this topic. How do you look at self-esteem when conducting and analyzing research? What trends and consistencies have you noticed? What techniques do you use to spark discussion of self-image? Please get in touch and let me know.

Lies Can Be as Interesting as the Truth.

How to be a human lie detector.

Several years ago, my friend Jack, who’s an FBI agent, described himself as a ‘human polygraph.’ When I asked him how he did it, he said, “there’s no magic. I just pretend I have a terrible memory.”
It’s easy to over-focus on getting truthful responses from research participants. Not to say that truth isn’t important, but it’s good to remember that sometimes it’s valuable to let people lie to you. When they provide dishonest answers to your questions – whether deliberately or unintentionally – instead of describing the world as it is, they’re describing the world as they would like it to be, or telling you how they would like to see themselves.
This is particularly valuable learning when dealing with sensitive topics. People can be reluctant to tell you how they really feel about things, and instead tell you what they think you want to hear or what they think is socially acceptable. In other words, they’re providing clues about what embarrasses them, or compromises their self-esteem, or makes them uncomfortable. This can be key to uncovering hidden motivations.
So, when conducting research and analyzing your data, it’s important to be able to distinguish truth from falsehood. Over the course of my career, I’ve identified three invaluable techniques for telling whether research participants are being honest. To be clear, these methods aren’t perfect, but I’ve found them to be pretty reliable.
1)  Ask the same question repeatedly.
This is the one I learned from G-Man Jack. To determine how truthful an answer is, ask the question multiple times and in a variety of ways. When people are telling the truth, they have no difficulty sticking to their story. But when they’re dissembling, their replies can be inconsistent, and the story tends to wander closer to the truth the more you press them.
Sometimes I’ll ask the question exactly as I asked it in the past, sometimes slightly differently. Such as:
  • Why do you use Tide?
  • What are your reasons for using Tide?
  • Why do you think so many people use Tide?
  • Why don’t you use Gain?
  • Why do you love Tide?
  • What is the best thing about Tide?
  • What are three things you like about Tide?
  • How would you describe people who love Tide?
  • What do Gain users not understand about Tide?
  • What don’t you like about Gain?
  • How do users of Tide differ from users of Gain?
  • And so forth. (I can do this all day.)
Don’t be uncomfortable asking a question multiple times. If a participant calls you on it, own up: “Yup – you caught me!” Then explain that asking a question multiple times is a common research practice.
Pro-tip: Put your question variations in your moderator’s guide. It’s also a good idea to manage the expectations of your client: make sure they know that you’re going to use this probing tactic. Otherwise, they might think you’re not paying attention to the participants’ answers.
2) Pretend to forget.
Deliberately misremember what respondents have told you earlier in the conversation. This gives them the opportunity to correct you.
“You mentioned earlier that you don’t care for Tide. Does that mean you prefer Gain?”
If the participant doesn’t correct you, you just learned something. If they do, apologize and ask them to remind you of their previous answer, and see how closely this answer matches the earlier one.
3) Biometrics
A few years ago, I was conducting research among endocrinologists, exploring the idea of active patient involvement in treatment decisions. When we showed the doctors concepts for tools to increase patient participation, nearly all said they were highly interested in the idea.
However, we didn’t just have to take their word for it. We were also capturing biometric data during the interview—specifically, galvanic skin response and facial coding. For some of the participants, the GSR showed surprisingly strong physiological responses to the descriptions, and the facial coding showed a stew of negative emotions—specifically anger, fear and contempt. So the biometric data was in stark contrast to what those participants were saying. The big insight: while some doctors might dislike the idea of patient involvement in medical decisions, they may not feel it’s socially acceptable to say so. This led the client to take a decidedly different approach to marketing the patient involvement tools.
One additional thought—you can use these techniques in your personal life as well. Significant others particularly enjoy being questioned this way.  And your kids will love it too, although they might resist being wired up to biometric equipment.

Putting Yourself in a Box.

The Underrated Power of Oversimplification. 
When I left consumer packaged goods brand management and started working in qualitative research, I received a crucial piece of advice from my uncle. He had asked me about my plans. I gave a detailed explanation of the kind of work I planned to do and the type of client I intended to pursue. He nodded and said, “That sounds great. But you need to come up with something much shorter if it’s going to make an impression.” “How short?” I asked. “Ten words,” he replied. I pushed back, saying that such a short description wouldn’t do justice to my abilities. He replied that this was unimportant. What mattered was giving people an explanation that they could easily understand, remember, and see as relevant to themselves. “So what if it’s oversimplified? That’s your problem, not theirs.” In fact, he added, if it’s not oversimplified, it’s probably too long and too complicated to be effective.
So, taking my uncle’s words to heart, these are some overly simple ways I describe myself to people outside the world of marketing or market research:
“I get people to talk to me about stuff.”
“I help clients see their business through their customers’ eyes.”
Here’s another thing I’ve learned: it’s a good idea to describe yourself in comparison to something familiar. People need to be able to categorize you easily—put you in a box—even if you don’t precisely fit into that category. You have to let them think about you in terms that are relevant to them, not you. With that in mind, here are some oversimplified ways I describe myself to people inside the world of marketing and market research:
“I’m a focus group moderator.”
“I’m a qualitative researcher.”
These descriptions go directly to concepts that are familiar to this audience. While my expertise with qualitative research tools goes well past just conducting focus groups, ‘focus group moderator’ is a convenient shorthand that’s often used to describe my profession. Similarly, while I certainly am a qualitative researcher, the type of expertise and insight I provide to my clients goes far beyond just qualitative research skills. But, again, ‘qualitative researcher’ is a well-understood frame of reference, and so it’s a good place to start.
So, the point here is that mere simplicity, while laudable, may not be sufficient by itself. Over-simplification might be necessary to make messages memorable and effective.
This principle also applies to branding and marketing challenges. It’s common to encounter product concepts or advertising prototypes that are overly complicated, and it’s no secret why they test poorly. But I’ve tested research stimuli that were simple and straightforward and still didn’t resonate. Only when we oversimplified the message did the respondents react positively. For instance, I once tested concepts for a new high-fat baking chocolate. The first concepts – which were fairly brief but fully accurate descriptions of the product – were greeted with confusion; the participants couldn’t grasp the idea. However, when we showed a concept that said “it’s like chocolate mixed with butter,” the respondents became extremely interested. This really wasn’t an accurate description of the product at all, but it offered the participants a familiar and appealing frame of reference, which made them want to learn more. The oversimplified description was the one that resonated.
Something that makes this principle of oversimplification challenging is the fact that marketers and market researchers tend to be highly rigorous thinkers, and oversimplified messages make us uneasy. So, embracing this concept may require you to go against your nature.
One more thing: an emotional component to a message is crucial to engagement and memorability.  The simpler a communication is, the less people have to work to understand it, and the easier it is to find an emotional hook. In other words, the less you have to think, the more you can feel.
So, go ahead—put yourself in a box. Get comfortable with over-simplification. It’s often the path to the most effective messages.

Gather ‘Round the Campfire!

I once heard a historian remark that maps are like campfires: everybody gathers around them because they bring simplicity to the complex, and show us how to get where we’re going.
There’s no shortage of wisdom on the basics of qualitative research guides, but a few concepts rarely get discussed. Regardless of the type of qualitative research being conducted, ‘the guide’ is the roadmap. Depending on the methodology, it goes by different names: moderator’s guide, discussion guide, topic guide, interview guide, or activity guide. No matter—the same principles apply. So here are a few little-discussed but crucially important ideas to keep in mind when creating effective guides. They can lead you to breakthrough insights.
The guide must allow key topics to surface organically. I once conducted research for a new brand in an existing medication category that wanted to address the problem of needing water when taking tablets. The ad agency proposed starting the focus groups with the advertising prototypes created for the research, with no time spent discussing the participants’ category experiences and attitudes. The research team pushed back, believing that some initial discussion around category pain points could be enlightening. Fortunately, that’s what we did, because the big finding from that part of the discussion was that needing water never came up on its own. When I finally prompted for it, most participants agreed it was a bit of an issue, but that was as far as they were willing to go. Ultimately, the client realized that their upcoming marketing program was oriented around a problem that barely existed, and they were able to revise their approach.
The point is that when and how discussion points arise can be some of the most valuable learning gained from qualitative. So it’s good practice to allow things to come up on their own whenever possible. This will allow you to observe when something arose, whether it did so with or without prompting, and, if unprompted, what led to the topic arising. What vocabulary did the participants use when bringing it up? If it had to be brought up by the moderator, do the participants have any thoughts as to why? Clearly identifying topics in the guide that will not be prompted will allow these conversations to happen, leading to key insights.
The guide must be created collaboratively. The purpose of market research is to mitigate business risk and to guide decisions. To do that effectively, all stakeholders must be involved in designing that research. This could include internal and external researchers, the brand team, R&D, various creative agencies and senior management. All stakeholders must fully buy into the research objectives and approach, meaning they must have input into the guide. The creation of a guide is often an iterative process in which the researcher gains understanding while clients are able to focus and refine their thinking. Sometimes clients go into research with a fairly good idea of what they want to do, but it’s not fully fleshed out. There’s nothing wrong with that, but the process of collaborating with the moderator to write the guide is the perfect opportunity to figure all that out. The irony here is that, if this process is fully collaborative, by the time the research arrives, everybody knows the guide so well that nobody, client or moderator, needs to look at it very much.
The guide must be adaptable. As researchers have been saying since the beginning of time, ‘it’s a guide, not a script.’ This means more than simply that the moderator isn’t going to read every question exactly as written, and in the order presented. While the guide must include all of the issues to be explored and provide a rough plan for how that will be accomplished, it must also allow for a good deal of flexibility. Topics will not necessarily come up in the expected order, some questions will fall flat or confuse the participants, some exercises will not be successful and unexpectedly interesting new topics might surface. Therefore, the guide should provide a variety of potential approaches for the discussion, not all of which might be used, and should allow the researcher to adjust depending on the flow of the discussion. It should also provide alternate orders for the various guide sections.
So, to sum up, if you want to maximize the possibility of uncovering groundbreaking insights, make your guides organic, collaborative and flexible.
Note: If you would like to read more about the basics of creating effective guides, a comprehensive list can be downloaded from the article at the link below.