Cross the River by Feeling the Stones.

Trial and error is often the path to success.
Sometimes you just have to try something and learn as you go. A few years ago, I conducted research on concepts for a new healthcare-related service. Most of these ideas were quite well developed. However, while we were designing the research, the client mentioned one idea they had that was so ‘out there’ that they didn’t quite know how to describe it. In fact, they even struggled when discussing it amongst themselves. That being the case, the client felt we should hold off on testing this idea.
I pushed back a little, pointing out that squeezing in one more concept would not be a big deal. I said, “Look, just write something up. It doesn’t matter how bad it is. We’ll show it in the first group, see what people say, adjust accordingly, and then show it again to the next group. By the time we get to the end of the research, maybe we’ll have something.”
The reaction to the concept in the first group was mostly confusion. But after some discussion, the participants began to understand the idea and made suggestions as to how to describe it better. We took this feedback, edited the concept, and showed a revised version in the second group, where it did a little better.
As the research went on, we continued to adjust. By the final group the participants understood the idea fairly well, but still didn’t quite know what they thought about it. Afterwards, the client felt they had a much better understanding of what their idea was and what they needed to do internally to continue developing it. About a year later, we tested fully developed descriptions, which performed very well in certain consumer segments.
Just taking your best shot and adjusting along the way can be nerve-wracking, but it’s also effective: iteration is often our greatest teacher. We live in a staggeringly intricate world, and many systems and situations are so complex that they simply can’t be intuitively understood. It’s naïve to think you can design a perfect approach up front. Don’t fall into the trap of thinking that knowledge and experience alone will lead you to an optimal solution. This principle applies to all aspects of our lives, and certainly to marketing and market research.
Here are some examples of how a trial-and-error approach can be used in marketing and market research:
  • Some of my clients routinely optimize marketing tactics like service bundles, promotions and banner ads by introducing something in a limited geography or for a very short period. They see how it performs, tinker with it a bit, and then broadly introduce the optimized tactic. (A minimal sketch of the test-versus-control math behind this kind of pilot appears after this list.)
  • When I write a discussion guide, I make a point of including multiple possible ways to ask questions, and several back-pocket exercises to be used if the primary exercises don’t work as hoped. It’s not realistic to think that everything I put into the guide is going to elicit the desired information, and it’s important to be ready to turn on a dime.
  • When screening research participants, give yourself plenty of time for recruiting. It’s likely you’ll learn some things during the first few days of screening that will make you realize you need to make some changes to either the research specifications or the questionnaire.
  • When designing research, consider breaking it into multiple phases. Depending on your objectives it might be valuable to have time to step back, think, adjust and do some more research. A phased approach can allow that.
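For readers who want to see what the limited-geography pilot in the first bullet boils down to, here is a minimal sketch in Python. The function name and all the numbers are hypothetical; real test markets involve far more design work, but the core read on “did the test geography outperform the control?” is just a two-proportion z-test.

```python
from statistics import NormalDist

def two_proportion_ztest(conv_test, n_test, conv_ctrl, n_ctrl):
    """Two-sided z-test comparing conversion rates in a test market vs. a control market."""
    p_test = conv_test / n_test
    p_ctrl = conv_ctrl / n_ctrl
    p_pool = (conv_test + conv_ctrl) / (n_test + n_ctrl)   # pooled rate under the null hypothesis
    se = (p_pool * (1 - p_pool) * (1 / n_test + 1 / n_ctrl)) ** 0.5
    z = (p_test - p_ctrl) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided p-value
    return p_test, p_ctrl, z, p_value

# Hypothetical pilot: a new service bundle offered in one test geography
p_test, p_ctrl, z, p = two_proportion_ztest(conv_test=132, n_test=2400,
                                            conv_ctrl=98, n_ctrl=2350)
print(f"test {p_test:.1%} vs. control {p_ctrl:.1%}  (z={z:.2f}, p={p:.3f})")
```

If the p-value is small, the tactic likely moved the needle; if not, you tinker and rerun. Either way, the pilot has taught you something before the broad rollout.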
I’ll leave you with one final thought. To learn from trial and error, you need to be willing to fail, so it’s important to manage expectations. And don’t let yourself get too discouraged when your first try at something goes badly; remind yourself that it’s all part of the process. Early failures are the price of ultimate success.

Pick Losers, Not Winners.

Once, as a brand manager, I was involved in qualitative testing of four advertising directions for an upcoming campaign to identify the strongest one for full-up production. Unfortunately, no clear winner emerged. After the final focus group, we agonized about what to do. Finally, the moderator piped up: “You people are crazy. What did you expect? Did you really think such a small number of consumers could provide that kind of guidance?” He went on to point out that one ad was clearly a dog and one was mediocre, leaving us with two that delivered on all aspects of the creative strategy. He told us, “You’ve got two good options. Just make a decision. As a group, you’re fully qualified to make the call.”
This was a good lesson for me, one that I have taken to heart ever since. Instead of trying to crown a champion, eliminate the losers and select a ‘winner’ from what remains.
I conduct a lot of qualitative research that involves showing creative stimuli. These include things like package designs, advertisements, and new product concepts. It’s common for clients to set identifying the strongest execution as a goal for this type of research. This is an understandable objective, but often not realistic.
Picking the single best performer from several possibilities requires both precision and accuracy if you’re to have confidence in your results. Qualitative research, much as I love it, isn’t great at either. Not only are the samples too small, but the research approach doesn’t lend itself to these goals: the sample is not designed to be balanced and projectable, and the data-gathering methods, which are informal and exploratory, don’t produce the kind of consistent data that accuracy and precision require.
As we all know, qualitative research shines when your goal is to explore foundational issues and develop hypotheses for further testing. It is that additional testing, particularly if it is properly designed quantitative research, that will provide accurate, precise, statistically reliable data.
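For those who want the statistical intuition behind “the samples are too small,” here is a back-of-the-envelope sketch using the standard normal-approximation margin of error for a proportion. The sample sizes are illustrative, not drawn from any particular study:

```python
# Margin of error (95% confidence) for an observed preference share p with sample size n:
#   moe = 1.96 * sqrt(p * (1 - p) / n)
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

for n in (10, 30, 400):
    print(f"n={n:>3}: an observed 50% preference is really 50% ± {margin_of_error(0.5, n):.0%}")
# n= 10: ± 31%  -- one focus group can't separate two decent ads
# n= 30: ± 18%  -- several groups still can't
# n=400: ±  5%  -- a properly sized quantitative study can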
While qualitative research usually isn’t up to the task of identifying the single strongest option, it’s more than adequate for eliminating the weaker ones. In addition, it can provide detailed and nuanced insights about each execution to inform the client’s process for making a final decision. With that information in hand, clients can feel confident choosing their own winner.
If clients feel they must select a winner, include evaluative criteria in the discussion guide, such as relatability, sense of urgency, and main-message playback. Also, have clients identify any non-research-based criteria that can be considered, such as strategic considerations, the competitive environment, and consistency with past marketing activities. You might also want to consider a hybrid approach in which the qualitative work informs a more in-depth quantitative phase.
Regardless of your approach, it’s a good idea to agree – beforehand – on what the tiebreaker criteria will be for final selection. And be sure everybody understands the limitations of qualitative research in picking winners when testing marketing concepts and creative stimuli.