Imagine the following scenario: You’re the marketing manager for a leading brand of household products, and you’re considering a new line of eco-friendly, multi-purpose cleaners. You’ve studied market trends, sized up your competition, and conducted exploratory focus groups and consumer interviews. In the process, you’ve identified a number of essential attributes for your new product, including key features and benefits, scent varieties, package design, color schemes and graphic elements. Once you’ve combined all the top ideas from your internal teams and creative agency, you end up with seven possible design options for each of six distinct product attributes. That’s more than 100,000* possible combinations to sift through! Which of those combinations are most likely to resonate with consumers and lead to in-market success?
When faced with so many options, your first step might be to use your best judgment to select a handful of product versions and submit them to a wave of monadic concept tests. In a monadic test, each version is presented to a separate panel of representative consumers, who are asked to rate the proposed product concept on a number of dimensions (such as purchase intent, uniqueness, or relevance) before everyone’s scores are averaged to identify the most promising version. The methodology behind monadic tests is well understood, and the technique is very effective, but it lets you explore only a tiny fraction of all possible alternatives. You need to pre-select the product concepts that you believe are the most promising, and that pre-selection is necessarily biased and often politically charged. You’re most likely missing out on your best options.
Modern choice-based conjoint analysis can help: In that type of research, each respondent is presented with a sequence of product alternatives and asked to select their preferred version in each of those side-by-side comparisons. The collected responses are then used to build a choice model—typically a hierarchical Bayesian logistic regression model—which gives the probability of a respondent choosing one concept over another as a function of the values of their attributes. Unlike monadic concept tests, conjoint analysis makes it possible to explore all values for all attributes, but the models that result from this type of analysis are often too simple to capture the holistic nature of consumer reactions to new products. In most real-life situations, there are important synergies and negative interactions between attributes—especially when aesthetic elements are involved—and those models are generally not good enough to reflect them.
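To make the logit choice rule concrete, here is a minimal sketch in Python. This is not Nielsen’s model: the attribute levels and part-worth values are invented for illustration, and a real hierarchical Bayesian model would estimate respondent-level part-worths from observed choices rather than hard-coding them.

```python
import math

# Hypothetical part-worths: the contribution of each attribute level to
# a concept's overall utility. All names and numbers here are made up.
PART_WORTHS = {
    ("scent", "citrus"): 0.40,
    ("scent", "lavender"): 0.10,
    ("package", "spray"): 0.25,
    ("package", "pouch"): -0.15,
}

def utility(concept):
    """Sum the part-worths of a concept's attribute levels (linear utility)."""
    return sum(PART_WORTHS.get((attr, level), 0.0) for attr, level in concept.items())

def choice_probability(concept_a, concept_b):
    """Logit probability that a respondent picks concept_a over concept_b."""
    diff = utility(concept_a) - utility(concept_b)
    return 1.0 / (1.0 + math.exp(-diff))

a = {"scent": "citrus", "package": "spray"}
b = {"scent": "lavender", "package": "pouch"}
p = choice_probability(a, b)  # a has higher utility, so p > 0.5
```

A purely additive model like this one is exactly what the article cautions about: it cannot express a synergy such as "citrus scent works well only with the spray package" unless interaction terms are added explicitly.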
We developed a new approach to address those limitations. It’s based on the principles of genetic evolution: We start with a quasi-random initial set of product versions, present them to respondents and, based on their feedback, select the better performing ones as parents for breeding purposes. The algorithm then uses genetic crossover to combine traits from two parents and breed new product candidates (offspring); mutation to introduce traits that were not present in either parent; and replacement to eliminate poor-performing members of the population and make room for the offspring.
Step by step, in survival-of-the-fittest fashion, the population of new product concepts evolves to reflect the preferences of the respondents, and we end up with perhaps four or five top concepts that can be further investigated. The genetic algorithm is essentially a search and optimization process that is guided by human feedback every step of the way and acts as a learning system. It doesn’t require modeling complex human behavior—and solving the difficult mathematical problems that come with such models—and yet it implicitly accounts for all that complexity.
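The evolutionary loop described above can be sketched in a few lines of Python. This is an illustrative toy, not the Nielsen Optimizer algorithm: in a real study, fitness comes from respondents’ side-by-side choices, so the hidden TARGET preference below is only a stand-in for human feedback, and the population size, mutation rate, and generation count are arbitrary.

```python
import random

random.seed(0)

N_ATTRIBUTES = 6   # six product attributes, as in the article
N_LEVELS = 7       # seven design options each (7**6 = 117,649 combinations)
POP_SIZE = 20
GENERATIONS = 30
MUTATION_RATE = 0.1

# Stand-in for respondent feedback: score a concept by how many of its
# attribute choices match a hypothetical "ideal" combination.
TARGET = [3, 1, 4, 1, 5, 2]

def fitness(concept):
    return sum(1 for gene, ideal in zip(concept, TARGET) if gene == ideal)

def crossover(parent_a, parent_b):
    """Combine traits from two parents (uniform crossover)."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(concept):
    """Occasionally introduce a trait present in neither parent."""
    return [random.randrange(N_LEVELS) if random.random() < MUTATION_RATE else g
            for g in concept]

# Quasi-random initial population of product versions
population = [[random.randrange(N_LEVELS) for _ in range(N_ATTRIBUTES)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # better performers become parents
    offspring = [mutate(crossover(*random.sample(parents, 2)))
                 for _ in range(POP_SIZE // 2)]
    population = parents + offspring       # replacement: drop poor performers

best = max(population, key=fitness)
```

Because the top half of the population survives each generation (elitism), the best concept found never regresses, and the population converges toward the stand-in preference within a few dozen generations.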
The Nielsen Optimizer service is based on this technique, and we’ve already used it for thousands of client projects, to great success. In fact, in an early comparative study, we measured that product concepts identified by Nielsen Optimizer generate an average lift of 38% in forecasted revenue, compared to non-optimized (best-guess) concepts. We typically need 500 to 1,000 respondents to conduct a Nielsen Optimizer study and quickly reduce a set of 100,000 potential product versions down to its most promising candidates—which can then be studied in greater detail with monadic testing.
We will share more details on the genetic algorithm behind Nielsen Optimizer, as well as relevant case studies, in a future edition of the Nielsen Journal of Measurement. While we have more work to do to improve the respondent interface, fine-tune the analytics systems and shorten delivery time, there’s no question that this technique is already making it possible for brand managers to save time, explore more ground and bring their new products to market with much more confidence.
*7⁶ = 117,649