Concept testing

Concept testing (to be distinguished from pre-test markets and test markets which may be used at a later stage of product development research)[1] is the process of using surveys (and sometimes qualitative methods) to evaluate consumer acceptance of a new product idea prior to the introduction of a product to the market.[2] It is important not to confuse concept testing with advertising testing, brand testing and packaging testing, as is sometimes done. Concept testing focuses on the basic product idea, without the embellishments and puffery inherent in advertising.

It is important that the instruments (questionnaires) used to test the product are themselves of high quality. Otherwise, results gathered from survey data may be biased by measurement error, which makes the design of the testing procedure more complex. Empirical tests provide insight into the quality of the questionnaire. This can be done by:

  • conducting cognitive interviewing. By asking a fraction of potential respondents about their interpretation of the questions and their use of the questionnaire, a researcher can verify the viability of the questionnaire.
  • carrying out a small pretest of the questionnaire, using a small subset of target respondents. Results can inform a researcher of errors such as missing questions, or logical and procedural errors.
  • estimating the measurement quality of the questions. This can be done for instance using test-retest,[3] quasi-simplex,[4] or multitrait-multimethod models.[5]
  • predicting the measurement quality of the question. This can be done using the software Survey Quality Predictor (SQP).[6]

In the new product development (NPD) process, concept testing follows the concept generation stage. Concept generation can take many forms. Sometimes concepts are generated incidentally, as the result of technological advances. At other times concept generation is deliberate: examples include brainstorming sessions, problem-detection surveys and qualitative research. While qualitative research can provide insight into the range of reactions consumers may have, it cannot indicate the likely success of the new concept; this is better left to quantitative concept-test surveys.

In the early stages of concept testing, a large field of alternative concepts might exist, requiring concept-screening surveys. Concept-screening surveys provide a quick means to narrow the field of options; however, they provide little depth of insight and cannot be compared to a normative database due to interactions between concepts. For greater insight, and to reach decisions on whether or not to pursue further product development, monadic concept-testing surveys must be conducted.

Frequently concept testing surveys are described as either monadic, sequential monadic or comparative. The terms mainly refer to how the concepts are displayed:

  1. Monadic. The concept is evaluated in isolation.
  2. Sequential monadic. Multiple concepts are evaluated in sequence (often in randomized order).
  3. Comparative. Concepts are shown next to each other.
  4. Proto-monadic. Concepts are first shown in sequence, and then next to each other.

"Monadic testing is the recommended method for most concept testing. Interaction effects and biases are avoided. Results from one test can be compared to results from previous monadic tests. A normative database can be constructed."[7] However, each method has its specific uses, depending on the research objectives. The decision as to which method to use is best left to experienced research professionals, as there are numerous implications for how the results are interpreted.
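The difference between monadic and sequential monadic presentation can be illustrated with a short sketch of respondent assignment; the concept names and respondent IDs are hypothetical:

```python
import random

concepts = ["Concept A", "Concept B", "Concept C"]
respondents = [f"R{i}" for i in range(1, 10)]

# Monadic: each respondent is randomly assigned exactly one concept,
# so no respondent's answers are colored by seeing another concept.
monadic = {r: random.choice(concepts) for r in respondents}

# Sequential monadic: each respondent evaluates every concept, with
# the presentation order randomized to balance out order effects.
sequential = {r: random.sample(concepts, len(concepts)) for r in respondents}
```

In the monadic design, each concept's sample must be large enough on its own, which is why monadic testing is more expensive but yields cleaner, norm-comparable scores.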

Evaluating concept-test scores

Traditionally, concept-test survey results are compared to 'norms databases'.[8] These are databases of previous new-product concept tests, and they must consist of 'monadic' concept tests to prevent interaction effects. It is important that these databases contain 'new' concept-test results, not ratings of old products that consumers are already familiar with, since ratings often drop once consumers become familiar with a product. Comparing new concept ratings to the ratings for an existing product already on the market would be an invalid comparison, unless researchers take special precautions to reduce or quantitatively adjust for this effect. Additionally, the concept is usually only compared to norms from the same product category and the same country.
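A minimal sketch of a norms-database comparison follows. The top-two-box scores and norms are invented for the example; real systems use much larger, category- and country-specific databases:

```python
def top_two_box(ratings, scale_max=5):
    """Share of respondents giving one of the top two scale points."""
    return sum(1 for r in ratings if r >= scale_max - 1) / len(ratings)

def percentile_rank(score, norms):
    """Percentage of past concept scores the new score beats."""
    return 100 * sum(1 for n in norms if n < score) / len(norms)

# Hypothetical norms: top-two-box purchase-intent scores from previous
# monadic tests in the same product category and country.
norms = [0.31, 0.38, 0.42, 0.45, 0.47, 0.51, 0.55, 0.58, 0.63, 0.70]

# Hypothetical 5-point purchase-intent ratings for the new concept.
new_ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
score = top_two_box(new_ratings)
print(f"top-two-box: {score:.0%}, beats {percentile_rank(score, norms):.0f}% of norms")
```

A concept scoring above, say, the 80th percentile of its category norms would typically be advanced to the next development stage, though each vendor sets its own action standards.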

Companies that specialize in this area tend to have developed their own unique systems, each with its own standards. Keeping to these standards consistently is important to prevent contamination of the results.

Perhaps the best-known concept-test system is the Nielsen BASES system, which comes in different versions. Other well-known products include Decision Analyst's 'Concept Check', Acupoll's 'Concept Optimizer', Ipsos Innoquest and GFK. Examples of smaller players include Skuuber and Acentric Express Test.

Determining the importance of concept attributes as purchase drivers

The simplest approach to determining attribute importance is to ask direct open-ended questions. Alternatively checklists or ratings of the importance of each product attribute may be used.

However, there has been debate over whether consumers can be trusted to directly indicate how important each product attribute is to them. As a result, correlation analysis and various forms of multiple regression have often been used to derive importance indirectly, as an alternative to direct questions.
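The correlation approach to derived importance can be sketched as follows: each attribute's rating is correlated with an overall purchase-intent rating, and attributes with stronger correlations are treated as more important purchase drivers. All ratings are hypothetical:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 5-point ratings from six respondents: three product
# attributes plus an overall purchase-intent question.
attributes = {
    "taste":     [5, 4, 3, 5, 2, 4],
    "price":     [3, 3, 4, 2, 5, 3],
    "packaging": [4, 4, 3, 4, 3, 4],
}
purchase_intent = [5, 4, 3, 5, 2, 4]

# Derived importance: how strongly each attribute rating tracks intent.
derived = {name: pearson(vals, purchase_intent)
           for name, vals in attributes.items()}
for name, r in sorted(derived.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:10s} r = {r:+.2f}")
```

In practice multiple regression is preferred over raw correlations when attributes are themselves correlated, since it partials out shared variance.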

A complementary technique to concept testing, is conjoint analysis (also referred to as discrete choice modelling). Various forms of conjoint analysis and discrete choice modelling exist. While academics stress the differences between the two, in practice there is often little difference. These techniques estimate the importance of product attributes indirectly, by creating alternative products according to an experimental design, and then using consumer responses to these alternatives (usually ratings of purchase likelihood or choices made between alternatives) to estimate importance. The results are often expressed in the form of a 'simulator' tool which allows clients to test alternative product configurations and pricing.
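A much-simplified sketch of the conjoint idea follows. For a small, balanced full-factorial design, a level's part-worth can be estimated as the mean rating of profiles containing that level minus the grand mean; real conjoint studies estimate part-worths by regression or choice modelling on larger designs. All profiles and numbers here are invented:

```python
from statistics import mean

# Hypothetical balanced full-factorial design: every combination of two
# attributes, with average purchase-likelihood ratings (1-10) per profile.
ratings = {
    ("small", "$2"): 7, ("small", "$3"): 5, ("small", "$4"): 3,
    ("large", "$2"): 9, ("large", "$3"): 7, ("large", "$4"): 5,
}

grand_mean = mean(ratings.values())

def part_worth(attr_index, level):
    """Mean rating of profiles containing the level, minus the grand mean.

    Valid as a part-worth estimate only because the design is balanced."""
    cell = [r for profile, r in ratings.items() if profile[attr_index] == level]
    return mean(cell) - grand_mean

size_worths = {s: part_worth(0, s) for s in ("small", "large")}
price_worths = {p: part_worth(1, p) for p in ("$2", "$3", "$4")}
print(size_worths, price_worths)
```

Summing the part-worths of any combination of levels (plus the grand mean) predicts that configuration's rating, which is exactly what a conjoint 'simulator' tool does for alternative product configurations and prices.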

Volumetric concept testing

Volumetric concept testing falls somewhere between traditional concept testing and pre-test market models (simulated test market models are similar but emphasize greater realism) in terms of complexity. The aim is to provide approximate sales-volume forecasts for the new concept prior to launch. Volumetric models incorporate variables beyond the concept-test survey itself, such as the distribution strategy.
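One common way to structure such a forecast is an ATAR-style model (awareness x trial x availability x repeat). The sketch below uses entirely hypothetical inputs and is not any vendor's actual methodology:

```python
# Minimal ATAR-style volumetric sketch; every input below is hypothetical.
target_households = 10_000_000   # households in the target market
awareness = 0.50                 # share expected to become aware of the product
trial_rate = 0.20                # share of aware households expected to try
distribution = 0.60              # effective retail availability
units_per_trier = 1.0            # units bought at trial
repeat_rate = 0.30               # share of triers who repurchase
repeat_units = 2.5               # average repeat units per repeating household

triers = target_households * awareness * trial_rate * distribution
volume = triers * units_per_trier + triers * repeat_rate * repeat_units

print(f"first-year volume estimate: {volume:,.0f} units")
```

The trial rate would come from the concept-test survey (e.g. calibrated purchase-intent scores), while awareness and distribution come from the planned marketing and distribution strategy, which is what distinguishes volumetric forecasting from a plain concept test.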

Examples of volumetric forecasting methodologies include 'Acupoll Foresight'[9] and Decision Analyst's 'Conceptor'.[10]

Some models (more properly referred to as 'pre-test market models' or 'simulated test markets')[11] gather additional data from a follow-up product-testing survey (especially for consumer packaged goods, where repeat purchase rates need to be estimated). They may also include an advertising-testing component that aims to assess advertising quality. Some, such as Decision Analyst's, include discrete choice models / conjoint analysis.


References

  1. Wind, Yoram (1984). New-Product Forecasting Models and Applications. Lexington Books. ISBN 978-0-669-04102-6.
  2. Schwartz, David (1987). Concept Testing: How to Test New Product Ideas Before You Go to Market (1st ed.). American Management Association. ISBN 978-0814459058.
  3. Lord, F. and Novick, M. R. (1968). Statistical Theories of Mental Test Scores. Addison-Wesley.
  4. Heise, D. R. (1969). Separating reliability and stability in test-retest correlation. American Sociological Review, 34, 93-101.
  5. Andrews, F. M. (1984). Construct validity and error components of survey measures: a structural modelling approach. Public Opinion Quarterly, 48, 409-442.
  6. Saris, W. E. and Gallhofer, I. N. (2014). Design, evaluation and analysis of questionnaires for survey research. Second Edition. Hoboken, Wiley.
  7. Thomas, Jerry (2016-01-11). "Concept Testing (And The "Uniqueness" Paradox)". Decision Analyst. Decision Analyst. Retrieved 21 April 2017.
  8. Thomas, Jerry (2016-01-11). "Concept Testing (And The "Uniqueness" Paradox)". Decision Analyst. Decision Analyst. Retrieved 21 April 2017.
  9. "ForeSIGHT™ Going-Year Volume Estimates". Acupoll. Archived from the original on 31 March 2017. Retrieved 21 April 2017.
  10. "Conceptor® Volumetric Forecasting". Decision Analyst. 2015-12-28. Retrieved 21 April 2017.
  11. Wind, Yoram (1984). New-Product Forecasting Models and Applications. Lexington Books. ISBN 978-0-669-04102-6.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.