Quality & Quantity, Volume 52, Issue 3, pp 1227–1239

Comparison of four common data collection techniques to elicit preferences

  • Pasquale Anselmi
  • Luigi Fabbris
  • Maria Cristiana Martini
  • Egidio Robusto

Abstract

We compare four common data collection techniques for eliciting preferences: the rating of items, the ranking of items, the partitioning of a given amount of points among items, and a reduced form of the technique of comparing items in pairs. University students were randomly assigned a questionnaire employing one of the four techniques; all questionnaires contained the same set of items. The data collected with the four techniques were converted into analogous preference matrices and analyzed with the Bradley–Terry model. The techniques were evaluated with respect to model fit, the precision and reliability of the item estimates, and the consistency among the item sequences they produced. The rating, ranking, and budget partitioning techniques performed similarly, whereas the reduced paired comparisons technique performed slightly worse. The item sequence produced by the rating technique was very close to the sequence obtained by averaging over the other three techniques.
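
As a rough illustration of the analysis step described above, the following is a minimal sketch of fitting a Bradley–Terry model to an aggregated pairwise preference (win-count) matrix using the classical MM (Zermelo) iteration. The function fit_bradley_terry and the win counts below are hypothetical placeholders for illustration only; they are not the authors' code or data.

import numpy as np

def fit_bradley_terry(wins, n_iter=1000, tol=1e-8):
    # wins[i, j] = number of times item i was preferred to item j (diagonal 0).
    # Returns worth parameters pi, normalized to sum to 1, such that
    # P(i preferred to j) = pi[i] / (pi[i] + pi[j]).
    n = wins.shape[0]
    comparisons = wins + wins.T              # n_ij: comparisons between items i and j
    total_wins = wins.sum(axis=1)            # W_i: total wins of item i
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        denom = comparisons / (pi[:, None] + pi[None, :])
        np.fill_diagonal(denom, 0.0)         # an item is never compared with itself
        new_pi = total_wins / denom.sum(axis=1)   # MM (Zermelo) update
        new_pi /= new_pi.sum()                    # fix the scale (identifiability)
        if np.max(np.abs(new_pi - pi)) < tol:
            return new_pi
        pi = new_pi
    return pi

# Hypothetical aggregated win counts for four items (synthetic, for illustration).
wins = np.array([[ 0, 30, 25, 40],
                 [20,  0, 22, 35],
                 [25, 28,  0, 30],
                 [10, 15, 20,  0]])
worths = fit_bradley_terry(wins)
order = np.argsort(-worths)                  # items from most to least preferred
print(worths.round(3), order)

In the design the abstract describes, analogous win counts could be obtained from ratings, rankings, or point allocations by recording, for every pair of items, which one a respondent scored higher (with some explicit rule for ties); the same fitting routine would then apply to all four techniques.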

Keywords

Rating, Ranking, Budget partitioning, Paired comparisons, Bradley–Terry model

References

  1. Agresti, A.: An Introduction to Categorical Data Analysis, 2nd edn. Wiley, Hoboken (2007)
  2. Aloysius, J.A., Davis, F.D., Wilson, D.D., Taylor, A.R., Kottemann, J.E.: User acceptance of multi-criteria decision support systems: the impact of preference elicitation techniques. Eur. J. Oper. Res. 169, 273–285 (2006)
  3. Alwin, D.F., Krosnick, J.A.: The measurement of values in surveys: a comparison of ratings and rankings. Public Opin. Q. 49, 535–552 (1985)
  4. Andrich, D.: A rating formulation for ordered response categories. Psychometrika 43, 561–573 (1978)
  5. Bech, M., Gyrd-Hansen, D., Kjær, T., Lauridsen, J.T., Sørensen, J.: Graded pairs comparison—does strength of preference matter? Analysis of preferences for specialised nurse home visits for pain management. Health Econ. 16, 513–529 (2007)
  6. Bollen, K.A.: Structural Equations with Latent Variables. Wiley, New York (1989)
  7. Bradburn, N.M., Sudman, S., Wansink, B.: Asking Questions: The Definitive Guide to Questionnaire Design—For Market Research, Political Polls, and Social and Health Questionnaires, Revised edn. Jossey-Bass, San Francisco (2004)
  8. Bradley, R.A., Terry, M.E.: Rank analysis of incomplete block designs: the method of paired comparisons. Biometrika 39, 324–345 (1952)
  9. Coombs, C.H.: A Theory of Data. Wiley, Oxford (1964)
  10. David, H.A.: The Method of Paired Comparisons, 2nd edn. Chapman and Hall, London (1988)
  11. Elrod, T., Louviere, J.J., Davey, K.S.: An empirical comparison of ratings-based and choice-based conjoint models. J. Mark. Res. 29, 368–377 (1992)
  12. Fabbris, L.: Measurement scales for scoring or ranking sets of interrelated items. In: Davino, C., Fabbris, L. (eds.) Survey Data Collection and Integration, pp. 21–44. Springer, Heidelberg (2013)
  13. Feather, N.T.: The measurement of values: effects of different assessment procedures. Aust. J. Psychol. 25, 221–231 (1973)
  14. Fienberg, S.E., Larntz, K.: Log linear representation for paired and multiple comparisons models. Biometrika 63, 245–254 (1976)
  15. Fisher Jr., W.P.: Reliability, separation, strata statistics. Rasch Meas. Trans. 6, 238 (1992)
  16. Guttman, L.: An approach for quantifying paired-comparisons and rank order. Ann. Math. Stat. 17, 143–163 (1946)
  17. Hauser, J.R., Rao, V.: Conjoint analysis, related modeling, and applications. In: Wind, Y., Green, P.E. (eds.) Marketing Research and Modeling: Progress and Prospects: A Tribute to Paul E. Green, pp. 141–158. Springer, New York (2004)
  18. Huber, P.J.: Pairwise comparison and ranking: optimum properties of the row sum procedure. Ann. Math. Stat. 34, 511–520 (1963)
  19. Huber, J., Wittink, D.R., Fiedler, J.A., Miller, R.: The effectiveness of alternative preference elicitation procedures in predicting choice. J. Mark. Res. 30, 105–114 (1993)
  20. Jech, T.: A quantitative theory of preferences: some results on transition functions. Soc. Choice Welf. 6, 301–314 (1989)
  21. Krosnick, J.A., Alwin, D.F.: A test of the form-resistant correlation hypothesis: ratings, rankings, and the measurement of values. Public Opin. Q. 52, 526–538 (1988)
  22. Linacre, J.M.: What do infit and outfit, mean-square and standardized mean? Rasch Meas. Trans. 16, 878 (2002)
  23. Linacre, J.M.: Facets Computer Program for Many-Facet Rasch Measurement, Version 3.70.0. Winsteps.com, Beaverton (2012)
  24. Louviere, J.J., Hensher, D.A., Swait, J.D.: Stated Choice Methods: Analysis and Application. Cambridge University Press, Cambridge (2003)
  25. Luce, R.D.: Individual Choice Behavior: A Theoretical Analysis. Wiley, New York (1959)
  26. Maio, G.R., Roese, N.J., Seligman, C., Katz, A.: Rankings, ratings, and the measurement of values: evidence for the superior validity of ratings. Basic Appl. Soc. Psychol. 18, 171–181 (1996)
  27. McFadden, D.: The choice theory approach to market research. Mark. Sci. 5, 275–297 (1986)
  28. Smith Jr., E.V.: Evidence for the reliability of measures and validity of measure interpretation: a Rasch measurement perspective. J. Appl. Meas. 2, 281–311 (2001)
  29. Takane, Y.: Maximum likelihood additivity analysis. Psychometrika 17, 225–240 (1982)
  30. Takane, Y.: Analysis of covariance structures and probabilistic binary choice data. In: de Soete, G., Feger, H., Klauer, K.C. (eds.) New Developments in Psychological Choice Modeling, pp. 139–160. North Holland, Amsterdam (1989)
  31. Thurstone, L.L.: A law of comparative judgment. Psychol. Rev. 34, 281–299 (1927)
  32. Torgerson, W.S.: Theory and Methods of Scaling. Wiley, New York (1958)
  33. Tversky, A., Russo, J.E.: Substitutability and similarity in binary choices. J. Math. Psychol. 6, 1–12 (1969)

Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

  • Pasquale Anselmi (1)
  • Luigi Fabbris (2)
  • Maria Cristiana Martini (3)
  • Egidio Robusto (1)
  1. Department FISPPA, University of Padua, Padua, Italy
  2. Department of Statistics, University of Padua, Padua, Italy
  3. Department of Communication and Economics, University of Modena and Reggio Emilia, Reggio Emilia, Italy
