Things I Learned from Ben

Chapter
Part of the Springer Series in Measurement Science and Technology book series (SSMST)

Abstract

In this chapter I briefly describe four things I learned from Ben Wright.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Graduate School of Education, University of California, Berkeley, Berkeley, USA
