
Ben Wright: A Multi-facet Analysis

  • Mary E. Lunz
  • John A. Stahl
Chapter
Part of the Springer Series in Measurement Science and Technology book series (SSMST)

Abstract

Dr. Benjamin D. Wright believed and taught that to understand the ways of the world, it is necessary to measure all relevant aspects on the same scale. When measurements are on the same scale, accurate comparisons and proper ordering are possible. It was in this light that the multi-facet model was developed. This chapter is a “not so scientific” study of the multi-faceted aspects of Dr. Benjamin D. Wright himself. Three attributes were identified for the purposes of this study: (1) Contributions to Objective Measurement, (2) Attributes as a Teacher and Professor, and (3) Personal Attributes. Data were collected and analyzed using the multi-facet model, yielding a complex pattern of results for a multi-faceted person. The real story, however, is the development of the multi-facet model. We are grateful to Ben Wright and Mike Linacre for making this tool available to measurement professionals.
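
For readers unfamiliar with the model, a minimal sketch of its standard form (following Linacre, 1989; the notation here is ours, not the chapter's) may help. In a many-facet Rasch analysis with persons, items, and judges rating on a common scale, the log-odds of a performance receiving step k rather than step k − 1 of the rating scale is modeled as an additive combination of the facets:

\[ \log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k \]

where B_n is the measure of person n, D_i the difficulty of item i, C_j the severity of judge j, and F_k the difficulty of rating step k. Because all parameters are estimated in the same logit metric, persons, items, and judges can be compared and ordered directly on one scale, which is the property the abstract describes.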

References

  1. Fischer, G. H. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 37, 359–374.
  2. Forsyth, R., Sarsangjan, V., & Gilmer, J. (1981). Some empirical results related to the robustness of the Rasch model. Applied Psychological Measurement, 5, 175–186.
  3. Linacre, J. M. (1988). FACETS, a computer program for the analysis of multi-faceted data. Chicago: MESA Press.
  4. Linacre, J. M. (1989). Multi-faceted measurement. Chicago: MESA Press.
  5. Lunz, M. E., Wright, B. D., Stahl, J. A., & Linacre, J. M. (1989). Equating practical examinations. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.
  6. Lunz, M. E., & Stahl, J. A. (1990a). A comparison of intra- and interjudge decision consistency using analytic and holistic scoring criteria. Journal of Allied Health, 19(2), 173–179.
  7. Lunz, M. E., & Stahl, J. A. (1990b). Judge consistency and severity across grading periods. Evaluation and the Health Professions, 13(4), 425–444.
  8. Lunz, M. E., Stahl, J. A., & Wright, B. D. (1990). Criterion standards from benchmark performances for judge intermediated examinations. Paper presented at the annual meeting of the American Educational Research Association, Boston.
  9. Lunz, M. E., Stahl, J. A., & Wright, B. D. (1991). The invariance of judge severity calibrations. Paper presented at the annual meeting of the American Educational Research Association, Chicago.
  10. Lunz, M. E., & Stahl, J. A. (1992). New ways of thinking about reliability. Professional Education Researcher Quarterly, 13(4), 16–18.
  11. Lunz, M. E., & Stahl, J. A. (1993a). Impact of examiners on candidate scores: An introduction to the use of multifaceted Rasch analysis for oral examinations. Teaching and Learning in Medicine, 5(3), 174–181.
  12. Lunz, M. E., & Stahl, J. A. (1993b). The effect of rater severity on person ability measures: A Rasch model analysis. American Journal of Occupational Therapy, 47(4), 311–318.
  13. Lunz, M. E., Stahl, J. A., & Wright, B. D. (1994). Interjudge reliability and decision reproducibility. Educational and Psychological Measurement, 54(4), 913–925.
  14. Lunz, M. E., Stahl, J. A., & Wright, B. D. (1996). The invariance of judge severity calibrations. In G. Engelhard & M. Wilson (Eds.), Objective measurement: Theory into practice (Vol. 3, pp. 99–112). Norwood, NJ: Ablex.
  15. Lunz, M. E. (2000). Setting standards on performance examinations. In M. Wilson, G. Engelhard, & K. Draney (Eds.), Objective measurement: Theory into practice (Vol. 5, pp. 181–202). Stamford, CT: Ablex.
  16. MulQueen, C., & Stahl, J. A. (1997). Multifaceted Rasch analysis of 360° performance assessment data. Paper presented at the annual meeting of the Society of Industrial and Occupational Psychologists, St. Louis.
  17. Myford, C. M., & Wolfe, E. W. (2004). Detecting and measuring rater effects using many-facet Rasch measurement: Part II. Journal of Applied Measurement, 5(2), 189–227.
  18. Myford, C. M., & Engelhard, G., Jr. (2002). Evaluating the psychometric quality of the National Board for Professional Teaching Standards Early Childhood/Generalist assessment system. Journal of Personnel Evaluation in Education, 15(4), 253–285.
  19. Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Chicago, IL: University of Chicago Press.
  20. Stahl, J. A., Lunz, M. E., & Wright, B. D. (1991). Equating examinations that include judges. Paper presented at the annual meeting of the American Educational Research Association, Chicago.
  21. Stahl, J. A., & Lunz, M. E. (1992). Impact of additional person performance on person, judge and item calibrations. In M. Wilson (Ed.), Objective measurement: Theory into practice (Vol. 2, pp. 189–206). Norwood, NJ: Ablex.
  22. Stahl, J. A., & Lunz, M. E. (1993). A comparison of generalizability theory and multi-faceted Rasch measurement. Paper presented at the annual meeting of the American Educational Research Association, Atlanta.
  23. Stahl, J. A., & Lunz, M. E. (1996). Judge performance reports: Media and message. In G. Engelhard & M. Wilson (Eds.), Objective measurement: Theory into practice (Vol. 3, pp. 113–125). Norwood, NJ: Ablex.
  24. Wolfe, E. W., Moulder, B. M., & Myford, C. M. (2001). Methods for detecting differential rater functioning over time (DRIFT). Journal of Applied Measurement, 2(3), 256–280.
  25. Wright, B. D. (1968). Sample-free test calibration and person measurement. In Proceedings of the 1967 invitational conference on testing problems (pp. 85–101). Princeton, NJ: Educational Testing Service. Retrieved from http://www.rasch.org/memo1.htm.
  26. Wright, B. D., & Stone, M. H. (1979). Best test design. Chicago: MESA Press.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Pearson VUE, Chicago, USA