Research in Higher Education, Volume 58, Issue 8, pp 922–933

Using Rasch Analysis to Inform Rating Scale Development

Short Paper/Note

Abstract

The use of surveys, questionnaires, and rating scales to measure important outcomes in higher education is pervasive, but reliability and validity evidence for these instruments is often based on problematic Classical Test Theory approaches. Rasch Analysis, which is grounded in Item Response Theory, provides a stronger alternative for examining the psychometric quality of rating scales and informing scale improvements. This paper outlines a six-step process for using Rasch Analysis to review the psychometric properties of a rating scale. The Partial Credit Model and the Andrich Rating Scale Model are described in terms of the psychometric information (i.e., reliability, validity, and item difficulty) and diagnostic indices they generate. The approach is then illustrated with authentic data from a university-wide student evaluation of teaching.
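To make the Andrich Rating Scale Model mentioned above concrete, the sketch below computes category response probabilities for a single polytomous Likert-type item. This is an illustrative helper under standard RSM assumptions (person ability θ, item difficulty δ, and threshold parameters τ shared across items), not code from the paper itself.

```python
import math

def rsm_category_probs(theta, delta, taus):
    """Category probabilities for one item under the Andrich Rating Scale Model.

    theta : person ability in logits
    delta : item difficulty in logits
    taus  : threshold parameters tau_1..tau_M in logits (shared across items)

    Returns a list of probabilities for response categories 0..M.
    """
    # Cumulative logit sums: psi_0 = 0, psi_k = sum_{j=1..k} (theta - delta - tau_j)
    psi = [0.0]
    for tau in taus:
        psi.append(psi[-1] + (theta - delta - tau))
    # Normalize exp(psi_k) over all categories so probabilities sum to 1
    expvals = [math.exp(p) for p in psi]
    total = sum(expvals)
    return [e / total for e in expvals]

# Example: a 5-point Likert-type item with four ordered thresholds
probs = rsm_category_probs(theta=0.5, delta=0.0, taus=[-1.5, -0.5, 0.5, 1.5])
```

When a person's ability equals the item difficulty and the thresholds are symmetric about zero, the category probabilities are symmetric as well, which is one quick sanity check on an RSM implementation.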

Keywords

Higher education · Rasch Analysis · Likert-type scale · Partial Credit Model · Rating Scale Model · Scale development


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. UB Curriculum, State University of New York at Buffalo, Buffalo, USA
