Establishing the Validity of Rating Scale Instrumentation in Learning Environment Investigations

  • Chapter

Part of the book series: Advances in Learning Environments Research (ALER, volume 2)

Abstract

Rating scale instruments have been widely used in learning environment research for many decades. Arguments for their sustained use require evidence commensurate with contemporary validity theory. The multiple-type conception of validity (e.g. content, criterion and construct validity) that persisted until the 1980s was subsumed into a unified view by Messick, who re-conceptualised the types of validity as aspects of evidence contributing to an overall judgment about construct validity. A validity argument therefore relies on multiple forms of evidence: the content, substantive, structural, generalisability, external and consequential aspects of validity evidence. The theoretical framework for the current study comprised these aspects of validity evidence, with the addition of interpretability. The utility of this framework as a tool for examining validity issues in rating scale development and application was tested: an investigation into student engagement in classroom learning was examined to identify and assess aspects of validity evidence. The engagement investigation utilised a researcher-completed rating scale instrument comprising eleven items and a six-point scoring model. The Rasch Rating Scale model was used to scale data from 195 Western Australian secondary school students. Examples of most aspects of validity evidence were found, particularly in the statistical estimations and graphical displays generated by the Rasch model analysis; these are explained in relation to the unified theory of validity. The study is significant in that it exemplifies contemporary validity theory in conjunction with modern measurement theory, and it will be of interest to learning environment researchers using, or considering using, rating scale instruments.
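For readers unfamiliar with the Rasch Rating Scale model (Andrich, 1978b) used in the study, the sketch below illustrates how category probabilities are computed for a polytomous item such as the six-point scoring model described above. This is a minimal illustrative implementation, not the chapter's RUMM analysis; the function name and the example parameter values are the author's of this sketch, not the study's estimates.

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Category probabilities under the Andrich (1978) Rating Scale Model.

    theta: person location (ability/engagement) in logits
    delta: item location (difficulty) in logits
    taus:  threshold parameters tau_1..tau_m shared across items
           (tau_0 = 0 by convention; m thresholds give m + 1 categories)

    Returns a list of probabilities for scores 0..m that sums to 1.
    """
    m = len(taus)
    # Log-numerator for category k is the cumulative sum of
    # (theta - delta - tau_j) for j = 1..k; category 0 has log-numerator 0.
    log_numerators = [0.0]
    cumulative = 0.0
    for k in range(1, m + 1):
        cumulative += theta - delta - taus[k - 1]
        log_numerators.append(cumulative)
    denominator = sum(math.exp(x) for x in log_numerators)
    return [math.exp(x) / denominator for x in log_numerators]
```

With five thresholds (a six-point scale, scores 0-5), `rating_scale_probs(0.0, 0.0, [-2.0, -1.0, 0.0, 1.0, 2.0])` returns six probabilities summing to one; because the person sits exactly at the item location and the thresholds are symmetric, the distribution over categories is symmetric as well.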


References

  • Andrich, D. (1978a). Application of a psychometric rating model to ordered categories which are scored with successive integers. Applied Psychological Measurement, 2(4), 581–594.

  • Andrich, D. (1978b). A rating formulation for ordered response categories. Psychometrika, 43(4), 561–573.

  • Andrich, D. (1978c). Scaling attitude items constructed and scored in the Likert tradition. Educational and Psychological Measurement, 38(3), 665–680.

  • Andrich, D., Sheridan, B., Lyne, A., & Luo, G. (2003). RUMM: A Windows-based item analysis program employing Rasch unidimensional measurement models. Perth: Murdoch University.

  • Angoff, W. H. (1988). Validity: An evolving concept. In H. Wainer & H. Braun (Eds.), Test validity (pp. 9–13). Hillsdale, NJ: Lawrence Erlbaum.

  • Bronfenbrenner, U., & Ceci, S. J. (1994). Nature-nurture reconceptualised in developmental perspective: A bioecological model. Psychological Review, 101(4), 568–586.

  • Caracelli, V. J., & Greene, J. C. (1997). Crafting mixed-method designs. In J. C. Greene & V. J. Caracelli (Eds.), New directions for evaluation (pp. 19–33). San Francisco: Jossey-Bass Publishers.

  • Cavanagh, R. F., & Kennish, P. (2009). Quantifying student engagement in classroom learning: Student learning capabilities and the expectations of their learning. Paper submitted for presentation at the 2009 annual conference of the Australian Association for Research in Education, Canberra.

  • Cavanagh, R. F., Kennish, P., & Sturgess, K. (2008a). Development of theoretical frameworks to inform measurement of secondary school student engagement with learning. Paper presented at the 2008 annual conference of the Australian Association for Research in Education, Brisbane.

  • Cavanagh, R. F., Kennish, P., & Sturgess, K. (2008b). Ordinality and intervality in pilot study data from instruments designed to measure student engagement in classroom learning. Paper presented at the 2008 annual conference of the Australian Association for Research in Education, Brisbane.

  • Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

  • Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row.

  • Fullarton, S. (2002). Student engagement with school: Individual and school-level influences. Camberwell, Victoria: ACER.

  • Kane, M. T. (2001). Current concerns in validity theory. Journal of Educational Measurement, 38(4), 319–342.

  • Kennish, P., & Cavanagh, R. F. (2009). How engaged are they? An inductive analysis of country student views of their engagement in classroom learning. Paper submitted for presentation at the 2009 annual conference of the Australian Association for Research in Education, Canberra.

  • Marjoribanks, K. (2002a). Family background, individual and environmental influences on adolescents’ aspirations. Educational Studies, 28(1), 33–46.

  • Marjoribanks, K. (2002b). Environmental and individual influences on Australian students’ likelihood of staying in school. Journal of Genetic Psychology, 163(3), 368–381.

  • Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.

  • Messick, S. (1998). Test validity: A matter of consequences. Social Indicators Research, 45(4), 35–44.

  • Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Chicago: MESA Press.

  • Shernoff, D. J., Csikszentmihalyi, M., Schneider, B., & Shernoff, E. S. (2003). Student engagement in high school classrooms from the perspective of flow theory. School Psychology Quarterly, 18(2), 158–176.

  • Wolfe, E. W., & Smith, E. V. (2007a). Instrument development tools and activities for measure validation using Rasch models: Part I – instrument development tools. Journal of Applied Measurement, 8(1), 97–123.

  • Wolfe, E. W., & Smith, E. V. (2007b). Instrument development tools and activities for measure validation using Rasch models: Part II – validation activities. Journal of Applied Measurement, 8(2), 204–234.

Copyright information

© 2011 Sense Publishers

Cite this chapter

Cavanagh, R.F. (2011). Establishing the Validity of Rating Scale Instrumentation in Learning Environment Investigations. In: Cavanagh, R.F., Waugh, R.F. (eds) Applications of Rasch Measurement in Learning Environments Research. Advances in Learning Environments Research, vol 2. SensePublishers, Rotterdam. https://doi.org/10.1007/978-94-6091-493-5_5
