
Part of the book series: Advances in Learning Environments Research (ALER, volume 2)


Abstract

The Astronaut Invented Spelling Test (AIST) was designed as an assessment tool for monitoring the literacy progress of students in the early years of schooling. The test uses an invented spelling format, in which children are encouraged to try to write words that they have probably not been taught how to spell. In the initial development phase of the test, AIST spelling responses were scored to give credit to all reasonable representations of all the phonemes in the test words, with bonus points awarded for use of specific conventional patterns of English orthography. A Rasch analysis was subsequently used to explore the psychometric properties of the test, based on data from two large samples of Australian schoolchildren in their first four years of schooling (N = 654 and N = 533). Of the original 48 AIST items, 28 were found to provide an excellent fit to the Rasch measurement model. These 28 items, ordered from easy to hard, were calibrated in terms of item difficulty on the same scale as the student measures. The scales for both samples were unidimensional and reliable, with Person Separation Indices of 0.96. The order of item difficulty that emerged from the Rasch analysis provided strong evidence about the early development of phonemic and orthographic awareness. By combining the item difficulty scale and the student measures, the Rasch analysis of the AIST provides early literacy teachers with a classroom-based assessment tool that not only is psychometrically robust for screening purposes but also supports the choice of specific instructional targets in the classroom.
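For context, the analysis described above rests on the Rasch model, which places person ability and item difficulty on a single logit scale. What follows is a minimal sketch of the model in its basic dichotomous form; the chapter's scoring scheme (phoneme credit plus orthographic bonus points) is polytomous, so the analysis would in practice call for an extension such as the partial credit model, and the notation below is illustrative rather than the authors' exact formulation.

\[
P(X_{ni} = 1 \mid \beta_n, \delta_i) \;=\; \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}
\]

Here \(X_{ni}\) is the score of person \(n\) on item \(i\), \(\beta_n\) the person's ability, and \(\delta_i\) the item's difficulty. Because \(\beta\) and \(\delta\) share one logit scale, each of the 28 calibrated items can be read directly against the student measures, which is what allows the easy-to-hard item ordering to double as a guide to instructional targets.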




Copyright information

© 2011 Sense Publishers


Cite this chapter

Neilson, R., Waugh, R.F., Konza, D. (2011). A Rasch Analysis of the Astronaut Invented Spelling Test. In: Cavanagh, R.F., Waugh, R.F. (eds) Applications of Rasch Measurement in Learning Environments Research. Advances in Learning Environments Research, vol 2. SensePublishers, Rotterdam. https://doi.org/10.1007/978-94-6091-493-5_3
