
Comparative Pairs Marking Supports Authentic Assessment of Practical Performance Within Constructivist Learning Environments

Chapter in Applications of Rasch Measurement in Learning Environments Research

Part of the book series: Advances in Learning Environments Research (ALER, volume 2)

Abstract

This chapter reports on the first year of an applied research project that uses new digital technologies to address the challenge of embedding authentic, complex performance as an integral part of summative assessment in senior secondary courses. Specifically, it reports on approaches to marking authentic assessment tasks that meet the requirements of external examination. Online marking tools, built on relational databases, were developed to support two marking methods: analytical rubric-based marking, and paired comparisons generating scores based on Rasch modelling. The research is notable in seeking simultaneously to enhance assessment and marking practices in examination contexts and, in so doing, to contribute to the advancement of pedagogical practices associated with constructivist learning environments. The chapter will be relevant to courses and subjects that incorporate a significant performance dimension.
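To make the paired comparisons method concrete, the sketch below shows, under simplifying assumptions, how judges' better/worse decisions over pairs of student portfolios can be converted into scores on a logit scale using a Bradley-Terry style pairwise Rasch model fitted by maximum likelihood. The function name, learning rate, and sample data are hypothetical illustrations, not taken from the chapter's actual marking tools.

```python
# A minimal sketch of comparative-pairs scoring, assuming a simple
# Bradley-Terry / pairwise Rasch formulation: each portfolio i has a
# latent quality theta_i, and P(i judged better than j) is the logistic
# function of (theta_i - theta_j). All names and data are illustrative.

import math

def fit_pairwise_rasch(pairs, n_items, lr=0.1, n_iter=2000):
    """Estimate latent qualities theta from (winner, loser) index pairs
    by gradient ascent on the Bradley-Terry log-likelihood."""
    theta = [0.0] * n_items
    for _ in range(n_iter):
        grad = [0.0] * n_items
        for winner, loser in pairs:
            # Modelled probability that the observed winner beats the loser.
            p = 1.0 / (1.0 + math.exp(-(theta[winner] - theta[loser])))
            grad[winner] += 1.0 - p   # d(log-likelihood)/d(theta_winner)
            grad[loser] -= 1.0 - p    # d(log-likelihood)/d(theta_loser)
        theta = [t + lr * g for t, g in zip(theta, grad)]
        # The logit scale is only identified up to a constant, so anchor
        # the estimates by centring them on zero after each step.
        mean = sum(theta) / n_items
        theta = [t - mean for t in theta]
    return theta

# Illustrative judging data: each tuple records which of two portfolios
# a judge preferred, e.g. (0, 1) means portfolio 0 beat portfolio 1.
pairs = [(0, 1), (0, 2), (1, 2), (3, 0), (1, 3), (3, 2), (2, 1)]
for i, t in enumerate(fit_pairwise_rasch(pairs, n_items=4)):
    print(f"portfolio {i}: theta = {t:+.2f}")
```

In practice the online tools described in the chapter would pool many such judgements per portfolio across multiple judges; the sketch shows only the core estimation step that turns pairwise decisions into interval-scale scores.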

Copyright information

© 2011 Sense Publishers

Cite this chapter

Newhouse, P. (2011). Comparative Pairs Marking Supports Authentic Assessment of Practical Performance Within Constructivist Learning Environments. In: Cavanagh, R.F., & Waugh, R.F. (Eds.), Applications of Rasch Measurement in Learning Environments Research. Advances in Learning Environments Research, vol. 2. Sense Publishers, Rotterdam. https://doi.org/10.1007/978-94-6091-493-5_7
