Abstract
This chapter reports on the first year of an applied research project that utilises new digital technologies to address the challenge of embedding authentic, complex performance as an integral part of summative assessment in senior secondary courses. Specifically, it reports on approaches to marking authentic assessment tasks that meet the requirements of external examination. Online marking tools built on relational databases were developed to support both an analytical rubric-based marking method and a paired comparisons method that generates scores through Rasch modelling. The research is notable in seeking simultaneously to enhance assessment and marking practices in examination contexts and, in so doing, to contribute to the advancement of pedagogical practices associated with constructivist learning environments. The chapter will be relevant to courses and subjects that incorporate a significant performance dimension.
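At its core, the paired comparisons method mentioned above reduces many binary "which portfolio is better?" judgements to a latent score for each piece of work. As a minimal sketch of that idea only, assuming hypothetical judgement data and a simple gradient-ascent fit of the Bradley-Terry/Rasch pairwise model (not the project's actual marking tool), the following Python illustrates how judge decisions can become relative scores on a logit scale:

    # Illustrative sketch: fit latent scores to items from pairwise judgements
    # using the Bradley-Terry/Rasch pairwise model. Hypothetical data; not the
    # marking tool described in the chapter.
    import math

    def fit_pairwise(judgements, n_items, iters=500, lr=0.1):
        """Estimate a latent quality score for each item.

        judgements: list of (winner, loser) index pairs from comparative judging.
        Returns scores on a logit scale, centred on zero (scores are relative).
        """
        b = [0.0] * n_items
        for _ in range(iters):
            grad = [0.0] * n_items
            for w, l in judgements:
                # P(winner beats loser) under the current score estimates
                p = 1.0 / (1.0 + math.exp(-(b[w] - b[l])))
                grad[w] += 1.0 - p   # push the winner up by the surprise of the win
                grad[l] -= 1.0 - p   # push the loser down by the same amount
            b = [bi + lr * gi for bi, gi in zip(b, grad)]
            mean = sum(b) / n_items  # anchor the scale: only differences matter
            b = [bi - mean for bi in b]
        return b

    # Hypothetical example: three portfolios, six judge decisions.
    scores = fit_pairwise([(0, 1), (0, 2), (1, 2), (2, 0), (0, 1), (1, 2)], 3)
    print([round(s, 2) for s in scores])  # portfolio 0 highest, portfolio 2 lowest

In practice the judgement data would come from many judges each seeing many pairs, and the fitted scores (with their standard errors) would be checked for judge consistency before being reported as marks.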
About this chapter
Cite this chapter
Newhouse, P. (2011). Comparative Pairs Marking Supports Authentic Assessment of Practical Performance Within Constructivist Learning Environments. In: Cavanagh, R.F., Waugh, R.F. (eds) Applications of Rasch Measurement in Learning Environments Research. Advances in Learning Environments Research, vol 2. SensePublishers, Rotterdam. https://doi.org/10.1007/978-94-6091-493-5_7
DOI: https://doi.org/10.1007/978-94-6091-493-5_7
Publisher Name: SensePublishers, Rotterdam
Online ISBN: 978-94-6091-493-5