Digital Forms of Assessment in Schools: Supporting the Processes to Improve Outcomes

  • C. Paul Newhouse
Living reference work entry

Abstract

This chapter discusses the critical roles digital technologies can play in improving assessment outcomes, and thus teaching, in schools. It argues that because teaching in schools is driven by summative assessment, meeting twenty-first-century learning demands requires refocusing that assessment toward measuring deep conceptual understanding and authentic performance. To achieve this, digital technologies can support the full range of assessment processes, from formulating and implementing tasks through to judging performance, providing feedback, and ensuring consistency of outcomes. They can support appropriate approaches to these processes, including capturing performance in digital form, making holistic relative judgments based on a range of evidence, and embedding assessment in learning. Further, digital technologies can be used to create and collate portfolios of evidence, including from e-exams, for the purposes of learning analytics. Components of these alternative approaches to summative assessment are illustrated from more than eight years of research conducted in Western Australia by the Centre for Schooling and Learning Technologies (CSaLT) at Edith Cowan University. The research focused on high-stakes senior secondary assessment in courses with substantial outcomes involving some form of practical performance, such as Engineering Studies, Physical Education Studies, Applied Information Technology, Italian Studies, Visual Arts, and Design. It has shown how digital technologies may be used to support a range of forms of assessment, including types of “exams” and e-portfolios, to measure understanding and performance through analytic and holistic relative judgments, providing both quantitative and qualitative feedback to students and teachers.
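The “holistic relative judgments” referred to above are typically operationalised as pairwise comparative judgment: assessors repeatedly choose the better of two pieces of work, and a statistical model converts those choices into a relative score per work. The sketch below is a minimal illustration of that idea using a Bradley-Terry model fitted with the classic MM (Zermelo) iteration; it is not the chapter's own tooling, and the `bradley_terry` function and the sample judgments are hypothetical.

```python
# A minimal Bradley-Terry fit via the classic MM (Zermelo) iteration.
# Input: pairwise holistic judgments as (winner, loser) tuples.
# Output: a normalised relative "quality" score per piece of work.
from collections import defaultdict

def bradley_terry(comparisons, iterations=100):
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)    # total wins per item
    games = defaultdict(int)   # number of comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1

    strength = {i: 1.0 for i in items}  # uniform initial strengths
    for _ in range(iterations):
        new = {}
        for i in items:
            # MM update: strength_i = wins_i / sum_j n_ij / (strength_i + strength_j)
            denom = sum(
                games[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and games[frozenset((i, j))]
            )
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {i: s / total for i, s in new.items()}  # normalise to sum to 1
    return strength

# Hypothetical example: four student portfolios, judged in pairs.
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
             ("A", "D"), ("B", "D"), ("A", "B"), ("B", "C")]
print(bradley_terry(judgments))
```

In large-scale use, pairs are usually selected adaptively so far fewer judgments are needed per piece of work; the uniform treatment above is kept deliberately simple for illustration.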

Keywords

e-assessment · Validity · Reliability · Holistic judgment · Digital portfolio · Computer-based exam

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Centre for Schooling and Learning Technologies (CSaLT), School of Education, Edith Cowan University, Perth, Australia
