
International Conference on Theory and Application of Diagrams

Diagrams 2014: Diagrammatic Representation and Inference, pp. 71–77

Item Differential in Computer Based and Paper Based Versions of a High Stakes Tertiary Entrance Test: Diagrams and the Problem of Annotation

  • Brad Jackel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8578)

Abstract

This paper presents results from a tertiary entrance test that was delivered to two groups of candidates, one as a paper-based test and the other as a computer-based test. Item-level differential reveals a pattern that appears related to item type: questions based on diagrammatic stimulus show increased difficulty when delivered on computer. This differential in performance was not present in other sections of the test and appears unlikely to be explained by demographic differences between the groups. It is suggested that the differential is due to candidates being unable to freely annotate the stimulus when it is presented on a computer screen. More work is needed on the role of annotation as a problem-solving strategy in high-stakes testing, particularly with certain kinds of stimulus, such as diagrams.

Keywords

Diagrams · computer-based assessment · paper-based assessment · high-stakes tests · standardized tests · annotation · diagrammatic assessment · NAPLAN



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Brad Jackel
  1. Australian Council for Educational Research (ACER), Camberwell, Australia
