Shift Error Detection in Standardized Exams

(Extended Abstract)
  • Steven Skiena
  • Pavel Sumazin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1848)

Abstract

Computer-graded multiple-choice examinations are a familiar and dreaded part of most students' lives. Many test takers are particularly fearful of form-filling shift errors, where absent-mindedly marking the answer to (say) question 32 in position 31 causes a long run of answers to be successively displaced. Test-taking strategies in which students answer questions out of sequence (such as answering easy questions first) seem particularly likely to cause unrecognized shift errors. Such errors can result in many correct answers being marked wrong, significantly lowering the exam taker's score.

In this paper, we consider whether these shift errors can be accurately recognized and corrected. Our results suggest that students are right to fear such errors, and that a non-trivial fraction of multiple-choice exams appear to contain significant shift errors. In particular, we found evidence that 1%–2% of multiple-choice exam papers at SUNY Stony Brook contain likely shift errors, each causing the loss of about 10% of the student's grade. Given the importance of standardized examinations, clerical mistakes should not be allowed to have such an impact on the student's score. If our observed shift error rate holds across the millions of examinations given annually, this is a serious yet unstudied problem. Indeed, based on the preliminary results in this paper, we have begun working with The College Board to study the prevalence of uncorrected shift errors in SAT examinations.
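
The extended abstract does not reproduce the detection algorithm, but the underlying idea is easy to sketch. The Python below is a minimal toy illustration, not the authors' method: it assumes the simplest failure mode (a single forward shift, where every bubble from some position onward holds the answer intended for the following question), rescores the sheet under each candidate shift point, and flags the sheet when the shift-corrected score is much higher than the raw score. The function names and the flagging threshold are invented for the example.

    def unshifted_score(answers, key):
        """Questions answered correctly when no shift is assumed."""
        return sum(a == k for a, k in zip(answers, key))

    def best_single_shift(answers, key):
        """Best score obtainable by assuming that, from some bubble s onward,
        each bubble holds the answer intended for the following question.
        Returns (score, s); s is None if no shift beats the raw score."""
        n = len(key)
        best_score, best_start = unshifted_score(answers, key), None
        for s in range(n):
            score = sum(answers[i] == key[i] for i in range(s))  # unshifted prefix
            score += sum(answers[i] == key[i + 1] for i in range(s, n - 1))  # displaced suffix
            if score > best_score:
                best_score, best_start = score, s
        return best_score, best_start

    key = "ABCDABCDABCD"
    # A hypothetical student who knows every answer but, from question 5 on,
    # marks each answer one bubble too early; the last bubble gets a stray mark.
    student = key[:4] + key[5:] + "A"
    raw = unshifted_score(student, key)
    corrected, start = best_single_shift(student, key)
    if start is not None and corrected - raw >= 3:  # ad-hoc evidence threshold
        print(f"likely shift at question {start + 1}: raw score "
              f"{raw}/{len(key)}, shift-corrected {corrected}/{len(key)}")

On this example the sketch reports a likely shift at question 5, raising the score from 4/12 to 11/12. A real detector would presumably also need to consider backward shifts and multiple shift regions, and to weigh any correction against the risk of rewarding lucky guesses, which is where alignment machinery such as longest-common-subsequence analysis becomes relevant.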

Keywords

Test taker, Shift error, Standardized exam, Clerical error, Educational Testing Service

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Steven Skiena (1)
  • Pavel Sumazin (1)
  1. State University of New York at Stony Brook, Stony Brook
