Abstract
Administering multiple-choice questions with correction for guessing fails to take partial knowledge into account and may introduce bias, as examinees who lack full knowledge may differ in their willingness to risk guessing the correct answer. Elimination scoring gives examinees the opportunity to express partial knowledge: this alternative scoring procedure requires examinees to eliminate all response alternatives they consider incorrect. The current simulation study investigates how these two scoring procedures affect the response behavior of examinees who differ not only in ability but also in their attitude toward risk. Combining a psychometric model, which accounts for ability and item difficulty, with decision theory, which accounts for individual differences in risk aversion, a two-step response-generating model is proposed to predict the expected answering patterns on given multiple-choice questions. The simulations show no substantial differences in the answering patterns of examinees at either end of the ability continuum under the two scoring procedures, suggesting that ability has a predominant effect on response patterns. Compared to correction for guessing, elimination scoring leads to fewer full-score responses and to more demonstrations of partial knowledge, especially for examinees with intermediate success probabilities on the items. Only for those examinees does risk aversion have a decisive impact on the expected answering patterns.
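The two-step logic described in the abstract can be illustrated with a minimal sketch: a Rasch-type model supplies the probability of recognizing the correct answer, and a prospect-theory value function (Kahneman and Tversky 1979) governs whether a risk-averse examinee guesses among the remaining alternatives under correction for guessing. The function names, the four-option item, the 1 versus −1/(m−1) scoring rule, and the parameter values (α = β = 0.88, λ = 2.25) are illustrative assumptions, not the paper's exact specification.

```python
import math

def rasch_p(theta, b):
    """Success probability under the Rasch model for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    loss-averse (weighted by lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def expected_value_of_guessing(k, m=4, reward=1.0, penalty=-1.0 / 3):
    """Expected prospect value of guessing among the m - k alternatives
    that remain after eliminating k distractors, under correction for
    guessing (score 1 if right, -1/(m-1) if wrong)."""
    p_correct = 1.0 / (m - k)
    return p_correct * value(reward) + (1 - p_correct) * value(penalty)
```

Because omitting yields value(0) = 0, such an examinee guesses only when the expected prospect value is positive: with these assumed parameters, blind guessing on a four-option item (k = 0) has negative expected value and is avoided, while guessing after eliminating two distractors (k = 2) has positive expected value, which is how partial knowledge and risk attitude can jointly shape the response pattern.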
Notes
- 1.
α and β can take different values, but in many studies they are set to be equal (see Budescu and Bo 2015).
References
Arnold, J. C., & Arnold, P. L. (1970). On scoring multiple choice exams allowing for partial knowledge. The Journal of Experimental Education, 39, 8–13. https://doi.org/10.1080/00220973.1970.11011223.
Ben-Simon, A., Budescu, D. V., & Nevo, B. (1997). A comparative study of measures of partial knowledge in multiple-choice tests. Applied Psychological Measurement, 21, 65–88. https://doi.org/10.1177/0146621697211006.
Bereby-Meyer, Y., Meyer, J., & Flascher, O. M. (2002). Prospect theory analysis of guessing in multiple choice tests. Journal of Behavioral Decision Making, 15, 313–327. https://doi.org/10.1002/bdm.417.
Bond, A. E., Bodger, O., Skibinski, D. O. F., Jones, D. H., Restall, C. J., Dudley, E., et al. (2013). Negatively-marked MCQ assessments that reward partial knowledge do not introduce gender bias yet increase student performance and satisfaction and reduce anxiety. PLoS ONE, 8. https://doi.org/10.1371/journal.pone.0055956.
Budescu, D. V., & Bo, Y. (2015). Analyzing test-taking behavior: Decision theory meets psychometric theory. Psychometrika, 80, 1105–1122. https://doi.org/10.1007/s11336-014-9425-x.
Coombs, C. H., Milholland, J. E., & Womer, F. B. (1956). The assessment of partial knowledge. Educational and Psychological Measurement, 16, 13–37. https://doi.org/10.1177/001316445601600102.
De Laet, T., Vanderoost, J., Callens, R., & Janssen, R. (2016, September). Assessing engineering students with multiple choice exams: Theoretical and empirical analysis of scoring methods. Paper presented at the 44th annual SEFI conference, Tampere, Finland.
De Laet, T., Vanderoost, J., Callens, R., & Vandewalle, J. (2015, June). How to remove the gender bias in multiple choice assessments in engineering education? Paper presented at the 43rd annual SEFI conference, Orléans, France.
Frary, R. B. (1988). Formula scoring of multiple-choice tests (correction for guessing). Educational Measurement: Issues and Practice, 7, 33–38. https://doi.org/10.1111/j.1745-3992.1988.tb00434.x.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–292. https://doi.org/10.2307/1914185.
Lesage, E., Valcke, M., & Sabbe, E. (2013). Scoring methods for multiple choice assessment in higher education–Is it still a matter of number right scoring or negative marking? Studies in Educational Evaluation, 39, 188–193. https://doi.org/10.1016/j.stueduc.2013.07.001.
Lindquist, E. F., & Hoover, H. D. (2015). Some notes on corrections for guessing and related problems, 34, 15–19.
SAT Suite of Assessments. (n.d.). How the SAT is scored. Retrieved from https://collegereadiness.collegeboard.org/sat/scores/how-sat-is-scored.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Wu, Q., De Laet, T., Janssen, R. (2018). Elimination Scoring Versus Correction for Guessing: A Simulation Study. In: Wiberg, M., Culpepper, S., Janssen, R., González, J., Molenaar, D. (eds) Quantitative Psychology. IMPS 2017. Springer Proceedings in Mathematics & Statistics, vol 233. Springer, Cham. https://doi.org/10.1007/978-3-319-77249-3_16
DOI: https://doi.org/10.1007/978-3-319-77249-3_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-77248-6
Online ISBN: 978-3-319-77249-3
eBook Packages: Mathematics and Statistics (R0)