Abstract
This study models the self-reported test-taking motivation items administered to Swedish students in the PISA and TIMSS Advanced studies using an IRT approach. In the last two cycles of each assessment, six test-specific items were included in the Swedish student questionnaires to evaluate pupils' effort, motivation, and perceived importance of the tests. Using a multiple-group generalized partial credit model (MG-GPCM), we created an IRT motivation scale for each assessment. We also investigated measurement invariance across the two cycles of PISA (2012 and 2015) and of TIMSS Advanced (2008 and 2015). Results indicated that the proposed scales capture unidimensional constructs and measure students' motivation reliably (Cronbach's alpha above 0.78). Differential item functioning across assessment cycles, assessed with two criteria (RMSD and DSF), had more impact on the latent motivation scale for PISA than for TIMSS Advanced. Overall, the test-taking motivation items serve the purpose of diagnosing test-taking motivation in these two surveys well, and the proposed scales revealed a slight increase in pupils' motivation across the assessment cycles.
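For orientation, the MG-GPCM builds on the generalized partial credit model. A standard statement of the GPCM category response probability (textbook notation, not reproduced from this paper) is:

```latex
P(X_{ij} = k \mid \theta_i) =
  \frac{\exp\!\left(\sum_{v=0}^{k} a_j\,(\theta_i - b_{jv})\right)}
       {\sum_{c=0}^{m_j} \exp\!\left(\sum_{v=0}^{c} a_j\,(\theta_i - b_{jv})\right)},
\qquad k = 0, 1, \ldots, m_j,
```

where \(\theta_i\) is the latent motivation of student \(i\), \(a_j\) the discrimination of item \(j\), \(b_{jv}\) its step parameters, and the sum for \(v = 0\) is defined as zero. In the multiple-group extension used here, group-specific latent means and variances are estimated while item parameters are constrained across groups, which is what makes invariance across assessment cycles testable.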
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Reis Costa, D., Eklöf, H. (2019). IRT Scales for Self-reported Test-Taking Motivation of Swedish Students in International Surveys. In: Wiberg, M., Culpepper, S., Janssen, R., González, J., Molenaar, D. (eds) Quantitative Psychology. IMPS 2017, 2018. Springer Proceedings in Mathematics & Statistics, vol 265. Springer, Cham. https://doi.org/10.1007/978-3-030-01310-3_5
DOI: https://doi.org/10.1007/978-3-030-01310-3_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01309-7
Online ISBN: 978-3-030-01310-3
eBook Packages: Mathematics and Statistics (R0)