Abstract
The first three PISA cycles occurred in 2000, 2003 and 2006, with Literacy Scales in Reading, Mathematics and Science. This chapter explores equating-related issues for the Australian data and considers the implications for Australia's reported results. Previously published PISA results have employed a common reporting scale across the first three cycles for Reading only. In this chapter, common scales were constructed for all three Literacy Scales. In addition, the item parameters estimated here were based on Australian data only, rather than on the international item parameters used in PISA. This allows for an examination of the impact of country differential item functioning (DIF) on the Australian results. Australian PISA trends were explored in terms of the overall shape of the estimated performance distributions. Where applicable, comparisons were made with the published results based on international item parameters. While such comparisons showed several similarities, some differences were also found.
Published Australian Reading distributions reported a decline over the first three cycles in the performance of Australian students located at the top end of the distribution. Using Australian data only, a decline between the first two PISA cycles was found, but remarkably in the bottom 15% of the distribution only. Between the 2003 and 2006 cycles, an almost constant decline across the whole proficiency distribution was found, rather than a decline limited to the top end of the distribution, as reported in the media.
Reported PISA results have a high impact on educational policy, yet the outcomes of trend analyses may change with different methods. This investigation examines the impact on the estimated distributions of Australian PISA performance when Australian country-specific item parameters are used rather than international item parameters. This is further explored by equating the first three PISA cycles for each literacy scale. The results reported in this chapter highlight some of the potentially important differences that can arise when different analysis methods are used.
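The equating step the abstract refers to can be illustrated with a minimal sketch. Under a Rasch-type model, one common way to place two assessment cycles on a single scale is mean-mean linking: the difficulty estimates of the link items shared by both cycles determine a constant shift that is then applied to the new cycle's ability estimates. This is only an illustration of the general technique, not the chapter's actual procedure (PISA scaling uses the multidimensional random coefficients multinomial logit model and more elaborate linking); the item difficulties and abilities below are invented for the example.

```python
# Illustrative sketch of mean-mean linking between two Rasch-scaled
# assessment cycles. All numbers are hypothetical, not PISA parameters.

def mean_mean_shift(link_difficulties_base, link_difficulties_new):
    """Constant that shifts the new cycle's scale onto the base cycle's
    scale, computed from the common (link) items' difficulty estimates."""
    mean_base = sum(link_difficulties_base) / len(link_difficulties_base)
    mean_new = sum(link_difficulties_new) / len(link_difficulties_new)
    return mean_base - mean_new

# Hypothetical difficulties (in logits) of five link items,
# as estimated separately in each cycle.
b_cycle_2000 = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_cycle_2003 = [-1.0, -0.2, 0.3, 1.0, 1.7]

shift = mean_mean_shift(b_cycle_2000, b_cycle_2003)

# Adding the shift expresses 2003 abilities on the 2000 scale,
# so the two cycles' proficiency distributions can be compared.
theta_2003 = [-0.5, 0.0, 0.9]
theta_on_2000_scale = [theta + shift for theta in theta_2003]
print(round(shift, 3))  # → -0.2
```

Country DIF enters exactly here: if the link items' relative difficulties differ between the Australian and international calibrations, the resulting shift (and hence the reported trend) differs as well, which is the comparison this chapter investigates.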
© 2013 Springer Science+Business Media Dordrecht
Cite this chapter
Urbach, D. (2013). An Investigation of Australian OECD PISA Trend Results. In: Prenzel, M., Kobarg, M., Schöps, K., Rönnebeck, S. (eds) Research on PISA. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4458-5_10
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-4457-8
Online ISBN: 978-94-007-4458-5