
Abstract

This chapter describes the TIMSS assessment, which was designed to measure mathematics and science abilities at the end of primary education and at the beginning of secondary education. As a curriculum-based assessment, TIMSS assesses students in content and cognitive domains using both multiple-choice and open-ended items. The chapter describes the study’s background questionnaires and its grade-based, matrix-sampling design. It explains the TIMSS scoring, in which Item Response Theory (IRT) is used to analyze the test data and to assign achievement scores to students. The format of the data and the procedures used make sophisticated analysis methods necessary. The chapter also presents the IEA International Database (IDB) Analyzer, a tool that helps researchers calculate statistics and the standard errors of differences. It further explains why the focus is on mathematics scores: the influence of language on mathematics achievement is comparatively low, and mathematics classes are used in the sampling in most countries. Finally, the chapter helps to give meaning to differences in score points.
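Because the IRT scoring represents each student's achievement by several imputed scores (plausible values; see von Davier, Gonzalez, & Mislevy, 2009), any statistic must be computed once per plausible value and then combined with Rubin's rules. The following is a minimal Python sketch with invented numbers, not the IDB Analyzer's actual implementation; in practice the per-value sampling variances come from jackknife replicate weights:

```python
import math
import statistics

def combine_plausible_values(estimates, sampling_variances):
    """Combine one estimate per plausible value using Rubin's rules.

    estimates          : point estimate (e.g. a mean score) per plausible value
    sampling_variances : matching sampling variance for each estimate
    """
    m = len(estimates)
    point = statistics.mean(estimates)       # final point estimate
    u = statistics.mean(sampling_variances)  # average sampling variance
    b = statistics.variance(estimates)       # between-imputation variance
    total = u + (1 + 1 / m) * b              # Rubin's total variance
    return point, math.sqrt(total)           # estimate and its standard error

# Five hypothetical per-plausible-value means and sampling variances:
est, se = combine_plausible_values(
    [512.3, 511.8, 513.0, 512.6, 511.5],
    [4.1, 4.3, 4.0, 4.2, 4.1],
)
```

The resulting standard error is larger than the purely sampling-based one, because the uncertainty from imputing achievement scores is added on top of the sampling uncertainty.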


Notes

  1.

    It must be mentioned here that IEA defines a country in terms of an educational system. For example, the French and the Flemish parts of Belgium have different curricula and educational policies; consequently, both entities have been separate IEA members from the beginning, and results for the Flemish and the French part of Belgium are always reported separately in IEA reports. The same is true for England, Ireland, and Scotland.

  2.

    This differs from the approach used in OECD PISA, where the assessment is designed by experts independently of countries’ curricula. In that sense, PISA is normative: it defines a learning goal that students should have reached by the age of 15 in order to be successful in their economic lives (OECD, 1999).

  3.

    In comparison, the OECD PISA study applies an age-based sample, selecting 15-year-old students in each participating country regardless of the grade, or even ISCED level, in which they are enrolled. For comparing the effects of school-level factors on achievement, the grade-based approach seems beneficial, since the grade distribution of student groups with different achievement levels differs significantly in some countries. This is probably caused by the policies in some countries of having students who do not reach a certain ability level repeat a grade. For the immigrant population, another effect might lead to students being enrolled in grades below a country’s median grade level: the change from one educational system to another, combined with potential language problems. This issue is discussed in more detail in Chapter 4B.

  4.

    This has become more common in later cycles of TIMSS, as more countries became aware of the streaming that takes place within schools, that is, the grouping of students into different classes by ability level. Streaming results in significantly higher- and lower-achieving classes within the schools concerned. To reduce the variance between sampled schools that arises when classes of different ability levels are selected, and to disentangle school and class effects, an increasing number of participating countries have chosen to sample more than one class per school.
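    The variance effect described in this note can be illustrated with a toy simulation (invented numbers, not TIMSS data): when each school contains a high and a low ability stream and only one class per school is sampled, the streaming gap is confounded with the between-school variance, whereas sampling both classes removes it.

```python
import random
import statistics

random.seed(1)

# 200 hypothetical schools; each has a high- and a low-ability class whose
# true means lie 40 score points above and below the school mean.
school_means = [random.gauss(500, 30) for _ in range(200)]

# One class sampled per school: we observe the school mean +/- the streaming gap.
one_class = [m + random.choice([-40, 40]) for m in school_means]

# Both classes sampled per school: their average recovers the school mean.
both_classes = [(m - 40 + m + 40) / 2 for m in school_means]

var_one = statistics.variance(one_class)      # inflated by the streaming gap
var_both = statistics.variance(both_classes)  # close to the true between-school variance
```

    With these made-up parameters, the one-class variance is roughly the true between-school variance plus the squared streaming gap, so school-level differences would appear much larger than they really are.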

  5.

    This is also the reason why it is not recommended to use the TIMSS test to assess individual students or to give feedback to individual students or very small groups of students (see, e.g., Chapter 3.2.1 in Mirazchiyski, 2013).

  6.

    One caveat here is that the correlations shown in Table 3.2 are based on grade four student data, whereas it is mainly data from grade eight students that is analyzed in this study.

References

  • Beaton, A. E., Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., Kelly, D. L., & Smith, T. A. (1996). Mathematics achievement in the middle school years: IEA’s Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46(4), 443–459.

  • Brandt, S. (2008). Estimation of a Rasch model including subdimensions. IERI Monograph Series, 1, 51–70.

  • Foy, P., & Olson, J. F. (Eds.). (2009). TIMSS 2007 international database and user guide (DVD). Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • IEA. (2011). Brief history of IEA: 50 years of educational research. Retrieved from http://www.iea.nl/brief_history.html

  • Martin, M. O., & Kelly, D. (Eds.). (1997a). Third International Mathematics and Science Study technical report Volume II: Implementation and analysis. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • Martin, M. O., & Kelly, D. (1997b). TIMSS technical report Volume I: Design and development. Boston, MA: Boston College.

  • Martin, M. O., Mullis, I. V. S., & Chrostowski, S. J. (Eds.). (2004). TIMSS 2003 technical report. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • Masters, G. N. (1982). A Rasch model for partial credit scoring. Psychometrika, 47(2), 149–174.

  • Mirazchiyski, P. (2013). Providing school-level reports from international large-scale assessments: Methodological considerations, limitations, and possible solutions (IEA research report). Amsterdam, The Netherlands: IEA.

  • Mullis, I. V. S., & Martin, M. O. (2008). TIMSS 2007 encyclopedia. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • Mullis, I. V. S., Martin, M. O., Ruddock, G. J., O’Sullivan, C. Y., Arora, A., & Erberber, E. (2005). TIMSS 2007 assessment frameworks. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • Mullis, I. V. S., Martin, M. O., Ruddock, G. J., O’Sullivan, C. Y., & Preuschhoff, C. (2009). TIMSS 2011 assessment frameworks. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • OECD. (1999). Measuring student knowledge and skills: A framework for assessment. Paris, France: OECD Publishing. Retrieved from http://www.oecd.org/edu/school/programmeforinternationalstudentassessmentpisa/33693997.pdf

  • OECD. (2013a). Key skills and economic and social well-being. In OECD skills outlook 2013 (pp. 223–248). Paris, France: OECD Publishing. Retrieved from http://www.oecd-ilibrary.org/education/oecd-skills-outlook-2013/how-key-information-processing-skills-translate-into-better-economic-and-social-outcomes_9789264204256-10-en

  • Olson, J. F., Mullis, I. V. S., & Martin, M. O. (2008). TIMSS 2007 technical report. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College.

  • Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen, Denmark: Danish Institute for Educational Research.

  • Rubin, D. B. (2009). Multiple imputation for nonresponse in surveys. New York, NY: John Wiley & Sons.

  • von Davier, M., Gonzalez, E., & Mislevy, R. J. (2009). What are plausible values and why are they useful? IERI Monograph Series, 2, 9–36.


© 2016 Springer International Publishing Switzerland

Cite this chapter

Hastedt, D. (2016). Data and Methods. In: Mathematics Achievement of Immigrant Students. Springer, Cham. https://doi.org/10.1007/978-3-319-29311-0_3

  • DOI: https://doi.org/10.1007/978-3-319-29311-0_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-29310-3

  • Online ISBN: 978-3-319-29311-0
