
Assessment Literacy of Singapore Chinese Language Teachers in Primary and Secondary Schools


Abstract

Language assessment literacy refers to language teachers’ knowledge of measurement principles, practices, and their application to classroom assessment, especially to language assessment issues. This study reports on Singapore primary and secondary Chinese Language teachers’ knowledge of language assessment. The survey involved 323 primary and secondary teachers who responded to the Assessment Literacy Scale prepared by the researchers. Results at the subtest and item levels revealed the teachers’ shortfalls in assessment literacy and indicated their training needs. The respondents were able to answer general questions of a commonsensical nature correctly but were weak where specific technical knowledge was concerned. Findings from this study have implications for primary and secondary Chinese Language teachers’ training in assessment literacy.


References

  • Bachman, L. F. (1991). What does language testing have to offer? TESOL Quarterly, 25(4), 671–704.

  • Berger, A. (2012). Creating language-assessment literacy: A model for teacher education. In J. Hüttner, B. Mehlmauer-Larcher, S. Reichl, & B. Schiftner (Eds.), Theory and practice in EFL teacher education (pp. 57–82). Bristol: Multilingual Matters.

  • Boyles, P. (2005). Assessment literacy. In M. Rosenbusch (Ed.), National assessment summit papers (pp. 11–15). Ames: Iowa State University.

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Lawrence Erlbaum Associates.

  • Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132.

  • Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.

  • Hopkins, K. D. (1998). Educational and psychological measurement and evaluation. Needham Heights: Allyn & Bacon.

  • Inbar-Lourie, O. (2008). Constructing a language assessment knowledge base: A focus on language assessment courses. Language Testing, 25(3), 385–402.

  • Koh, K. H. (2011). Improving teachers’ assessment literacy through professional development. Teaching Education, 22(2), 255–276.

  • Kunnan, A. J., & Zhang, L. M. (2015). Responsibility in language assessment. In H. Yang (Ed.), The sociology of language testing (pp. 211–231). Shanghai: Shanghai Foreign Language Press.

  • Linn, R. L., & Miller, M. D. (2005). Measurement and assessment in teaching (9th ed.). Upper Saddle River: Pearson Education.

  • Malone, M. (2008). Training in language assessment. In E. Shohamy & N. H. Hornberger (Eds.), Encyclopedia of language and education (Vol. 7: Language testing and assessment, pp. 273–284). New York: Springer.

  • Malone, M. (2013). The essentials of assessment literacy: Contrasts between testers and users. Language Testing, 30(3), 329–344.

  • Mertler, C. A. (2005, April). Measuring teachers’ knowledge and application of classroom assessment concepts: Development of the assessment literacy inventory. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Quebec, April 11–15.

  • Pike, C. K., & Hudson, W. W. (No date). Reliability and measurement error in the presence of homogeneity. https://scholarworks.iupui.edu/bitstream/handle/1805/2325/Pike_Reliability_Measurement.pdf?sequence=1

  • Plake, B., & Impara, J. C. (1993). Assessment competencies of teachers: A national survey. Educational Measurement: Issues and Practice, 12(4), 10–12.

  • Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika, 74(1), 107–120. doi:10.1007/s11336-008-9101-0

  • Spolsky, B. (1978). Introduction: Linguists and language testers. In B. Spolsky (Ed.), Approaches to language testing: Advances in language testing series (Vol. 2, pp. v–x). Arlington: Center for Applied Linguistics.

  • Spolsky, B. (1995). Measured words: The development of objective language testing. Oxford: Oxford University Press.

  • Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72(7), 534–539.

  • Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Columbus: Merrill Publishing.

  • Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758–765.

  • Taylor, L. (2009). Developing assessment literacy. Annual Review of Applied Linguistics, 29, 21–36.

  • The Survey System. (2012). Sample size calculator. http://www.surveysystem.com/sscalc.htm

  • Witte, R. H. (2010). Assessment literacy in today’s classroom. Classroom Assessment Resource Center. Retrieved April 17, 2015, from www.education.com/reference/article/assessment-literacy-todays-classroom/

  • Wolf, D., Bixby, J., Glenn, J., & Gardner, H. (1991). To use their minds well: Investigating new forms of student assessment. Review of Research in Education, 17, 31–125.


Acknowledgements

Thanks are due to Mr. Lim Yee Ping, Master Teacher, Ministry of Education, for his valuable assistance in data collection, and to Miss Lee Yi-tian for her efficient assistance in data processing. The cooperation of the teachers in completing the survey is much appreciated.

Author information

Correspondence to Limei Zhang.

Appendix: Methodological Notes on Reliability and Validity

A report of this survey would be incomplete without a discussion of two key concepts of measurement: reliability and validity.

The conventional method of assessing score reliability is Cronbach’s alpha coefficient, which indicates the degree of internal consistency among the items, under the assumption that the items are homogeneous and the sample is heterogeneous. The 40 items of the Assessment Literacy Scale crafted for this study are scored 1 (right) or 0 (wrong). The Kuder-Richardson Formula 20 (KR20), an equivalent of Cronbach’s alpha for dichotomous items, was therefore used.
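For reference (the chapter does not reproduce it), KR20 for a k-item dichotomously scored test takes the standard form

$$ \mathrm{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right) $$

where p_i is the proportion of respondents answering item i correctly, q_i = 1 − p_i, and σ_X² is the variance of the total scores; Cronbach’s alpha is obtained by replacing Σ p_i q_i with the sum of the item variances.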

Table 6.6 below shows the KR20 reliabilities for the four subscales and for the scale as a whole. The KR20 reliabilities are disappointingly low by conventional expectations, which may call the trustworthiness of the survey results into question.

Table 6.6 KR20 reliabilities

However, there have been criticisms of Cronbach’s alpha as a measure of item homogeneity or unidimensionality. For example, Sijtsma (2009: para. 4.2) commented:

There is no clear and unambiguous relationship between alpha and the internal structure of a test. This can be demonstrated in a simple way. First, it is shown that a 1-factor test may have any alpha value. Thus, it may be concluded that the value of alpha says very little if anything about unidimensionality. Second, it is shown that different tests of varying factorial composition may have the same alpha value. Thus, it may be concluded that alpha says very little if anything about multiple-factor item structures.

One factor leading to the low reliabilities shown in Table 6.6 is the heterogeneous item content of the 40 items of the Assessment Literacy Scale: they cover many different aspects of educational measurement, some qualitative and others quantitative in nature, even within a particular subtest. This renders the conventional reliability measures (i.e., Cronbach’s alpha and its equivalent KR20), which assume item homogeneity, unsuitable for the purpose of the present study. Thus, the KR20 reliabilities routinely calculated and presented in Table 6.6 are not to be taken seriously.

Another factor contributing to low score reliability is group homogeneity. Pike and Hudson (No date: para. 1) discussed the limitation of using Cronbach’s alpha to estimate reliability when the sample responds homogeneously on the measured construct; they described the risk of falsely concluding that a new instrument has poor reliability and demonstrated the use of an alternative statistic that may serve as a cushion against such errors. The authors recommended calculating a relative alpha based on the standard error of measurement (SEM), which itself involves the reliability, as shown in the formula

$$ \mathrm{SEM} = \mathrm{SD} \times \sqrt{1 - \mathrm{reliability}} $$

The relative alpha, which can take a value between 0.0 and 1.0, indicates the extent to which the scores can be trusted; it is, in a sense, an alternative way to evaluate score reliability. The formula is

$$ \mathrm{Relative\ Alpha} = 1 - \frac{\mathrm{SEM}^2}{\left(\mathrm{Range}/6\right)^2} $$

In this formula, SEM is the usual indicator of the lack of trustworthiness of the obtained scores, and, under normal circumstances, the scores for a scale will theoretically span six standard deviations. The second term on the right is therefore an indication of the proportion of the test variance that is unreliable, and relative alpha indicates the proportion of test variance remaining after its unreliable portion is offset, i.e., the proportion of test variance which can be trusted.

In the present study, the maximum possible score for the Assessment Literacy Scale is 40, so the theoretically expected standard deviation is 40/6 = 6.67. However, the actual data yield standard deviations for the Scale as a whole of 4.24 (primary) and 4.66 (secondary), which are 0.64 and 0.70 of the theoretical standard deviation, respectively. In other words, the two groups are more homogeneous than expected.
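As a numerical illustration, the computation can be sketched in a few lines of Python. The standard deviations and range are those reported above; the function name relative_alpha is ours, and the reliability value of 0.60 is purely illustrative, since the actual KR20 values appear in Table 6.6 and are not reproduced in this text.

    import math

    def relative_alpha(sd, reliability, score_range):
        """Relative alpha in the sense of Pike and Hudson: the proportion
        of test variance that can be trusted after offsetting error."""
        sem = sd * math.sqrt(1.0 - reliability)   # SEM = SD * sqrt(1 - reliability)
        theoretical_sd = score_range / 6.0        # scores assumed to span six SDs
        return 1.0 - (sem ** 2) / (theoretical_sd ** 2)

    # SDs reported for the Scale as a whole; KR20 = 0.60 is illustrative only.
    for group, sd in [("primary", 4.24), ("secondary", 4.66)]:
        print(group, round(relative_alpha(sd, reliability=0.60, score_range=40.0), 2))

Even with a modest illustrative reliability, the sketch shows how a homogeneous sample (actual SD well below Range/6) yields a relative alpha substantially higher than the conventional coefficient would suggest.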

Table 6.7 shows the relative alphas for the primary and secondary groups of Chinese Language teachers surveyed here. The statistics suggest that much of the test variance has been captured by the 40-item Assessment Literacy Scale and the scores can therefore be trusted.

Table 6.7 Relative alpha coefficients

As for validity, it requires information beyond the test scores. Ideally, the criterion scores for validation would come from a test of the application of measurement concepts and techniques, but such information is not available within the survey results, although some of the 40 items of the Assessment Literacy Scale are of this type, for instance, the items on statistical concepts. However, indirect evidence of score validity is provided by the teachers’ responses to the open-ended question asking for comments and suggestions with regard to educational assessment.

For the open-ended question, the primary Chinese teachers made 14 responses and the secondary teachers 22, a total of 36 responses. Most of the responses (primary 7, secondary 12) reflect the teachers’ realization that assessment plays an important role in their teaching and that specialized knowledge is needed for it. Examples of such responses are shown below:

  • What is taught and what is assessed should be consistent.

  • Teachers need to have knowledge of educational measurement.

  • Need to popularize knowledge of assessment among the school leaders.

  • Hope to gain knowledge of educational measurement so that I can assess with in-depth understanding.

  • Without knowledge of educational measurement, data analysis of results conducted in the school is superficial.

  • Very much needed!

  • Will help improving teaching.

A second type of response (primary 4, secondary 7) reflects the difficulty the teachers had in understanding items involving technical concepts and terminology. Such responses are expected in view of the teachers’ lack of formal and intensive training in educational assessment. Examples of such responses are shown below:

  • Not familiar with the technical terms.

  • Too many technical, I don’t understand.

  • I don’t understand some of the questions.

  • Many mathematical terms; I don’t understand.

A third type of response (primary 3, secondary 3) reflects the teachers’ need to be convinced that assessment training is necessary for them to use assessment results properly as part of instruction. Examples of such responses are shown below:

  • Can assessment really raise the students’ achievement and attitude? Will it add on the teachers’ work? Really helpful to the students?

  • Does data help in formative assessment?

The responses reaffirm the test-taking attitude of the teachers when responding to the Assessment Literacy Scale; the seriousness with which they completed the survey is clearly evident. The second type of response corroborates the finding that they lack specific training in educational assessment and hence found the technical terms and concepts unfamiliar; this truly reflects their position and lack of knowledge. The third type of response indicates the reservations and inquisitiveness of some of the respondents, indirectly reflecting that they need to be convinced of the value of further training in educational measurement.


Copyright information

© 2016 Springer Science+Business Media Singapore

About this chapter

Cite this chapter

Zhang, L., & Soh, K. (2016). Assessment Literacy of Singapore Chinese Language Teachers in Primary and Secondary Schools. In K. Soh (Ed.), Teaching Chinese Language in Singapore. Springer, Singapore. https://doi.org/10.1007/978-981-10-0123-9_6


  • DOI: https://doi.org/10.1007/978-981-10-0123-9_6


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-0121-5

  • Online ISBN: 978-981-10-0123-9

  • eBook Packages: Education, Education (R0)
