
Designing and Rating Academic Writing Tests: Implications for Test Specifications

  • Amani Mejri
Chapter
Part of the Second Language Learning and Teaching book series (SLLT)

Abstract

Prompted by local teaching and assessment considerations, this study investigated academic writing teachers’ practices as test specification writers and as raters of writing tests. It also addressed test takers’ conceptions of writing assessment and score interpretation, with the aim of capturing a comprehensive view of writing assessment in an EFL context. To this end, a rating scale questionnaire was administered to 10 academic writing teachers at different Tunisian universities, and another to 25 third-year English students in an EFL context. The study dealt essentially with theoretical and operational definitions of the writing construct: for test designers, throughout the test development process; for test takers, through sitting for the test. Students’ test scores were obtained to investigate the social aspects of writing assessment in the Tunisian setting. The quantitative data were analysed using SPSS and indicated a gap between teachers’ and students’ views of the writing construct and what they concretely endorsed and represented. Both questionnaires, along with the test scores, showed a marked focus on writing as a linguistic competence, while other (social, pragmatic, and communicative) competences are overlooked in the assessment process. The paper concludes with a discussion of the findings and of their theoretical, pedagogical, and methodological implications for the local writing assessment context.
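The group comparison described above can be illustrated with a minimal sketch in Python. All item names and rating values below are hypothetical (the chapter's actual instruments and data are not reproduced here); the sketch only shows the kind of descriptive comparison — mean Likert-scale ratings per construct facet, per respondent group — that underlies the reported teacher–student gap and the emphasis on linguistic over communicative competence.

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings of how strongly each group treats writing
# as a linguistic (grammar/vocabulary) vs a communicative competence.
# These numbers are illustrative only, not the study's data.
teacher_ratings = {"linguistic": [5, 4, 5, 4, 5], "communicative": [2, 3, 2, 2, 3]}
student_ratings = {"linguistic": [4, 5, 4, 4], "communicative": [3, 2, 3, 2]}

def construct_emphasis(ratings):
    """Mean rating per construct facet for one respondent group."""
    return {facet: round(mean(scores), 2) for facet, scores in ratings.items()}

teachers = construct_emphasis(teacher_ratings)
students = construct_emphasis(student_ratings)

# Per-facet difference between the two groups' mean ratings; a nonzero
# value flags a teacher-student gap on that facet of the construct.
gap = {facet: round(teachers[facet] - students[facet], 2) for facet in teachers}
```

In practice an inferential test (e.g. a Mann–Whitney U, as offered by SPSS or `scipy.stats`) would accompany such descriptives; the sketch stops at means to stay self-contained.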

Keywords

Writing assessment · Test specifications · Construct (theoretical and operational definition) · Validity · Quantitative analysis


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Faculty of Human and Social Sciences of Tunis, Tunis, Tunisia
