Test Method and Response Format

Jufang Kong

Abstract

This chapter deals with the concepts of test method and response format. It revisits the two most widely used response formats in reading comprehension tests, the multiple choice question (MCQ) format and the short answer question (SAQ) format, and outlines their respective strengths and weaknesses. These two response formats were adopted as cases for investigating the test method effect(s) on reading comprehension test performance.


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

1. Zhejiang Normal University, Jinhua, China
