
Conclusion and Future Directions

Jufang Kong

Abstract

This concluding chapter synthesizes the major findings of the empirical study reported in the previous three chapters. It then outlines the theoretical and methodological implications of the study, points out its limitations, and suggests directions for future research.


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Zhejiang Normal University, Jinhua, China
