Extending LMS to Support IRT-Based Assessment Test Calibration

  • Panagiotis Fotaris
  • Theodoros Mastoras
  • Ioannis Mavridis
  • Athanasios Manitsaris
Part of the Communications in Computer and Information Science book series (CCIS, volume 73)

Abstract

Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result, Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve assessment quality and deliver reliable measures of examinee performance. This paper introduces a methodological and architectural framework that embeds a CAA tool in a Learning Management System (LMS) to assist test developers in refining the items that constitute assessment tests. An Item Response Theory (IRT) based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules for the statistical indices produced by the IRT analysis; by applying these rules, the LMS detects items with various discrepancies and flags them for content review. Repeating this procedure can improve the overall efficiency of the testing process. A hedged sketch of such rule-based flagging follows below.
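
To make the rule-based flagging concrete, the sketch below shows one way validity rules might be expressed and applied to calibrated item parameters. It assumes the common three-parameter logistic (3PL) parameterization, with discrimination a, difficulty b, and pseudo-guessing c; the rule names, thresholds, and data structures are illustrative assumptions and not the authors' implementation.

# Minimal, hypothetical sketch of flagging items whose IRT indices violate
# developer-defined validity rules. Parameters follow the 3PL convention:
# discrimination a, difficulty b, pseudo-guessing c. Thresholds are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ItemCalibration:
    item_id: str
    a: float  # discrimination
    b: float  # difficulty
    c: float  # pseudo-guessing (lower asymptote)

# Each rule maps a name to a predicate that returns True when the item
# parameter falls outside the acceptable range defined by the test developer.
VALIDITY_RULES = {
    "low_discrimination": lambda item: item.a < 0.5,
    "extreme_difficulty": lambda item: abs(item.b) > 3.0,
    "high_guessing": lambda item: item.c > 0.35,
}

def flag_items(calibrations):
    """Return a mapping of item_id -> list of violated rule names."""
    flagged = {}
    for item in calibrations:
        violations = [name for name, rule in VALIDITY_RULES.items() if rule(item)]
        if violations:
            flagged[item.item_id] = violations
    return flagged

if __name__ == "__main__":
    pool = [
        ItemCalibration("Q1", a=1.2, b=0.4, c=0.18),   # acceptable
        ItemCalibration("Q2", a=0.3, b=-3.6, c=0.10),  # weak and too easy
        ItemCalibration("Q3", a=0.9, b=1.1, c=0.42),   # too guessable
    ]
    for item_id, reasons in flag_items(pool).items():
        print(f"{item_id}: flag for review ({', '.join(reasons)})")

In this sketch, flagged items would be returned to the test developer for content revision, after which the calibration and rule check could be repeated on the updated assessment profile.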

Keywords

e-learning · Assessment Test Calibration · Computer Aided Assessment · Item Analysis · Item Response Theory · Learning Management System

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Panagiotis Fotaris 1
  • Theodoros Mastoras 1
  • Ioannis Mavridis 1
  • Athanasios Manitsaris 1

  1. Department of Applied Informatics, University of Macedonia, Thessaloniki, Greece