On the Design and Development of an Assessment System with Adaptive Capabilities

  • Angelo Bernardi
  • Carlo Innamorati
  • Cesare Padovani
  • Roberta Romanelli
  • Aristide Saggino
  • Marco Tommasi
  • Pierpaolo Vittorini
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 804)

Abstract

Individual assessment is an important tool in modern society. Tests can be built according to Classical Test Theory (CTT) or Item Response Theory (IRT); the latter also makes it possible to develop Computerized Adaptive Testing (CAT) systems. In this context, the paper reviews the available systems for CTT, IRT and CAT, highlights their main characteristics, takes these characteristics as the initial requirements for the design of a novel system called UTS (UnivAQ Test Suite), and presents its architecture and initial functionalities.
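
The contrast the abstract draws between CTT and IRT rests on IRT's item-level model, which is what enables adaptive testing. As a brief illustration, using a standard formulation from the IRT literature (Birnbaum's two-parameter logistic model; this sketch is not quoted from the paper itself): the probability that an examinee with latent ability θ answers item i correctly is

  P_i(θ) = 1 / (1 + exp(−a_i(θ − b_i)))

where a_i is the item's discrimination and b_i its difficulty. A CAT system exploits this by repeatedly administering the item with the greatest Fisher information at the current ability estimate,

  I_i(θ) = a_i² · P_i(θ) · (1 − P_i(θ)),

re-estimating θ after each response, and stopping once the estimate is sufficiently precise. CTT, by contrast, models only the total test score, so this kind of item-by-item adaptation is not available.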

Keywords

Assessment · CTT · IRT · CAT

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Angelo Bernardi (3)
  • Carlo Innamorati (2)
  • Cesare Padovani (2)
  • Roberta Romanelli (1)
  • Aristide Saggino (1)
  • Marco Tommasi (1)
  • Pierpaolo Vittorini (2)
  1. University of Chieti-Pescara, Chieti, Italy
  2. University of L’Aquila, L’Aquila, Italy
  3. Servizi Elaborazione Dati SpA, L’Aquila, Italy
