On the Design and Development of an Assessment System with Adaptive Capabilities

  • Conference paper
  • First Online:
Methodologies and Intelligent Systems for Technology Enhanced Learning, 8th International Conference (MIS4TEL 2018)

Abstract

Individual assessment is an important tool in modern society. Tests can be built according to Classical Test Theory (CTT) or to Item Response Theory (IRT), the latter making it possible to build Computerized Adaptive Testing (CAT) systems. In this context, the paper reviews the available systems for CTT, IRT and CAT, and highlights the main characteristics that are taken as initial requirements for the design of a novel system, called UTS (UnivAQ Test Suite), whose architecture and initial functionalities are presented.
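
As a brief illustration of why IRT, unlike CTT, lends itself to adaptive testing, consider the standard two-parameter logistic (2PL) model; the abstract does not state which IRT model UTS adopts, so the formulas below are only the usual textbook sketch, not the paper's own method:

\[ P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}} \]

gives the probability that an examinee of ability \(\theta\) answers item \(i\) correctly, where \(a_i\) is the item's discrimination and \(b_i\) its difficulty. The corresponding Fisher information,

\[ I_i(\theta) = a_i^2 \, P_i(\theta)\,\bigl(1 - P_i(\theta)\bigr), \]

quantifies how much item \(i\) reveals about \(\theta\). A CAT engine can therefore, at each step, administer the not-yet-used item with the largest \(I_i(\hat{\theta})\) at the current ability estimate \(\hat{\theta}\), re-estimate \(\hat{\theta}\) (e.g., by maximum likelihood), and stop once the standard error \(1/\sqrt{\sum_i I_i(\hat{\theta})}\) falls below a chosen threshold. CTT, by contrast, scores a fixed test as a whole and provides no per-item parameters on a common ability scale with which to drive such a selection loop.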

Author information

Corresponding author

Correspondence to Pierpaolo Vittorini.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Bernardi, A. et al. (2019). On the Design and Development of an Assessment System with Adaptive Capabilities. In: Di Mascio, T., et al. Methodologies and Intelligent Systems for Technology Enhanced Learning, 8th International Conference. MIS4TEL 2018. Advances in Intelligent Systems and Computing, vol 804. Springer, Cham. https://doi.org/10.1007/978-3-319-98872-6_23
