
Digital Exams in Engineering Education

  • Meta Keijzer-de Ruijter
  • Silvester Draaijer
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1014)

Abstract

Digital exams are rather uncommon in engineering education because general e-assessment platforms lack advanced item types that mimic typical engineering problem-solving processes and award partial scores. However, such advanced items can be developed with Maple T.A. We describe how these items are structured into scenarios and how they were developed for a second-year bachelor's-level materials science course that ran three times at Delft University of Technology. We evaluate how the items function in practice, how they are scored, and how they perform from an educational measurement perspective. The paper discusses the results of the study and future directions for the development of digital exams in engineering courses.
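The core idea summarised above, partitioning an engineering problem into graded sub-steps so that partial credit can be awarded for correct intermediate results, can be illustrated with a minimal sketch. The Python below is a hypothetical illustration, not the authors' Maple T.A. implementation; the SubQuestion fields, weights, and numerical tolerance are assumptions chosen for the example.

```python
# Hypothetical sketch: an exam item partitioned into sub-questions, each graded
# numerically within a tolerance, with partial credit summed over the steps.

from dataclasses import dataclass


@dataclass
class SubQuestion:
    """One graded step of a partitioned exam item."""
    prompt: str
    reference: float          # expected numerical answer
    weight: float             # share of the item's total score
    tolerance: float = 0.01   # relative tolerance for numerical grading


def grade_item(steps: list[SubQuestion], answers: list[float]) -> float:
    """Return the partial-credit score (0..1) for a student's answers."""
    score = 0.0
    for step, answer in zip(steps, answers):
        if abs(answer - step.reference) <= step.tolerance * abs(step.reference):
            score += step.weight
    return score


# Example: an axial-stress calculation (F = 10 kN, d = 10 mm) split into two steps.
item = [
    SubQuestion("Cross-sectional area A = pi * d^2 / 4 [m^2]", 7.854e-5, weight=0.4),
    SubQuestion("Axial stress sigma = F / A [MPa]", 127.3, weight=0.6),
]
print(grade_item(item, [7.85e-5, 120.0]))  # 0.4: only the first step is within tolerance
```

In a real e-assessment platform the reference values for later steps would typically be recomputed from the student's earlier answers (follow-through marking), which this sketch omits for brevity.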

Keywords

Digital exam · e-Assessment · Question partitioning · Partial credit · Scenario · Engineering education · Maple T.A.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Delft University of Technology, Delft, The Netherlands
  2. Faculty of Behavioural and Movement Sciences, Department of Research and Theory in Education, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
