Innovative Items for Computerized Testing

  • Cynthia G. Parshall
  • J. Christine Harmes
  • Tim Davey
  • Peter J. Pashley
Chapter
Part of the Statistics for Social and Behavioral Sciences book series (SSBS)

Abstract

As computer-based testing (CBT) becomes a dominant, if not the dominant, medium for delivering assessments, interest in the potential of innovative items has grown. Innovative items are those that make use of features and functions of the computer to deliver assessments that do things not easily done in traditional paper-and-pencil testing.
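To make the definition concrete, the sketch below (not from the chapter; every class, field, and value is a hypothetical illustration) shows one way a delivery system might represent an innovative item alongside a traditional multiple-choice item: the same stem and identifier, plus media assets and a computer-dependent response action such as a hotspot click.

# A minimal, hypothetical sketch: one way a CBT delivery system might represent
# an "innovative" item next to a traditional multiple-choice item. All class and
# field names are illustrative assumptions, not the chapter's terminology.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    item_id: str
    stem: str
    options: List[str] = field(default_factory=list)   # empty for non-selection items
    media: List[str] = field(default_factory=list)      # e.g., audio/video/graphic assets
    response_action: str = "select_one"                  # e.g., "hotspot", "drag_and_drop", "audio_record"

# Traditional multiple-choice item: text stem, single selection from fixed options.
mc_item = Item(
    item_id="MC-001",
    stem="Which chamber of the heart pumps oxygenated blood to the body?",
    options=["Right atrium", "Left atrium", "Right ventricle", "Left ventricle"],
)

# "Innovative" item: adds sound and a response captured directly on an image.
innovative_item = Item(
    item_id="INNOV-001",
    stem="Listen to the heart sounds, then click the valve where the murmur originates.",
    media=["heart_sounds.wav", "heart_diagram.png"],
    response_action="hotspot",
)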

Keywords

Response Item, Response Action, Item Type, Input Device, Computerized Adaptive Test

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Cynthia G. Parshall (1)
  • J. Christine Harmes (2)
  • Tim Davey (3)
  • Peter J. Pashley (4)

  1. Measurement Consultant, Temple Terrace, USA
  2. The Center for Assessment and Research Studies, James Madison University, Harrisonburg, USA
  3. Educational Testing Service, Princeton, USA
  4. Law School Admission Council, Newtown, USA
