Designing Item Pools for Adaptive Testing

  • Bernard P. Veldkamp
  • Wim J. van der Linden
Chapter
Part of the Statistics for Social and Behavioral Sciences book series (SSBS)

Abstract

In existing adaptive testing programs, each successive item in the test is chosen to optimize an objective. Examples of well-known objectives are maximizing the information in the test at the ability estimate for the test taker or minimizing the deviation of its information from a target value at the estimate. In addition, item selection is required to realize a set of content specifications for the test. For example, item content may be required to follow a certain taxonomy, or the answer-key distribution for the test must not deviate too much from uniformity. Content specifications are generally defined in terms of combinations of attributes the items in the test should have. They are typically realized by imposing a set of constraints on the item-selection process. The presence of both an objective and a set of constraints in adaptive testing leads to the notion of adaptive testing as a constrained (sequential) optimization problem; for a more formal introduction to this notion, see van der Linden (this volume, chap. 2).
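The selection rule described above can be illustrated with a minimal sketch. The pool, the item parameters, and the content quota below are all hypothetical, and the greedy rule shown is only one simple way to combine an information objective with content constraints (it is not the authors' method): each next item maximizes Fisher information at the current ability estimate among the items whose content constraints are not yet saturated.

```python
import math

# Hypothetical 2PL item pool: (discrimination a, difficulty b, content area)
pool = [
    (1.2, -0.5, "algebra"),
    (0.8,  0.0, "geometry"),
    (1.5,  0.3, "algebra"),
    (1.0,  1.0, "geometry"),
]

def fisher_info(a, b, theta):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_next(theta, administered, quota):
    """Greedy constrained selection: take the most informative item at the
    current ability estimate whose content area still has quota left."""
    best, best_info = None, -1.0
    for i, (a, b, area) in enumerate(pool):
        if i in administered or quota.get(area, 0) <= 0:
            continue  # item already used, or its content constraint is saturated
        info = fisher_info(a, b, theta)
        if info > best_info:
            best, best_info = i, info
    return best

# Example: one more algebra item and one more geometry item are allowed
quota = {"algebra": 1, "geometry": 1}
i = pick_next(theta=0.2, administered=set(), quota=quota)  # selects item 2
```

After each administration the quota for the selected item's area would be decremented and the ability estimate updated; the shadow-test approach discussed elsewhere in this volume replaces this greedy step with the solution of a full test-assembly model at every item.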

Keywords

Design Space · Design Point · Stimulus Level · Exposure Rate · Item Bank


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Bernard P. Veldkamp¹
  • Wim J. van der Linden²
  1. Department of Research Methodology, Measurement, and Data Analysis, University of Twente, Enschede, The Netherlands
  2. CTB/McGraw-Hill, Monterey, USA