Technological Issues for Computer-Based Assessment

Chapter in Assessment and Teaching of 21st Century Skills

Abstract

This chapter reviews the contribution of new information and communication technologies to the advancement of educational assessment. Improvements can be described in terms of precision in detecting the actual values of the observed variables, efficiency in collecting and processing information, and the speed and frequency of feedback given to participants and stakeholders. The chapter reviews previous research and development in two ways: describing the main tendencies in four regions (Asia, Australia, Europe and the US) and summarizing research on how technology advances assessment in certain crucial dimensions (assessment of established constructs, extension of assessment domains, assessment of new constructs, and assessment in dynamic situations). As educational assessment has a great variety of applications, each requiring different technological solutions, the chapter classifies assessment domains, purposes and contexts and identifies the technological needs and solutions for each. It then reviews the contribution of technology to the entire educational evaluation process, from authoring and the automatic generation and storage of items, through delivery methods (Internet-based, local server, removable media, mini-computer labs) and the forms of task presentation that technology makes possible, to response capture, scoring, and automated feedback and reporting. Finally, the chapter identifies areas in which further research and development are needed (migration strategies, security, availability, accessibility, comparability, framework and instrument compliance) and lists themes for research projects feasible for inclusion in the Assessment and Teaching of Twenty-First Century Skills project.


Notes

  1. The assessment instrument integrated software from four different providers on a Microsoft Windows XP platform. The two key components of the software package were developed by SkillCheck Inc. (Boston, MA) and SoNet Software (Melbourne, Australia). The SkillCheck system provided the software responsible for delivering the assessment items and capturing student data; it also provided the simulation, short constructed-response and multiple-choice item platforms. The SoNet software enabled live software applications (such as Microsoft Word) to be run within the global assessment environment and the resultant student products to be saved for later grading.

  2. See http://www.tba.dipf.de/index.php?option=com_content&task=view&id=25&Itemid=33 for the mission statement of the research unit.

  3. See http://kompetenzmodelle.dipf.de/en/projects.

  4. The contributions of Julian Fraillon of ACER and Mike Janic of SoNET systems to these thoughts are acknowledged.


Author information

Correspondence to Benő Csapó.


Copyright information

© 2012 Springer Science+Business Media B.V.

Cite this chapter

Csapó, B., Ainley, J., Bennett, R.E., Latour, T., Law, N. (2012). Technological Issues for Computer-Based Assessment. In: Griffin, P., McGaw, B., Care, E. (eds) Assessment and Teaching of 21st Century Skills. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-2324-5_4
