Measuring What Matters in a Digital Age: Technology and the Design of Assessments for Multisource Comprehension

  • James W. Pellegrino


This chapter discusses issues in research and development aimed at forging a more productive connection between technology and the design and deployment of assessments that can measure what matters and support learning in a digital world. Assessment is first discussed as a process of reasoning from evidence, emphasizing its necessary connections to theory and research on cognition and learning. An evidence-centered design process, which allows one to move from theory and research on cognition to actual assessment development, is described and subsequently illustrated. Consideration is also given to the affordances of technology for expanding the scope of what we assess and how, and to the ways in which information derived from a formative assessment process can be used to support teaching and learning. To illustrate these ideas, the chapter explicates a cognitive model of multisource comprehension and then uses that model to design and deploy technology-based assessments of components of multisource comprehension. The application of evidence-centered design is discussed for components such as sourcing, analysis, and synthesis in the context of multiple digital text sources, and technology-based tasks for assessing aspects of sourcing are illustrated. The chapter concludes with a discussion of the opportunities that currently exist in a digital world to make assessment an integral part of learning environments, and of technology’s affordances for making such environments more productive and effective for all learners.


Keywords: Domain model · Formative assessment · Task model · Assessment task · Student model



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. University of Illinois, Chicago, USA
