
Assessing Science Inquiry Skills in an Immersive, Conversation-Based Scenario

Chapter in: Big Data and Learning Analytics in Higher Education

Abstract

Innovative, interactive tasks that include conversations among humans and virtual (pedagogical) agents can be used to assess relevant cognitive skills (e.g., scientific inquiry skills). These new assessment systems support the collection of additional information (e.g., timing data, conversation path sequences, and the amount of help used) that provides context for the assessment and can inform assessment claims in these environments. To assess science inquiry skills, we implemented and evaluated a game-like assessment with embedded conversations called the Volcano Scenario. This chapter describes the Volcano Scenario and highlights the techniques used to collect and analyze the data the system generates. A hybrid approach to analyzing data from interactive assessment environments, one that combines traditional psychometric analysis with several big-data processes, is described and illustrated through analyses of data from 500 participants with at least one year of college experience.
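
To make the abstract's mention of process data concrete, the sketch below illustrates how timing, conversation-path, help-use, and item-score features might be extracted from raw log events. This is a minimal, hypothetical Python example, not the authors' actual pipeline: the event format, field names, and feature definitions are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of process-data feature extraction.
# The event format and field names are assumptions, not the chapter's data model.
from collections import defaultdict

# Hypothetical log events: (participant_id, time_sec, event_type, detail)
events = [
    ("p01", 0.0, "dialog_turn", "node_A"),
    ("p01", 12.4, "dialog_turn", "node_B"),
    ("p01", 30.1, "help_request", "hint_1"),
    ("p01", 55.9, "item_response", "correct"),
    ("p02", 0.0, "dialog_turn", "node_A"),
    ("p02", 8.2, "item_response", "incorrect"),
]

def extract_features(log):
    """Aggregate per-participant timing, path, help-use, and score features."""
    feats = defaultdict(lambda: {"path": [], "help_count": 0,
                                 "total_time": 0.0, "score": 0})
    for pid, t, etype, detail in log:
        f = feats[pid]
        f["total_time"] = max(f["total_time"], t)   # elapsed time on task
        if etype == "dialog_turn":
            f["path"].append(detail)                # conversation path sequence
        elif etype == "help_request":
            f["help_count"] += 1                    # amount of help used
        elif etype == "item_response":
            f["score"] += int(detail == "correct")  # traditional item score
    return dict(feats)

for pid, f in sorted(extract_features(events).items()):
    print(pid, f["score"], f["help_count"], f["total_time"],
          " -> ".join(f["path"]))
```

Features of this kind could then feed the hybrid analysis the chapter describes, for example as process covariates examined alongside traditional item scores in a psychometric model.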



Author information

Correspondence to Diego Zapata-Rivera.


Copyright information

© 2017 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Zapata-Rivera, D., Liu, L., Chen, L., Hao, J., & von Davier, A. A. (2017). Assessing Science Inquiry Skills in an Immersive, Conversation-Based Scenario. In: Kei Daniel, B. (Ed.), Big Data and Learning Analytics in Higher Education. Springer, Cham. https://doi.org/10.1007/978-3-319-06520-5_14


  • DOI: https://doi.org/10.1007/978-3-319-06520-5_14


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-06519-9

  • Online ISBN: 978-3-319-06520-5

  • eBook Packages: Education, Education (R0)
