Journal of Science Education and Technology, Volume 21, Issue 1, pp 183–196

Transforming Biology Assessment with Machine Learning: Automated Scoring of Written Evolutionary Explanations

  • Ross H. Nehm
  • Minsu Ha
  • Elijah Mayfield


Abstract

This study explored the use of machine learning to automatically evaluate the accuracy of students’ written explanations of evolutionary change. Performance of the Summarization Integrated Development Environment (SIDE) program was compared to human expert scoring using a corpus of 2,260 evolutionary explanations written by 565 undergraduate students in response to two different evolution instruments (the EGALT-F and EGALT-P), whose prompts differed in surface features such as species and traits. We tested human-SIDE scoring correspondence under a series of training and testing conditions, using Kappa inter-rater agreement values greater than 0.80 as a performance benchmark. In addition, we examined the effect of response length on scoring success; that is, whether SIDE scoring models functioned with comparable success on short and long responses. SIDE performed best when scoring models were built and tested at the individual item level; performance degraded when suites of items or entire instruments were used to build and test scoring models. Overall, SIDE proved a powerful and cost-effective tool for assessing student knowledge and performance in a complex science domain.
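For context on the 0.80 benchmark: Cohen's kappa corrects raw human-machine agreement for the agreement expected by chance, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the chance agreement implied by each rater's label frequencies. The short Python sketch below illustrates the computation on hypothetical binary concept scores; it is an illustration only, not the study's data or the SIDE implementation.

    # Cohen's kappa between human and machine scores (illustrative sketch).
    # The score lists below are hypothetical, not the study's data.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Return Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
        n = len(rater_a)
        # Observed agreement: fraction of responses scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical scores: 1 = key concept present, 0 = absent.
    human = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    side  = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
    print(round(cohens_kappa(human, side), 2))  # 0.8

With one disagreement in ten responses, observed agreement is 0.90, chance agreement is 0.50, and kappa is 0.80, exactly the threshold the study treats as acceptable human-computer correspondence.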


Keywords: Machine learning · SIDE · Text analysis · Assessment · Computers · Evolution · Explanation



Acknowledgments

We thank the faculty and participants of the 2010 PSLC (NSF Pittsburgh Science of Learning Center) summer school for financial and intellectual support, Prof. Carolyn Penstein Rosé for introducing us to the SIDE program, and NSF REESE grant 0909999 for financial support.



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. School of Teaching and Learning, The Ohio State University, Columbus, USA
  2. Language Technologies Institute, Carnegie Mellon University, Pittsburgh, USA
