Posing Comparative Statistical Investigative Questions

  • Pip Arnold
  • Maxine Pfannkuch
Chapter
Part of the ICME-13 Monographs book series (ICME13Mo)

Abstract

A “good” statistical investigative question is one that allows rich exploration of the data in hand, discovery, and statistical thinking. Two outcomes emerged from four research cycles conducted over five years: criteria for what makes a good statistical investigative question, and a detailed two-way hierarchical classification framework for the comparative statistical investigative questions that are posed. Focusing on the last research cycle, responses from pre- and post-tests are explored, and the level of comparative statistical investigative questions that students posed is discussed.

Keywords

Comparisons · SOLO taxonomy · Statistical enquiry cycle · Statistical investigative questions


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Karekare Education, Auckland, New Zealand
  2. Department of Statistics, The University of Auckland, Auckland, New Zealand