Data Processing, Systematic Errors, and Validity of Conclusions in Education Research

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 716)

Abstract

The aim of the present study is to investigate the validity of various results in education research, in the light of the way these have been derived from the data taken. The use of simple statistical tools for data analysis is seen as the main culprit, leading to results that are not supported by the quality of the data taken. The repeated use of purely statistical tools, however simple to execute and convenient they may be, ignores the presence of systematic (non-statistical) measurement errors. It is precisely this failure during data analysis that very often leads to erroneous results. The non-repeatability of various experiments is thus explained, and some suggestions are offered to improve the situation.

Systematic error estimation and random error computation should be combined to determine the total experimental error (or uncertainty) for each and every data point of the primary experimental data plot. Subsequently, full error-propagation techniques need to be applied to find the final error for each point, so that these points can be reliably utilised to make valid comparisons and, finally, to derive valid experimental results.
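To make the procedure concrete, a minimal sketch in standard uncertainty notation follows (the symbols are illustrative and not taken from the paper itself). One common convention combines the random (statistical) and systematic uncertainties of a measured quantity x in quadrature,

\[
\delta x = \sqrt{\delta x_{\mathrm{stat}}^{2} + \delta x_{\mathrm{sys}}^{2}},
\]

and then propagates the per-point uncertainties through any derived result f(x_1, ..., x_n), assuming independent inputs:

\[
\delta f = \sqrt{\sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_{i}} \right)^{2} (\delta x_{i})^{2}}.
\]

Correlated or non-Gaussian systematic errors generally require a more careful treatment than this quadrature formula.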

Naive utilisation of simplistic statistical functions should be curtailed, while more precise information needs to be factored in and properly evaluated, so that the conclusions drawn are unequivocally valid.

Keywords

Data processing · Education research · Statistical errors · Error propagation · Research methodology


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

The Science Laboratory, University of Patras, Patras, Greece