Validity Evaluation

  • Miroslaw Staron


Conducting a research study is always linked to the question of whether we can trust the results. Since the goal of each action research project is to improve software engineering practices and tools, we need to assess the validity of our research findings very critically. We therefore need to weigh the impact of the research results against their limitations, and to provide the stakeholders of the action research project with a solid and as-objective-as-possible account of the research validity.





Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Miroslaw Staron
  1. Department of Computer Science and Engineering, University of Gothenburg, Gothenburg, Sweden
