Abstract
A large-scale research project involving many research and industry organizations working toward a common goal should be an ideal basis for profound technology evaluations. The possibility of conducting industrial case studies in multiple settings ought to enable reliable quantitative assessment of the performance of new technologies in various real-world contexts. However, due to diverse challenges such as internal agendas, implicit constraints, and unaligned objectives, leveraging this potential goes beyond the usual challenge of cat-herding in such projects. Based on our experience coordinating technology evaluations in several research projects, this paper sketches the typical issues and outlines an approach for dealing with them. Although new in its composition, this approach brings together principles and techniques perceived as useful in earlier projects (e.g., cross-organizational alignment, abstract measures, and internal baselining). Moreover, as we are currently applying the approach in a large research project, this paper presents first insights into its applicability and usefulness.
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Kläs, M., Bauer, T., Tiberi, U. (2013). Beyond Herding Cats: Aligning Quantitative Technology Evaluation in Large-Scale Research Projects. In: Heidrich, J., Oivo, M., Jedlitschka, A., Baldassarre, M.T. (eds) Product-Focused Software Process Improvement. PROFES 2013. Lecture Notes in Computer Science, vol 7983. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39259-7_9
DOI: https://doi.org/10.1007/978-3-642-39259-7_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39258-0
Online ISBN: 978-3-642-39259-7