Towards the Measuring Criteria of IT Project Success in University Context

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 746)

Abstract

Commercial projects are carried out according to the rules of a chosen software development approach, whereas academic projects do not always adhere to any formal process. So far, little attention has been paid to measuring project success in the academic context. By investigating the assessment criteria applied in the commercial context, a set of metrics and measures was determined and adapted to provide a structured evaluation approach for projects developed in an academic setting. Professionalizing the teaching and assessment process is an attempt to close the gap between the workforce's expectations of new graduates and the outcomes of their university education.

Keywords

Information technology projects · Project quality · Project efficiency · Measuring criteria · Academic context


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Lodz University of Technology, Lodz, Poland