
Source Code Prioritization Using Forward Slicing for Exposing Critical Elements in a Program

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Even after thorough testing, a few bugs remain in any program of moderate complexity. These residual bugs are distributed randomly throughout the code. We have observed that bugs in some parts of a program cause more frequent and more severe failures than bugs in other parts. It is therefore necessary to decide what to test more and what to test less within the testing budget. The methods and classes of an object-oriented program can be prioritized according to their potential to cause failures. To this end, we propose a program metric called the influence metric, which captures the influence of a program element on the source code. First, we represent the source code as an intermediate graph called the extended system dependence graph. Then, forward slicing is applied to a node of the graph to obtain the influence of that node. The influence metric for a method m is the number of statements in the program that directly or indirectly use the result produced by m. We compute the influence metric for a class c from the influence metrics of all its methods. Because the influence metric is computed statically, it does not capture the expected behavior of a class at run time. It is well known that faults in frequently executed parts of a program tend to cause more failures. Therefore, we use the operational profile to find the average execution time of a class in a system, and classes are then prioritized based on both the influence metric and the average execution time. The priority of an element indicates its potential to cause failures. Once all program elements have been prioritized, the testing effort can be apportioned so that the elements likely to cause frequent failures are tested thoroughly. We have conducted experiments on two well-known case studies, a Library Management System and a Trading Automation System, and successfully identified critical elements in the source code of each case study. We have also conducted experiments comparing our scheme with a related scheme. The experimental studies show that our approach is more accurate than the existing ones in exposing critical elements at the implementation level.
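The pipeline described in the abstract (forward slicing over a dependence graph, a per-method influence count, class-level aggregation, and weighting by average execution time) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper operates on an extended system dependence graph, whereas the graph, class map, and execution times below are hypothetical toy data in a simplified adjacency-list form.

```python
from collections import deque

def forward_slice(graph, start):
    """Breadth-first reachability: all nodes that directly or
    indirectly depend on `start` (including `start` itself)."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

def influence(graph, method_node):
    # Number of statements that use the method's result,
    # i.e. the forward slice minus the method node itself.
    return len(forward_slice(graph, method_node)) - 1

# Toy dependence graph (hypothetical): an edge u -> v means
# statement v uses a value produced at u.
graph = {
    "m1": ["s1", "s2"],
    "s1": ["s3"],
    "s2": [],
    "s3": [],
    "m2": ["s2"],
}

# Class-level influence: aggregate the influence of all methods
# belonging to a class (here, class C owns methods m1 and m2).
class_methods = {"C": ["m1", "m2"]}
class_influence = {
    c: sum(influence(graph, m) for m in methods)
    for c, methods in class_methods.items()
}

# Prioritize classes by influence weighted by average execution
# time taken from an (assumed) operational profile.
avg_exec_time = {"C": 0.4}
priority = {c: class_influence[c] * avg_exec_time[c] for c in class_influence}
print(priority)
```

Under these toy inputs the slice from `m1` reaches `s1`, `s2`, and `s3`, so `m1` has influence 3 and `m2` has influence 1, giving class `C` an aggregate influence of 4 before weighting. The actual combination rule for the influence metric and execution time is defined in the paper; the product used here is only one plausible weighting.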



Author information

Correspondence to Durga Prasad Mohapatra.

Additional information

This work was supported by grants from the Department of Science and Technology, Government of India, under a SERC project.


Cite this article

Ray, M., lal Kumawat, K. & Mohapatra, D.P. Source Code Prioritization Using Forward Slicing for Exposing Critical Elements in a Program. J. Comput. Sci. Technol. 26, 314–327 (2011). https://doi.org/10.1007/s11390-011-9438-1
