Accurately Choosing Execution Runs for Software Fault Localization

  • Liang Guo
  • Abhik Roychoudhury
  • Tao Wang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3923)

Abstract

Software fault localization involves locating the exact cause of error for a “failing” execution run – a run which exhibits an unexpected behavior. Given such a failing run, fault localization often proceeds by comparing the failing run with a “successful” run, that is, a run which does not exhibit the unexpected behavior. One important issue here is the choice of the successful run for such a comparison. In this paper, we propose a control flow based difference metric for this purpose. The difference metric takes into account the sequence of statement instances (and not just the set of these instances) executed in the two runs, by locating branch instances with similar contexts but different outcomes in the failing and the successful runs. Given a failing run π_f and a pool of successful runs S, we choose the successful run π_s from S whose execution trace is closest to π_f in terms of the difference metric. A bug report is then generated by returning the difference between π_f and π_s. We conduct detailed experiments to compare our approach with previously proposed difference metrics. In particular, we evaluate our approach in terms of (a) effectiveness of bug report for locating the bug, (b) size of bug report and (c) size of successful run pool required to make a decent choice of successful run.
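The selection scheme described in the abstract can be sketched in a simplified form. The sketch below is not the paper's actual metric: it approximates a branch instance's "context" by the few trace events preceding it, whereas the paper aligns branch instances more precisely. All names (`alignments`, `difference`, `choose_successful_run`) and the event representation are hypothetical.

```python
# Hypothetical sketch of a context-aware branch-difference metric.
# A run is modeled as a sequence of (branch_id, outcome) events; the paper's
# notion of "context" is approximated here by the k preceding events.

def alignments(run, k=2):
    """Map each (context, branch_id) pair to the outcomes observed for it."""
    table = {}
    for i, (branch, outcome) in enumerate(run):
        context = tuple(run[max(0, i - k):i])  # crude stand-in for context
        table.setdefault((context, branch), []).append(outcome)
    return table

def difference(failing, successful, k=2):
    """Count branch instances with matching context but differing outcome."""
    fail_tab = alignments(failing, k)
    succ_tab = alignments(successful, k)
    diff = 0
    for key, outcomes in fail_tab.items():
        if key in succ_tab:
            # Compare aligned branch instances pairwise.
            for a, b in zip(outcomes, succ_tab[key]):
                if a != b:
                    diff += 1
    return diff

def choose_successful_run(failing, pool, k=2):
    """Pick the successful run from the pool closest to the failing run."""
    return min(pool, key=lambda run: difference(failing, run, k))
```

The differing aligned branch instances found by `difference` would also be the natural raw material for the bug report, since they mark where the two runs' control flow diverges.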

Keywords

Programming tools · Debugging

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Liang Guo¹
  • Abhik Roychoudhury¹
  • Tao Wang¹
  1. School of Computing, National University of Singapore, Singapore