Using Daikon to Prioritize and Group Unit Bugs

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 8348)

Abstract

Unit testing and verification constitute an important step in the validation life cycle of large, complex, multi-component software code bases. Many unit validation methods suffer from false failure alarms when they analyse a component in isolation and look for errors: some of the reported unit failures turn out to be infeasible, i.e. the valuations of the component input parameters that trigger the failure, though feasible on the unit module in isolation, cannot occur in practice in the integrated code in which the unit-under-test is instantiated. In this paper, we consider this problem in the context of a multi-function software code base, with a set of unit-level failures reported on a specific function. We present an automated two-stage strategy that filters out false alarms and classifies and prioritizes the remaining failures. Early experiments show promising results.
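To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of how Daikon-style invariants mined from integrated executions could filter unit-level failure reports. The simple per-parameter range invariants and all function names here are illustrative assumptions; Daikon itself infers a much richer invariant grammar.

```python
# Hypothetical sketch: mine invariants over the unit's inputs as observed
# in integrated runs, then flag unit failures whose triggering inputs
# violate those invariants as likely-infeasible (false alarms).

def mine_invariants(observed_calls):
    # Toy stand-in for Daikon: infer a [min, max] range invariant per
    # parameter from values seen at the unit's entry in integrated runs.
    invariants = {}
    for params in observed_calls:
        for name, value in params.items():
            lo, hi = invariants.get(name, (value, value))
            invariants[name] = (min(lo, value), max(hi, value))
    return invariants

def classify_failures(failures, invariants):
    # A failure whose input valuation violates a mined invariant is
    # likely infeasible in the integrated context; the rest are kept
    # and can be prioritized for debugging.
    feasible, likely_false = [], []
    for failure in failures:
        ok = all(lo <= failure["inputs"].get(name, lo) <= hi
                 for name, (lo, hi) in invariants.items())
        (feasible if ok else likely_false).append(failure)
    return feasible, likely_false

# Integrated runs only ever call the unit with n in [1, 100]:
calls = [{"n": 1}, {"n": 50}, {"n": 100}]
invs = mine_invariants(calls)

failures = [
    {"id": "F1", "inputs": {"n": -5}},   # violates n >= 1: likely false alarm
    {"id": "F2", "inputs": {"n": 42}},   # consistent with invariants: keep
]
real, false_alarms = classify_failures(failures, invs)
print([f["id"] for f in real])          # ['F2']
print([f["id"] for f in false_alarms])  # ['F1']
```

The design point this illustrates: feasibility is judged against behaviour observed in the integrated code, so the filter can only rule failures out, never prove a retained failure reachable.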



Acknowledgement

This work was partially supported by The Open Project of Shanghai Key Laboratory of Trustworthy Computing (No. 07dz22304201201).

Author information

Correspondence to Ansuman Banerjee.


Copyright information

© 2014 Springer International Publishing Switzerland

Cite this paper

Jain, N., Dutta, S., Banerjee, A., Ghosh, A.K., Xu, L., Zhu, H. (2014). Using Daikon to Prioritize and Group Unit Bugs. In: Fiadeiro, J., Liu, Z., Xue, J. (eds) Formal Aspects of Component Software. FACS 2013. Lecture Notes in Computer Science, vol 8348. Springer, Cham. https://doi.org/10.1007/978-3-319-07602-7_14

  • DOI: https://doi.org/10.1007/978-3-319-07602-7_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-07601-0

  • Online ISBN: 978-3-319-07602-7
