A Method for Scalable and Precise Bug Finding Using Program Analysis and Model Checking

  • Manuel Valdiviezo
  • Cristina Cifuentes
  • Padmanabhan Krishnan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8858)


This paper presents model-based analysis, a technique for defect detection in large code bases that combines ideas and techniques from program analysis and model checking. Model checking, while very precise, cannot handle large code bases that run to millions of lines of code. We therefore derive from the large code base a number of small abstract programs, each of which can be model checked. To create these abstract programs, we first identify potential defects quickly via static analysis. We then create a program slice containing one potential defect. Each slice is abstracted using a combination of automatic data and predicate abstraction, and the resulting model is model checked to verify the presence or absence of the defect. Applying model checking to many small models instead of one single large model makes our approach scalable without compromising precision.
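To make the pipeline concrete, here is an illustrative sketch (not taken from the paper) of the kind of defect it targets: a quick static pass flags the allocation, slicing keeps only the statements that affect the buffer, and the sliced program is then abstracted and model checked to confirm the leak lies on a feasible path. The function name and parameters are hypothetical.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative example of a potential defect: the buffer leaks on
 * the length-check error path. A slice of this function around the
 * malloc() call is what the abstraction step would operate on. */
char *copy_name(const char *src, size_t max) {
    char *buf = malloc(max);
    if (buf == NULL)
        return NULL;
    if (strlen(src) >= max)
        return NULL;        /* defect: buf leaks on this early return */
    strcpy(buf, src);
    return buf;             /* caller takes ownership of buf */
}
```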

We have applied our analysis to detect memory leaks, implementing it using components of the Parfait static code analysis tool and the SPIN model checker. Results show that our approach scales to large code bases with good precision: the analysis runs over 1 million lines of non-commented C++ OpenJDK™ source code in 1 hour and 19 minutes, with a precision of 84.5%. Further, our analysis found 62.2% more defects than the dataflow approach used by Oracle Parfait's memory leak checker.
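The verification step can be sketched as follows, under the assumption that a slice has been reduced by data abstraction to a single boolean ("allocated") and one nondeterministic branch. The paper model checks Promela models with SPIN; plain C stands in here for readability, with the exhaustive loop playing the role of the model checker's state exploration.

```c
#include <stdbool.h>

/* One abstract path of the boolean program:
 * branch 0 = the error path, branch 1 = the normal path.
 * Returns true if the path ends with memory still allocated. */
static bool path_leaks(int branch) {
    bool allocated = true;            /* abstracts p = malloc(...) */
    if (branch == 0)
        return allocated;             /* error path: free() skipped */
    allocated = false;                /* abstracts free(p) */
    return allocated;
}

/* Exhaustively explore both abstract paths, as a model checker would,
 * and report whether any path reaches the leak state. */
bool leak_reachable(void) {
    for (int branch = 0; branch < 2; branch++)
        if (path_leaks(branch))
            return true;
    return false;
}
```

Because the abstract model has only a handful of states, this check is instantaneous, which is what lets the approach run one small model per potential defect rather than one model of the whole code base.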







Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Manuel Valdiviezo¹
  • Cristina Cifuentes¹
  • Padmanabhan Krishnan¹
  1. Oracle Labs, Brisbane, Australia
