Towards Vulnerability Discovery Using Staged Program Analysis

  • Bhargava Shastry
  • Fabian Yamaguchi
  • Konrad Rieck
  • Jean-Pierre Seifert
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9721)

Abstract

Eliminating vulnerabilities from low-level code is vital for securing software. Static analysis is a promising approach for discovering vulnerabilities, since it can provide developers early feedback on the code they write. However, it presents multiple challenges, not the least of which is understanding what makes a bug exploitable and conveying this information to the developer. In this paper, we present the design and implementation of a practical vulnerability assessment framework called Mélange. Mélange performs data and control flow analysis to diagnose potential security bugs, and outputs well-formatted bug reports that help developers understand and fix them. Based on the intuition that real-world vulnerabilities manifest themselves across multiple parts of a program, Mélange performs both local and global analyses in stages. To scale up to large programs, global analysis is demand-driven. Our prototype detects multiple vulnerability classes in C and C++ code, including type confusion and garbage memory reads. We have evaluated Mélange extensively. Our case studies show that Mélange scales up to large codebases such as Chromium, is easy to use, and, most importantly, is capable of discovering vulnerabilities in real-world code. Our findings indicate that static analysis is a viable reinforcement to the software testing tool set.

Keywords

Program analysis · Vulnerability assessment · LLVM

Notes

Acknowledgments

This work was supported by the following grants: 317888 (project NEMESYS), 10043385 (project Enzevalos), and RI 2468/1-1 (project DEVIL). The authors would like to thank colleagues at SecT and Daniel Defreez for valuable feedback on a draft of this paper, and Janis Danisevskis for discussions on the C++ standard and occasional code reviews.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Bhargava Shastry (1)
  • Fabian Yamaguchi (2)
  • Konrad Rieck (2)
  • Jean-Pierre Seifert (1)
  1. Security in Telecommunications, TU Berlin, Berlin, Germany
  2. Institute of System Security, TU Braunschweig, Braunschweig, Germany