Detection of Security Vulnerabilities in C Code Using Runtime Verification: An Experience Report

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 10889)

Abstract

Despite significant progress made by runtime verification tools in recent years, memory errors remain one of the primary threats to software security. The present work aims to provide an objective, up-to-date experience study on the capacity of modern online runtime verification tools to automatically detect security flaws in C programs. The reported experiments are performed using three advanced runtime verification tools (E-ACSL, Google Sanitizer and RV-Match) over 700 test cases belonging to the SARD-100 test suite of the SAMATE project and the Toyota ITC Benchmark, a publicly available benchmarking suite developed at the Toyota InfoTechnology Center. SARD-100 specifically targets security flaws identified by the Common Weakness Enumeration (CWE) taxonomy, while the Toyota ITC Benchmark addresses more general memory defects, as well as numerical and concurrency issues. We compare tools based on different approaches (a formal semantics-based tool, a formal specification verifier and a memory debugger) and evaluate their cumulative detection capacity.

The results of the experiments indicate that the selected tools cumulatively detected 84% of all seeded defects. Although detection rates are higher for several categories of errors, we observed that applying several tools together is beneficial for uncovering certain issues. For instance, in detecting concurrency issues of the Toyota ITC Benchmark, the highest per-tool result was 73%, whereas the cumulative detection ratio of all three tools used together was 93%.

Notes

  1. Online monitoring dynamically analyzes a program run on the fly, unlike offline monitoring, which relies on previously recorded execution traces.

  2. See https://goo.gl/S4NF5m.

  3. More precisely, in some cases memory corruption errors caused by such violations are detectable by binary analysis tools, but only after they are disconnected from the source code error that caused them.

  4. At the time of this writing (March 2018), E-ACSL-0.9 is not yet publicly available and was obtained from the developers of the tool. E-ACSL-0.9 is scheduled for release in May 2018.

Acknowledgments

The authors thank the Frama-C team for providing the tools and support. Many thanks to the anonymous referees for their helpful comments.

Author information

Correspondence to Nikolai Kosmatov.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Vorobyov, K., Kosmatov, N., Signoles, J. (2018). Detection of Security Vulnerabilities in C Code Using Runtime Verification: An Experience Report. In: Dubois, C., Wolff, B. (eds) Tests and Proofs. TAP 2018. Lecture Notes in Computer Science, vol 10889. Springer, Cham. https://doi.org/10.1007/978-3-319-92994-1_8

  • DOI: https://doi.org/10.1007/978-3-319-92994-1_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-92993-4

  • Online ISBN: 978-3-319-92994-1

  • eBook Packages: Computer Science (R0)
