
Liveness-Driven Random Program Generation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10855)

Abstract

Randomly generated programs are popular for testing compilers and program analysis tools; random testing has found hundreds of bugs in real-world C compilers. However, existing random program generators may generate large amounts of dead code (computations whose results are never used). This leaves relatively little code to exercise a target compiler’s more complex optimizations.

To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code.
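To make the idea concrete, the bottom-up construction can be sketched as follows. This is a minimal, hypothetical Python sketch (the actual tool is a Frama-C plugin, and the function and variable names here are illustrative, not from the paper): statements are generated from the end of the program toward the beginning while a live-variable set is maintained by the standard backward liveness transfer function, and every assignment targets a currently live variable, so no generated statement can be dead.

```python
import random

def generate_program(num_stmts, variables, seed=None):
    """Generate assignment statements bottom-up so that every assignment
    targets a variable that is live (i.e., read later or returned as the
    program's result), guaranteeing the absence of dead code."""
    rng = random.Random(seed)
    result_var = variables[0]
    live = {result_var}      # at program exit, only the result is live
    stmts = []
    for _ in range(num_stmts):
        # choosing the target from the live set is what rules out dead code
        target = rng.choice(sorted(live))
        operands = rng.sample(variables, 2)
        stmts.append(f"{target} = {operands[0]} + {operands[1]};")
        # backward liveness transfer function: kill the defined variable,
        # then gen the variables read by the statement
        live = (live - {target}) | set(operands)
    stmts.reverse()          # built bottom-up; emit in forward order
    return stmts, live       # 'live' = inputs that still need initial values

stmts, inputs = generate_program(3, ["r", "a", "b"], seed=42)
```

In this sketch every generated statement is guaranteed to be live by construction, rather than by filtering out dead code after the fact; the variables left in the live set at the top of the program are exactly those that must be treated as inputs.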

The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.

Keywords

Code generation · Random testing · Data-flow analysis · Program optimization

Acknowledgments

The author would like to thank the anonymous reviewers, John Regehr, and Gabriel Scherer for insightful comments on earlier versions of this paper. This research was partially supported by ITEA 3 project no. 14014, ASSUME.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

Inria, Paris, France
