
Abstract Semantic Differencing for Numerical Programs

  • Nimrod Partush
  • Eran Yahav
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7935)

Abstract

We address the problem of computing semantic differences between a program and a patched version of that program. Our goal is to obtain a precise characterization of the difference between the two versions, or to establish their equivalence when no difference exists.

We focus on computing semantic differences in numerical programs where the values of variables have no a priori bounds, and use abstract interpretation to compute an over-approximation of program differences. Computing differences and establishing equivalence under abstraction requires abstracting relationships between variables in the two programs. To that end, we first construct a correlating program in which these relationships can be tracked, and then use a correlating abstract domain to compute a sound approximation of these relationships. To better establish equivalence between correlated variables and to precisely capture differences, our domain must represent non-convex information using a partially-disjunctive abstract domain. To balance the precision and cost of this representation, our domain over-approximates numerical information while preserving equivalence between correlated variables by dynamically partitioning the disjunctive state according to equivalence criteria.
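To make the construction concrete, below is a minimal sketch in C using a hypothetical pair of functions (not taken from the paper or its benchmarks): an original function, a patched version that adds an upper clamp, and a correlating program in which the two versions' variables are renamed and their statements interleaved. DIZY would analyze such a correlating program with a correlating abstract domain to track the relationship between x and x_p soundly for all inputs; the concrete execution in main only illustrates the idea on a few sample inputs.

    /* Hypothetical example, not taken from the paper: an original
       function, a patched version, and a hand-written correlating
       program that runs both versions over renamed copies of the
       variables so the relationship x == x_p can be tracked. */
    #include <stdio.h>

    /* Original version: clamp negative inputs to zero. */
    int f(int x) {
        if (x < 0)
            x = 0;
        return x;
    }

    /* Patched version: additionally clamp values above 100. */
    int f_patched(int x) {
        if (x < 0)
            x = 0;
        if (x > 100)   /* the patch */
            x = 100;
        return x;
    }

    /* Correlating program: the two versions' variables are renamed
       (x vs. x_p) and their statements interleaved, so that facts such
       as x == x_p can be stated at every program point.  DIZY analyzes
       such a program with a correlating abstract domain; here we merely
       execute it concretely to illustrate the construction. */
    int correlating_f(int in, int *differ) {
        int x   = in;   /* variable of the original version */
        int x_p = in;   /* variable of the patched version   */

        if (x < 0)    x = 0;
        if (x_p < 0)  x_p = 0;      /* x == x_p still holds here */

        if (x_p > 100) x_p = 100;   /* statement present only in the patch */

        *differ = (x != x_p);       /* difference observed on this input */
        return x_p;
    }

    int main(void) {
        int inputs[] = { -5, 42, 250 };
        for (int i = 0; i < (int)(sizeof inputs / sizeof inputs[0]); i++) {
            int differ;
            correlating_f(inputs[i], &differ);
            printf("input %4d: f = %3d, f_patched = %3d, versions %s\n",
                   inputs[i], f(inputs[i]), f_patched(inputs[i]),
                   differ ? "DIFFER" : "agree");
        }
        return 0;
    }

An abstract interpretation of the correlating program can summarize the same information for all inputs at once: x == x_p whenever the clamped value is at most 100, and x_p == 100 < x otherwise. This is a non-convex, disjunctive fact of the kind the partially-disjunctive domain described above is designed to retain.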

We have implemented our approach in a tool called DIZY, and applied it to a number of real-world examples, including programs from the GNU core utilities, Mozilla Firefox, and the Linux kernel. Our evaluation shows that DIZY often manages to establish equivalence, produces a precise approximation of semantic differences when a difference exists, and reports only a few false differences.

Keywords

Equivalence Check · Abstract Interpretation · Symbolic Execution · Concrete State · Abstract Domain

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Nimrod Partush (1)
  • Eran Yahav (1)

  1. Technion, Israel
