Erlang Code Evolution Control

  • David Insa
  • Sergio Pérez
  • Josep Silva
  • Salvador Tamarit
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10855)


In the software lifecycle, a program can evolve several times for different reasons, such as the optimisation of a bottleneck or the refactoring of an obscure function. These code changes often involve several functions or modules, so it can be difficult to know whether the correct behaviour of the previous releases has been preserved in the new release. Most developers rely on a previously defined test suite to check this behaviour preservation. We propose here an alternative approach that automatically obtains a test suite focussed specifically on comparing the old and new versions of the code. Our test case generation is directed by: a sophisticated combination of several already existing tools, such as TypEr, CutEr, and PropEr; the choice of an expression of interest whose behaviour must be preserved; and the recording of the sequences of values to which this expression evaluates. All the presented work has been implemented in an open-source tool that is publicly available on GitHub.
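The core idea can be illustrated with a minimal, self-contained sketch. This is not the paper's actual tool: the module and function names below are hypothetical, and the inputs are supplied by hand rather than generated by the TypEr/CutEr/PropEr pipeline the abstract describes. It records the sequence of values the expression of interest evaluates to under each version of the code, then compares the two traces.

```erlang
%% Hypothetical sketch of behaviour comparison between two code versions:
%% run each version on the same inputs, record the sequence of values the
%% expression of interest evaluates to, and compare the two traces.
-module(evol_check).
-export([compare/3]).

%% Collect, in order, the values produced by Fun for each input.
trace(Fun, Inputs) ->
    [Fun(I) || I <- Inputs].

%% Return ok if both versions produce identical value sequences for the
%% given inputs, or {mismatch, OldTrace, NewTrace} otherwise.
compare(OldFun, NewFun, Inputs) ->
    OldTrace = trace(OldFun, Inputs),
    NewTrace = trace(NewFun, Inputs),
    case OldTrace =:= NewTrace of
        true  -> ok;
        false -> {mismatch, OldTrace, NewTrace}
    end.
```

For example, `evol_check:compare(fun(X) -> X * 2 end, fun(X) -> X + X end, [1, 2, 3])` returns `ok`, since both versions yield the trace `[2, 4, 6]`; a behaviour-changing refactoring would instead produce a `mismatch` tuple exposing the two diverging traces.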


Keywords: Code evolution control · Automated regression testing · Tracing


References

  1. Bozó, I., Tóth, M., Simos, T.E., Psihoyios, G., Tsitouras, C., Anastassi, Z.: Selecting Erlang test cases using impact analysis. In: AIP Conference Proceedings, vol. 1389, pp. 802–805. AIP (2011)
  2. Cronqvist, M.: redbug (2017)
  3. Ericsson AB: dbg (2017)
  4. Ericsson AB: Trace tool builder (2017)
  5. Giantsios, A., Papaspyrou, N., Sagonas, K.: Concolic testing for functional languages. Sci. Comput. Program. 147, 109–134 (2017)
  6. Insa, D., Pérez, S., Silva, J., Tamarit, S.: Erlang code evolution control (use cases). CoRR, abs/1802.03998 (2018)
  7. Jumpertz, E.: Using QuickCheck and semantic analysis to verify correctness of Erlang refactoring transformations. Master's thesis, Radboud University Nijmegen (2010)
  8. Korel, B., Al-Yami, A.M.: Automated regression test generation. ACM SIGSOFT Softw. Eng. Notes 23(2), 143–152 (1998)
  9. Li, H., Thompson, S.: Testing Erlang refactorings with QuickCheck. In: Chitil, O., Horváth, Z., Zsók, V. (eds.) IFL 2007. LNCS, vol. 5083, pp. 19–36. Springer, Heidelberg (2008)
  10. Lindahl, T., Sagonas, K.: TypEr: a type annotator of Erlang code. In: Sagonas, K., Armstrong, J. (eds.) Proceedings of the 2005 ACM SIGPLAN Workshop on Erlang, Tallinn, Estonia, 26–28 September 2005, pp. 17–25. ACM (2005)
  11. Mongiovi, M.: Safira: a tool for evaluating behavior preservation. In: Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion, pp. 213–214. ACM (2011)
  12. Papadakis, M., Sagonas, K.: A PropEr integration of types and function specifications with property-based testing. In: Rikitake, K., Stenman, E. (eds.) Proceedings of the 10th ACM SIGPLAN Workshop on Erlang, Tokyo, Japan, 23 September 2011, pp. 39–50. ACM (2011)
  13. Rajal, J.S., Sharma, S.: A review on various techniques for regression testing and test case prioritization. Int. J. Comput. Appl. 116(16), 8–13 (2015)
  14. Soares, G., Gheyi, R., Massoni, T.: Automated behavioral testing of refactoring engines. IEEE Trans. Softw. Eng. 39(2), 147–162 (2013)
  15. Taylor, R., Hall, M., Bogdanov, K., Derrick, J.: Using behaviour inference to optimise regression test sets. In: Nielsen, B., Weise, C. (eds.) ICTSS 2012. LNCS, vol. 7641, pp. 184–199. Springer, Heidelberg (2012)
  16. Till, A.: erlyberly (2017)
  17. Bozó, I., Tóth, M., Horváth, Z.: Reduction of regression tests for Erlang based on impact analysis (2013)
  18. Tóth, M., et al.: Impact analysis of Erlang programs using behaviour dependency graphs. In: Horváth, Z., Plasmeijer, R., Zsók, V. (eds.) CEFP 2009. LNCS, vol. 6299, pp. 372–390. Springer, Heidelberg (2010)
  19. Yoo, S., Harman, M.: Regression testing minimization, selection and prioritization: a survey. Softw. Test. Verif. Reliab. 22(2), 67–120 (2012)
  20. Yu, K., Lin, M., Chen, J., Zhang, X.: Practical isolation of failure-inducing changes for debugging regression faults. In: Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering, pp. 20–29. ACM (2012)
  21. Zhang, L., Zhang, L., Khurshid, S.: Injecting mechanical faults to localize developer faults for evolving software. In: ACM SIGPLAN Notices, vol. 48, pp. 765–784. ACM (2013)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • David Insa (1)
  • Sergio Pérez (1)
  • Josep Silva (1)
  • Salvador Tamarit (1)

  1. Universitat Politècnica de València, Valencia, Spain
