Enhancing POI Testing Through the Use of Additional Information

  • Sergio Pérez
  • Salvador Tamarit
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11285)


Recently, a new approach to regression testing has been proposed: point of interest (POI) testing. A POI, in this context, is any expression of a program. The approach takes as input a set of relations between POIs of one version of a program and POIs of another version, together with a sequence of entry points, i.e., test cases. Program instrumentation, input test case generation, and several comparison functions are then used to produce a final report indicating whether the alternative version of the program behaves as expected, e.g., whether it produces the same outputs or uses less CPU/memory. In this paper, we present a method to improve POI testing by including additional context information for certain types of POIs. Concretely, we use this method to obtain an enhanced tracing of calls. Additionally, it enables new comparison modes and a categorization of unexpected behaviours.
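The core idea of the abstract can be illustrated with a minimal sketch (hypothetical, not the authors' tool, and in Python rather than Erlang for brevity): both versions of a program are instrumented so that the value observed at a POI is recorded on each evaluation, the same test case is run through both versions, and a comparison function checks whether the resulting traces coincide.

```python
def run_with_trace(step, xs):
    """Run an accumulator function over xs, recording the POI value
    (the intermediate accumulator) after each step -- this plays the
    role of the program instrumentation described in the abstract."""
    trace, acc = [], 0
    for x in xs:
        acc = step(acc, x)
        trace.append(acc)  # POI: value of the accumulator expression
    return acc, trace

# Two versions of the same program: the original and a refactoring
# under test (names and code are illustrative assumptions).
old_step = lambda acc, x: acc + x
new_step = lambda acc, x: x + acc

inputs = [3, 1, 4, 1, 5]  # one generated test case (entry point input)
_, old_trace = run_with_trace(old_step, inputs)
_, new_trace = run_with_trace(new_step, inputs)

# Comparison function: the related POIs should yield identical traces.
if old_trace == new_trace:
    print("behaviour preserved at POI")
else:
    print("discrepancy:", old_trace, "vs", new_trace)
```

A real POI-testing report would additionally classify discrepancies and, with the context information the paper proposes, attach call information to each traced value.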


Code evolution control · Automated regression testing · Call traces · Tracing



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Departament de Sistemes Informàtics i Computació, Universitat Politècnica de València, València, Spain
