Abstract
In search based software engineering, evolutionary testing is a popular technique in which test cases for a given piece of code are generated automatically using evolutionary algorithms. The techniques in this domain are hard to compare because no standard testbed exists. In this paper we propose an automatic program generator to address this problem. The generator creates Java programs with the desired features and, in addition, guarantees that all the branches in the generated programs are reachable, i.e. 100% branch coverage is always possible. Thanks to this property, the research community can test and enhance their algorithms until total coverage is achieved. We illustrate the potential of the program generator with an experimental study on a benchmark of 800 generated programs, highlighting the correlations between static measures computed on the programs and the code coverage obtained by an evolutionary test case generator. In particular, we compare three techniques as the search engine of the test case generator: an Evolution Strategy, a Genetic Algorithm and a Random Search.
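To make the idea of search-based test case generation concrete, the following is a minimal illustrative sketch (not the authors' actual tool): a Random Search, the simplest of the three compared techniques, that tries to cover one branch of a hypothetical program under test by minimizing the classic branch-distance fitness. The class name, the predicate `x * x == 100`, and the input range are all assumptions made for this example.

```java
import java.util.Random;

// Sketch of fitness-guided test data generation. The "program under test"
// is hypothetical; the target branch is taken when x * x == 100.
public class BranchDistanceDemo {

    // Hypothetical branch predicate of the program under test.
    static boolean targetBranchTaken(int x) {
        return x * x == 100;
    }

    // Branch distance for the predicate "x*x == 100":
    // zero if and only if the target branch is covered.
    static int branchDistance(int x) {
        return Math.abs(x * x - 100);
    }

    // Random Search: sample candidate inputs in [-100, 100], keep the
    // best one seen, and stop early once the distance reaches zero.
    static int search(long seed, int maxIterations) {
        Random rng = new Random(seed);
        int best = rng.nextInt(201) - 100;
        for (int i = 0; i < maxIterations && branchDistance(best) > 0; i++) {
            int candidate = rng.nextInt(201) - 100;
            if (branchDistance(candidate) < branchDistance(best)) {
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int input = search(42L, 100_000);
        System.out.println("input=" + input
                + " covered=" + targetBranchTaken(input));
    }
}
```

An Evolution Strategy or Genetic Algorithm would replace the blind sampling in `search` with mutation and recombination of the best candidates, but the branch-distance fitness function stays the same.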
Copyright information
© 2011 IFIP International Federation for Information Processing
Cite this paper
Ferrer, J., Chicano, F., Alba, E. (2011). Benchmark Generator for Software Testers. In: Iliadis, L., Maglogiannis, I., Papadopoulos, H. (eds) Artificial Intelligence Applications and Innovations. EANN AIAI 2011 2011. IFIP Advances in Information and Communication Technology, vol 364. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23960-1_45
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-23959-5
Online ISBN: 978-3-642-23960-1