Performance Models for Evolutionary Program Induction Algorithms Based on Problem Difficulty Indicators

  • Mario Graff
  • Riccardo Poli
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6621)


Most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. In this paper, two models of evolutionary program-induction algorithms (EPAs) are proposed which overcome this limitation. We test our approach on two important classes of problems — symbolic regression and Boolean function induction — and a variety of EPAs, including different versions of genetic programming, gene expression programming, stochastic iterated hill climbing in program space, and one version of Cartesian genetic programming. We compare the proposed models against a practical model of EPAs we previously developed and find that in most cases the new models are simpler and produce better predictions. A great deal can also be learnt about an EPA via a simple inspection of our new models. For example, it is possible to infer which characteristics make a problem difficult or easy for the EPA.
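The models described in the abstract predict an EPA's performance on a problem from measurable difficulty indicators. A minimal sketch of this general idea, assuming a linear-in-features model fit by least squares (the indicator names and all data below are illustrative, not taken from the paper):

```python
import numpy as np

def fit_performance_model(D, p):
    """Fit coefficients a so that p ≈ a0 + D @ a[1:], by least squares.

    D : (n_problems, n_indicators) matrix of difficulty indicators
    p : (n_problems,) observed performance (e.g. success rate)
    """
    X = np.hstack([np.ones((D.shape[0], 1)), D])  # prepend intercept column
    a, *_ = np.linalg.lstsq(X, p, rcond=None)
    return a

def predict_performance(a, D):
    """Predict performance for new problems from their indicators."""
    X = np.hstack([np.ones((D.shape[0], 1)), D])
    return X @ a

# Toy training set: 5 problems described by 2 hypothetical indicators
# (e.g. a fitness-distance-correlation-like measure and a size measure);
# the "observed" performances are synthetic, generated from known weights.
rng = np.random.default_rng(0)
D_train = rng.uniform(size=(5, 2))
p_train = 0.8 - 0.5 * D_train[:, 0] + 0.1 * D_train[:, 1]

a = fit_performance_model(D_train, p_train)
print(predict_performance(a, D_train))
```

Inspecting the fitted coefficients is what makes such models interpretable: the sign and magnitude of each weight indicate whether a given indicator makes problems harder or easier for the algorithm.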


Keywords: Evolutionary program-induction algorithms · Genetic programming · Performance prediction · Hardness measures





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Mario Graff, Division de Estudios de Posgrado, Facultad de Ingenieria Electrica, Universidad Michoacana de San Nicolas de Hidalgo, Mexico
  • Riccardo Poli, School of Computer Science and Electronic Engineering, University of Essex, UK
