Exploiting Tournament Selection for Efficient Parallel Genetic Programming

  • Darren M. Chitty
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 840)

Abstract

Genetic Programming (GP) is a computationally intensive technique which is naturally parallel in nature. Consequently, many attempts have been made to improve its run-time by exploiting highly parallel hardware such as GPUs. A second methodology for improving the speed of GP is through efficiency techniques such as subtree caching, but achieving both parallel performance and efficiency is a difficult task. This paper demonstrates an efficiency saving for GP, compatible with the harnessing of parallel CPU hardware, obtained by exploiting tournament selection. Significant efficiency savings are demonstrated whilst retaining the capability of a high-performance parallel implementation of GP. Indeed, a 74% improvement in the speed of GP is achieved, with a peak rate of 96 billion GPop/s for classification-type problems.
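The abstract does not detail the mechanism, but one plausible reading of "exploiting tournament selection" for efficiency is that, because tournaments sample the population randomly, any individual never drawn into a tournament need not have its fitness evaluated at all. The sketch below illustrates that idea under this assumption; all names (`select_parents`, the parameters) are illustrative, not taken from the paper.

```python
import random

def select_parents(pop_size, num_parents, tournament_size, rng):
    """Draw all tournaments up front and report which individuals were sampled.

    Fitness evaluation can then be restricted to the sampled individuals;
    the rest of the population is never inspected and can be skipped.
    """
    tournaments = [rng.sample(range(pop_size), tournament_size)
                   for _ in range(num_parents)]
    sampled = sorted({i for t in tournaments for i in t})
    return tournaments, sampled

rng = random.Random(42)
tournaments, sampled = select_parents(pop_size=100, num_parents=100,
                                      tournament_size=2, rng=rng)
# Only the `sampled` individuals require fitness evaluation this generation.
print(f"individuals needing evaluation: {len(sampled)} of 100")
```

With small tournament sizes a noticeable fraction of the population is never sampled (for size-2 tournaments, roughly e^-2, about 13%, in expectation), so drawing tournaments before evaluation converts that fraction directly into skipped fitness computations.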

Keywords

Genetic Programming · HPC · Computational Efficiency

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, University of Bristol, Bristol, UK
