
Enhancing Competitive Island Cooperative Neuro-Evolution Through Backpropagation for Pattern Classification

  • Conference paper
  • First Online:
Neural Information Processing (ICONIP 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9489)

Included in the following conference series: International Conference on Neural Information Processing (ICONIP)

Abstract

Cooperative coevolution is a promising method for training neural networks; applied in this way it is known as cooperative neuro-evolution. Cooperative neuro-evolution has been used for pattern classification, time series prediction and global optimisation problems. Previously, competitive island-based cooperative coevolution was proposed, in which different instances of problem decomposition methods compete against one another. Although neuro-evolutionary methods are global search methods, they are limited by long training times. The backpropagation algorithm employs gradient descent, which provides the faster convergence that neuro-evolution needs, but it suffers from premature convergence; combining the two approaches can help eliminate the weaknesses of both. In this paper, we propose a competitive island cooperative neuro-evolutionary method that takes advantage of the strengths of gradient descent and neuro-evolution. We evaluate the proposed algorithm with feedforward neural networks on benchmark pattern classification problems. The results show improved performance when compared to related methods.
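
The combination described above can be illustrated with a minimal sketch. The following Python example is only an illustrative reading of the general idea, not the authors' implementation: the tiny single-hidden-layer network, the neuron-level versus whole-network decompositions, the (1+1)-style mutation, the greedy composition of subcomponents, the migration scheme and all hyper-parameters are assumptions chosen to keep the example short.

```python
# Minimal, illustrative sketch of competitive-island cooperative neuro-evolution
# with periodic backpropagation refinement.  This is NOT the authors' algorithm:
# the network size, decompositions, (1+1)-style mutation, greedy composition and
# all hyper-parameters below are assumptions chosen only to keep the example short.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 3, 1                       # assumed tiny network
DIM = N_HID * (N_IN + 1) + N_OUT * (N_HID + 1)     # total number of weights

def forward(w, X):
    """Single-hidden-layer net: tanh hidden units, sigmoid output."""
    W1 = w[:N_HID * (N_IN + 1)].reshape(N_HID, N_IN + 1)
    W2 = w[N_HID * (N_IN + 1):].reshape(N_OUT, N_HID + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    H = np.tanh(Xb @ W1.T)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    out = 1.0 / (1.0 + np.exp(-(Hb @ W2.T)))
    return out, Xb, H, Hb, W2

def mse(w, X, y):
    return float(np.mean((forward(w, X)[0] - y) ** 2))

def backprop_step(w, X, y, lr=0.1):
    """One gradient-descent step on the mean-squared error."""
    out, Xb, H, Hb, W2 = forward(w, X)
    d_out = (out - y) * out * (1.0 - out) * 2.0 / len(X)    # dE/d(pre-sigmoid)
    gW2 = d_out.T @ Hb
    d_hid = (d_out @ W2)[:, :N_HID] * (1.0 - H ** 2)        # drop bias column
    gW1 = d_hid.T @ Xb
    return w - lr * np.concatenate([gW1.ravel(), gW2.ravel()])

def make_island(groups, pop_size=10):
    """One island: one subpopulation of weight sub-vectors per decomposition group."""
    return {"groups": groups,
            "pops": [rng.normal(scale=0.5, size=(pop_size, len(g))) for g in groups]}

def best_composite(island, X, y):
    """Greedily compose the best member of each subpopulation into one vector."""
    w = np.zeros(DIM)
    for g, pop in zip(island["groups"], island["pops"]):
        w[g] = pop[0]                                        # provisional context
    for g, pop in zip(island["groups"], island["pops"]):
        errs = []
        for member in pop:
            w[g] = member
            errs.append(mse(w, X, y))
        w[g] = pop[int(np.argmin(errs))]
    return w

def evolve_island(island, X, y, sigma=0.1):
    """Round-robin (1+1)-style mutation of every subpopulation member."""
    context = best_composite(island, X, y)
    for g, pop in zip(island["groups"], island["pops"]):
        for i, member in enumerate(pop):
            trial = member + rng.normal(scale=sigma, size=len(g))
            w_old, w_new = context.copy(), context.copy()
            w_old[g], w_new[g] = member, trial
            if mse(w_new, X, y) < mse(w_old, X, y):
                pop[i] = trial
    return best_composite(island, X, y)

# Toy data (assumed): a simple binary problem on four inputs.
X = rng.normal(size=(80, N_IN))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

idx = np.arange(DIM)
island_a = make_island(np.array_split(idx, N_HID + N_OUT))   # neuron-level split
island_b = make_island([idx])                                # whole-network split

for cycle in range(20):
    best_a = evolve_island(island_a, X, y)
    best_b = evolve_island(island_b, X, y)
    for _ in range(5):                        # backpropagation refinement phase
        best_a = backprop_step(best_a, X, y)
        best_b = backprop_step(best_b, X, y)
    for w_best, isl in ((best_a, island_a), (best_b, island_b)):
        for g, pop in zip(isl["groups"], isl["pops"]):
            pop[0] = w_best[g]                # write refined solution back
    # Competition: the fitter island injects its solution into the other island.
    winner, loser = ((best_a, island_b) if mse(best_a, X, y) < mse(best_b, X, y)
                     else (best_b, island_a))
    for g, pop in zip(loser["groups"], loser["pops"]):
        pop[1] = winner[g]
    print(cycle, round(mse(best_a, X, y), 4), round(mse(best_b, X, y), 4))
```

The intended division of labour in this sketch is that the islands provide global exploration under competing problem decompositions, backpropagation provides fast local refinement of each island's composite solution, and migration of the fitter island's solution implements the competition.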


Notes

  1. Keys for Tables 2 and 3:

     \(\mathbf{\bar{x}}_{ev}\) = Mean Fitness Evaluations, \(\mathbf{\bar{x}}_{er}\) = Mean Generalisation Performance, (H) = Number of Hidden Neurons, and (sr) = Success Rate


Author information


Corresponding author

Correspondence to Rohitash Chandra.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Wong, G., Chandra, R. (2015). Enhancing Competitive Island Cooperative Neuro-Evolution Through Backpropagation for Pattern Classification. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds.) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9489. Springer, Cham. https://doi.org/10.1007/978-3-319-26532-2_32


  • DOI: https://doi.org/10.1007/978-3-319-26532-2_32

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26531-5

  • Online ISBN: 978-3-319-26532-2

  • eBook Packages: Computer Science, Computer Science (R0)
