Evolutionary Multi-layer Perceptron

Part of the book series: Studies in Computational Intelligence (SCI, volume 780)

Abstract

This chapter trains Multi-Layer Perceptrons (MLPs) using several optimisation algorithms. A set of test and real-world case studies is employed to compare the proposed evolutionary trainers.
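
At a high level, evolutionary MLP training encodes every connection weight and bias of the network as a single real-valued vector, uses the training error (typically the mean squared error) as the fitness function, and lets a population-based optimiser search for a low-error vector. The sketch below is only a minimal illustration of that encoding under stated assumptions, not the chapter's implementation: it assumes a toy XOR data set, a 2-3-1 network, and a simple elitist Gaussian-mutation loop; the names mlp_forward and fitness are ours.

import numpy as np

# Toy problem: train a 2-3-1 MLP on XOR by evolving its flat weight vector.
# Illustrative sketch only; the chapter compares dedicated evolutionary trainers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID, N_OUT = 2, 3, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # all weights and biases

def mlp_forward(w, x):
    # Decode the flat vector w into layer matrices and run a forward pass.
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))    # sigmoid hidden layer
    o = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output layer
    return o.ravel()

def fitness(w):
    # Mean squared error over the training set (lower is better).
    return np.mean((mlp_forward(w, X) - y) ** 2)

rng = np.random.default_rng(0)
POP, GENS, SIGMA = 30, 300, 0.3
pop = rng.normal(0.0, 1.0, size=(POP, DIM))     # random initial population

for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[:POP // 2]]                    # keep the better half
    children = elite + rng.normal(0.0, SIGMA, size=elite.shape)   # Gaussian mutation
    pop = np.vstack([elite, children])

best = min(pop, key=fitness)
print("MSE:", fitness(best), "outputs:", np.round(mlp_forward(best, X), 2))

In the chapter itself the search step is performed by the evolutionary trainers under comparison rather than by this naive mutation loop, but the weight-vector encoding and the error-based fitness follow the same idea.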

Part of this chapter has been reprinted from the article by Seyedali Mirjalili, Seyed Mohammad Mirjalili and Andrew Lewis: "Let a biogeography-based optimizer train your multi-layer perceptron", Information Sciences, Volume 269, pp. 188–209, 2014.


References

  1. McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133.


  2. Fahlman, S. E. (1988). An empirical study of learning speed in back-propagation networks.


  3. Raudys, Š. (1998). Evolution and generalization of a single neurone: I. Single-layer perceptron as seven statistical classifiers. Neural Networks, 11(2), 283–296.


  4. Amendolia, S. R., Cossu, G., Ganadu, M. L., Golosio, B., Masala, G. L., & Mura, G. M. (2003). A comparative study of k-nearest neighbour, support vector machine and multi-layer perceptron for thalassemia screening. Chemometrics and Intelligent Laboratory Systems, 69(1–2), 13–20.


  5. Melin, P., Sánchez, D., & Castillo, O. (2012). Genetic optimization of modular neural networks with fuzzy response integration for human recognition. Information Sciences, 197, 1–19.


  6. Guo, Z. X., Wong, W. K., & Li, M. (2012). Sparsely connected neural network-based time series forecasting. Information Sciences, 193, 54–71.


  7. Gardner, M. W., & Dorling, S. R. (1998). Artificial neural networks (the multilayer perceptron) a review of applications in the atmospheric sciences. Atmospheric Environment, 32(14–15), 2627–2636.


  8. Barakat, M., Lefebvre, D., Khalil, M., Druaux, F., & Mustapha, O. (2013). Parameter selection algorithm with self adaptive growing neural network classifier for diagnosis issues. International Journal of Machine Learning and Cybernetics, 4(3), 217–233.


  9. Csáji, B. C. (2001). Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary, 24, 48.


  10. Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366.


  11. Hush, D. R., & Horne, B. G. (1993). Progress in supervised neural networks. IEEE Signal Processing Magazine, 10(1), 8–39.


  12. Hagan, M. T., & Menhaj, M. B. (1994). Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6), 989–993.


  13. Ng, S. C., Cheung, C. C., Leung, S. H., & Luk, A. (2003). Fast convergence for backpropagation network with magnified gradient function. In Proceedings of the 2003 international joint conference on neural networks (Vol. 3, pp. 1903–1908). IEEE.


  14. Lee, Y., Oh, S. H., & Kim, M. W. (1993). An analysis of premature saturation in back propagation learning. Neural Networks, 6(5), 719–728.


  15. Weir, M. K. (1991). A method for self-determination of adaptive learning rates in back propagation. Neural Networks, 4(3), 371–379.


  16. Yao, X. (1993). Evolutionary artificial neural networks. International Journal of Neural Systems, 4(03), 203–222.


  17. Gudise, V. G., & Venayagamoorthy, G. K. (2003). Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks. In Proceedings of the 2003 swarm intelligence symposium, SIS’03 (pp. 110–117). IEEE.


  18. Yu, J., Wang, S., & Xi, L. (2008). Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing, 71(4–6), 1054–1060.


  19. Leung, F. H. F., Lam, H. K., Ling, S. H., & Tam, P. K. S. (2003). Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Transactions on Neural Networks, 14(1), 79–88.


  20. Mizuta, S., Sato, T., Lao, D., Ikeda, M., & Shimizu, T. (2001). Structure design of neural networks using genetic algorithms. Complex Systems, 13(2), 161–176.


  21. Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(1), 3–18.


  22. Mirjalili, S., & Lewis, A. (2013). S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm and Evolutionary Computation, 9, 1–14.


  23. Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1(6), 80–83.


  24. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Let a biogeography-based optimizer train your multi-layer perceptron. Information Sciences, 269, 188–209.



Author information

Correspondence to Seyedali Mirjalili.

Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Mirjalili, S. (2019). Evolutionary Multi-layer Perceptron. In: Evolutionary Algorithms and Neural Networks. Studies in Computational Intelligence, vol 780. Springer, Cham. https://doi.org/10.1007/978-3-319-93025-1_7
