Abstract
This chapter trains Multi-Layer Perceptrons (MLPs) using several optimisation algorithms. A set of test problems and real-world case studies is employed to compare the proposed evolutionary trainers.
Part of this chapter has been reprinted from the article by Seyedali Mirjalili, Seyed Mohammad Mirjalili and Andrew Lewis: Let a biogeography-based optimizer train your multi-layer perceptron, Information Sciences, Volume 269, pp. 188–209, 2014.
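As an illustration of the evolutionary-training idea the chapter develops, the sketch below evolves the weights of a tiny MLP with a simple (mu + lambda) evolution strategy on the XOR problem. This is a minimal, assumed example, not the biogeography-based optimizer from the reprinted article; the network shape, population sizes, and mutation schedule are illustrative choices.

```python
# Illustrative sketch (not the chapter's BBO trainer): evolving the
# weights of a small MLP with a (mu + lambda) evolution strategy,
# using mean squared error on XOR as the fitness function.
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# MLP shape: 2 inputs -> 3 hidden -> 1 output, weights flattened
N_HID = 3
DIM = 2 * N_HID + N_HID + N_HID + 1  # W1 (2x3) + b1 (3) + W2 (3) + b2 (1)

def mlp_forward(w, x):
    W1 = w[:6].reshape(2, N_HID)
    b1 = w[6:9]
    W2 = w[9:12]
    b2 = w[12]
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(w):
    return np.mean((mlp_forward(w, X) - y) ** 2)

# (mu + lambda) evolution strategy over the weight vector
mu, lam, sigma = 10, 40, 0.5
pop = rng.normal(0.0, 1.0, size=(mu, DIM))
for gen in range(300):
    parents = pop[rng.integers(0, mu, size=lam)]
    children = parents + sigma * rng.normal(size=(lam, DIM))
    pool = np.vstack([pop, children])
    pool = pool[np.argsort([mse(w) for w in pool])]
    pop = pool[:mu]          # elitist survivor selection
    sigma *= 0.99            # slowly shrink mutation step

best = pop[0]
print(round(mse(best), 4))
```

Because the objective is only queried as a black box, the same loop applies unchanged to any population-based trainer (GA, PSO, BBO): only the way candidate weight vectors are generated differs.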
References
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133.
Fahlman, S. E. (1988). An empirical study of learning speed in back-propagation networks.
Raudys, Š. (1998). Evolution and generalization of a single neurone: I. Single-layer perceptron as seven statistical classifiers. Neural Networks, 11(2), 283–296.
Amendolia, S. R., Cossu, G., Ganadu, M. L., Golosio, B., Masala, G. L., & Mura, G. M. (2003). A comparative study of k-nearest neighbour, support vector machine and multi-layer perceptron for thalassemia screening. Chemometrics and Intelligent Laboratory Systems, 69(1–2), 13–20.
Melin, P., Sánchez, D., & Castillo, O. (2012). Genetic optimization of modular neural networks with fuzzy response integration for human recognition. Information Sciences, 197, 1–19.
Guo, Z. X., Wong, W. K., & Li, M. (2012). Sparsely connected neural network-based time series forecasting. Information Sciences, 193, 54–71.
Gardner, M. W., & Dorling, S. R. (1998). Artificial neural networks (the multilayer perceptron) a review of applications in the atmospheric sciences. Atmospheric Environment, 32(14–15), 2627–2636.
Barakat, M., Lefebvre, D., Khalil, M., Druaux, F., & Mustapha, O. (2013). Parameter selection algorithm with self adaptive growing neural network classifier for diagnosis issues. International Journal of Machine Learning and Cybernetics, 4(3), 217–233.
Csáji, B. C. (2001). Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary, 24, 48.
Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366.
Hush, D. R., & Horne, B. G. (1993). Progress in supervised neural networks. IEEE Signal Processing Magazine, 10(1), 8–39.
Hagan, M. T., & Menhaj, M. B. (1994). Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6), 989–993.
Ng, S. C., Cheung, C. C., Leung, S. H., & Luk, A. (2003). Fast convergence for backpropagation network with magnified gradient function. In 2003 Proceedings of the international joint conference on neural networks (Vol. 3, pp. 1903–1908). IEEE.
Lee, Y., Oh, S. H., & Kim, M. W. (1993). An analysis of premature saturation in back propagation learning. Neural Networks, 6(5), 719–728.
Weir, M. K. (1991). A method for self-determination of adaptive learning rates in back propagation. Neural Networks, 4(3), 371–379.
Yao, X. (1993). Evolutionary artificial neural networks. International Journal of Neural Systems, 4(3), 203–222.
Gudise, V. G., & Venayagamoorthy, G. K. (2003). Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks. In Proceedings of the 2003 swarm intelligence symposium, SIS’03 (pp. 110–117). IEEE.
Yu, J., Wang, S., & Xi, L. (2008). Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing, 71(4–6), 1054–1060.
Leung, F. H. F., Lam, H. K., Ling, S. H., & Tam, P. K. S. (2003). Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Transactions on Neural Networks, 14(1), 79–88.
Mizuta, S., Sato, T., Lao, D., Ikeda, M., & Shimizu, T. (2001). Structure design of neural networks using genetic algorithms. Complex Systems, 13(2), 161–176.
Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(1), 3–18.
Mirjalili, S., & Lewis, A. (2013). S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm and Evolutionary Computation, 9, 1–14.
Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1(6), 80–83.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Let a biogeography-based optimizer train your multi-layer perceptron. Information Sciences, 269, 188–209.
© 2019 Springer International Publishing AG, part of Springer Nature
About this chapter
Cite this chapter
Mirjalili, S. (2019). Evolutionary Multi-layer Perceptron. In: Evolutionary Algorithms and Neural Networks. Studies in Computational Intelligence, vol 780. Springer, Cham. https://doi.org/10.1007/978-3-319-93025-1_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-93024-4
Online ISBN: 978-3-319-93025-1
eBook Packages: Intelligent Technologies and Robotics (R0)