Abstract
This chapter gives a general overview of learning in Artificial Neural Networks from the perspectives of Statistical Learning Theory and Multi-objective Optimization. Both approaches treat the general learning problem as a trade-off between the empirical risk obtained from the data set and the model complexity. Learning is thus seen as the problem of fitting the model output to the data while matching model complexity to system complexity. Since the latter is not known in advance, only bounds on model complexity can be assumed a priori, so model selection can only be accomplished with ad hoc decision-making strategies, such as those provided by multi-objective learning. The main concepts of multi-objective learning are then presented in the context of ECG problems.
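The trade-off the abstract describes can be made concrete with a minimal sketch. Here, scalarizing the two objectives (empirical risk and a weight-norm complexity proxy) and sweeping the scalarization weight traces an approximation of the Pareto front between them. The linear model, the synthetic data, and the choice of squared weight norm as the complexity measure are illustrative assumptions, not the chapter's own formulation.

```python
import numpy as np

# Synthetic regression problem (illustrative, not from the chapter).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=50)

def fit_scalarized(lam):
    # Minimize ||Xw - y||^2 + lam * ||w||^2: a weighted-sum
    # scalarization of the two objectives (closed-form ridge solution).
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

front = []
for lam in np.logspace(-3, 2, 20):
    w = fit_scalarized(lam)
    risk = float(np.mean((X @ w - y) ** 2))  # empirical risk
    complexity = float(w @ w)                # complexity proxy ||w||^2
    front.append((risk, complexity))

risks, norms = zip(*front)
# As lam grows, empirical risk can only increase while the complexity
# proxy can only decrease: the sweep traces the trade-off curve from
# which a decision-making strategy would select one model.
```

Each point on `front` is one candidate model; multi-objective learning defers the choice among them to an a posteriori decision step, which is the role the abstract assigns to ad hoc decision-making strategies.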
© 2012 Springer-Verlag London Limited
Cite this chapter
Braga, A.P. (2012). Learning in Artificial Neural Networks. In: Gacek, A., Pedrycz, W. (eds) ECG Signal Processing, Classification and Interpretation. Springer, London. https://doi.org/10.1007/978-0-85729-868-3_8
Print ISBN: 978-0-85729-867-6
Online ISBN: 978-0-85729-868-3