Learning in Artificial Neural Networks

  • Antônio Pádua Braga
Chapter

Abstract

This chapter gives a general overview of learning in Artificial Neural Networks from the perspectives of Statistical Learning Theory and Multi-objective Optimization. Both approaches treat the general learning problem as a trade-off between the empirical risk obtained from the data set and the model complexity. Learning is seen as a problem of fitting the model output to the data and of matching model complexity to system complexity. Since the latter is not known in advance, only bounds on model complexity can be assumed, so model selection can only be accomplished with ad hoc decision-making strategies, such as those provided by Multi-objective learning. The main concepts of Multi-objective learning are then presented in the context of ECG problems.
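The trade-off described above can be made concrete with a small sketch (not taken from the chapter): a weighted-sum scalarization of the two learning objectives, empirical risk versus model complexity, applied to one-dimensional ridge regression. The data, the regularization weights `lam`, and the helper names are illustrative assumptions; sweeping `lam` traces out candidate solutions among which a decision maker would then choose.

```python
# Illustrative sketch (assumed example, not the chapter's method): the
# empirical-risk vs. model-complexity trade-off of multi-objective learning,
# shown with closed-form 1-D ridge regression. Each scalarization weight
# `lam` yields one candidate model, i.e. one (risk, complexity) pair.

def ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge solution: w = sum(x*y) / (sum(x*x) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def objectives(xs, ys, w):
    """Return the two objectives: (empirical risk, complexity) = (MSE, |w|)."""
    mse = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return mse, abs(w)

# Toy data, roughly y = 2x with noise (assumed for illustration).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.2, 7.8]

trade_off = []
for lam in [0.0, 1.0, 10.0, 100.0]:
    w = ridge_1d(xs, ys, lam)
    trade_off.append((lam, *objectives(xs, ys, w)))

for lam, risk, norm in trade_off:
    print(f"lam={lam:6.1f}  risk={risk:.4f}  |w|={norm:.4f}")
```

As `lam` grows, the weight magnitude shrinks while the empirical risk rises, tracing the trade-off curve on which multi-objective model selection operates; neither endpoint is best in both objectives at once.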

Keywords

Joint Probability Density Function, Empirical Risk, Statistical Learning Theory, Model Capacity, Right Bundle Branch Block
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  1. Departamento de Engenharia Eletrônica, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil