
Learning in Artificial Neural Networks

In: ECG Signal Processing, Classification and Interpretation

Abstract

This chapter gives a general overview of learning in artificial neural networks from the perspectives of Statistical Learning Theory and Multi-objective Optimization. Both approaches treat the general learning problem as a trade-off between the empirical risk obtained from the data set and the model complexity. Learning is seen as a problem of fitting the model output to the data, and the model complexity to the system complexity. Since the latter is not known in advance, only bounds on model complexity can be assumed a priori, so model selection can only be accomplished with ad hoc decision-making strategies, such as those provided by multi-objective learning. The main concepts of multi-objective learning are then presented in the context of ECG problems.
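The trade-off the abstract describes can be sketched numerically: each candidate model yields a pair (empirical risk, complexity), and multi-objective learning selects among the non-dominated (Pareto-optimal) pairs. The sketch below is illustrative only, not the chapter's algorithm: a ridge-regularized linear model stands in for network training, and the weight norm stands in for model complexity.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when minimizing both objectives."""
    idx = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

# Toy regression data (hypothetical example, not from the chapter)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

# Sweep a regularization strength; each value yields one candidate model
# and hence one (empirical risk, complexity) pair.
candidates = []
for lam in [0.0, 0.01, 0.1, 1.0, 10.0, 100.0]:
    w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    risk = np.mean((X @ w - y) ** 2)       # empirical risk on the data set
    complexity = np.linalg.norm(w)         # weight norm as a complexity proxy
    candidates.append((risk, complexity))

front = pareto_front(candidates)
```

Larger regularization shrinks the weights (lower complexity) at the cost of higher empirical risk, so the candidates trace the trade-off curve; the decision-making strategies discussed in the chapter pick a single solution from this Pareto set.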



Author information


Corresponding author

Correspondence to Antônio Pádua Braga .


Copyright information

© 2012 Springer-Verlag London Limited

About this chapter

Cite this chapter

Braga, A.P. (2012). Learning in Artificial Neural Networks. In: Gacek, A., Pedrycz, W. (eds) ECG Signal Processing, Classification and Interpretation. Springer, London. https://doi.org/10.1007/978-0-85729-868-3_8

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-867-6

  • Online ISBN: 978-0-85729-868-3

  • eBook Packages: Engineering (R0)
