
Training Neural Networks Using Non-standard Norms – Preliminary Results

  • Conference paper
MICAI 2000: Advances in Artificial Intelligence (MICAI 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1793)

Included in the following conference series: MICAI – Mexican International Conference on Artificial Intelligence

Abstract

We discuss alternative norms for training Neural Networks (NNs), focusing on the so-called Multilayer Perceptrons (MLPs). To achieve this, we rely on a Genetic Algorithm called the Eclectic GA (EGA). By using the EGA we avoid the drawbacks of the standard training algorithm for this kind of NN: the backpropagation algorithm. We define four measures of distance: a) the mean exponential error (MEE), b) the mean absolute error (MAE), c) the maximum square error (MSE), and d) the maximum (supremum) absolute error (SAE). We analyze the behavior of an MLP NN on two kinds of problems: classification and forecasting. We discuss the results of applying an EGA to train the NNs and show that the alternative norms yield better results than the traditional RMS norm.
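The abstract names the four measures but this preview does not spell out their exact formulas or the Eclectic GA itself. The sketch below, in Python with NumPy, gives one plausible reading of the four norms as error measures over a network's outputs, and shows how any of them could be plugged into a generic GA as the fitness to optimize. The names (mee, mae, mse_max, sae, fitness) and the precise form of the exponential error are illustrative assumptions, not the paper's notation.

```python
import numpy as np

# Illustrative definitions of the four non-standard norms named in the abstract,
# computed over the output errors y - t. These are assumed forms; the preview
# does not give the paper's exact formulas.

def mee(y, t):
    """Mean exponential error: average of exp(|error|) - 1 (assumed form)."""
    return np.mean(np.exp(np.abs(y - t)) - 1.0)

def mae(y, t):
    """Mean absolute error."""
    return np.mean(np.abs(y - t))

def mse_max(y, t):
    """Maximum square error: the largest squared deviation (not the usual mean)."""
    return np.max((y - t) ** 2)

def sae(y, t):
    """Maximum (supremum) absolute error."""
    return np.max(np.abs(y - t))

def rms(y, t):
    """Traditional RMS norm, the baseline the paper compares against."""
    return np.sqrt(np.mean((y - t) ** 2))

# A GA (the paper uses the Eclectic GA; any GA fits the same mold) would encode
# the MLP's weights as a chromosome and minimize one of the norms above, e.g.:
def fitness(weights, forward, inputs, targets, norm=mae):
    """Hypothetical fitness wrapper: smaller norm -> higher fitness."""
    outputs = forward(weights, inputs)  # MLP forward pass parameterized by `weights`
    return -norm(outputs, targets)
```

Under this reading, MEE and MAE aggregate the error over all patterns, while the maximum square error and the supremum absolute error are worst-case criteria; none of them is differentiable in the way backpropagation requires, which is why a GA-based trainer such as the EGA can use them directly.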






Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kuri Morales, A. (2000). Training Neural Networks Using Non-standard Norms – Preliminary Results. In: Cairó, O., Sucar, L.E., Cantu, F.J. (eds) MICAI 2000: Advances in Artificial Intelligence. MICAI 2000. Lecture Notes in Computer Science (LNAI), vol 1793. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10720076_32

  • DOI: https://doi.org/10.1007/10720076_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67354-5

  • Online ISBN: 978-3-540-45562-2

  • eBook Packages: Springer Book Archive
