
Linear Least-Squares Based Methods for Neural Networks Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2714))

Abstract

This paper presents two algorithms to aid the supervised learning of feedforward neural networks: an initialization algorithm and a learning algorithm. The proposed methods are based on the independent optimization of a subnetwork using linear least squares. An advantage of these methods is that the dimensionality of the effective search space for the non-linear algorithm is reduced, thereby decreasing the number of training epochs required to find a good solution. The performance of the proposed methods is illustrated by simulated examples.
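The abstract does not give implementation details, but the general idea it refers to, fitting part of the network in closed form by linear least squares so that only the remaining weights are left to the non-linear optimizer, can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a single-hidden-layer network with a linear output layer, where the hidden weights are set randomly (or updated by any gradient-based method) and the output weights are obtained with `numpy.linalg.lstsq`. All names (`fit_output_weights`, the toy data) are hypothetical.

```python
import numpy as np

def fit_output_weights(H, T):
    """Solve the linear least-squares problem  H_aug @ W_out ~= T  for the
    output-layer weights, given hidden-layer activations H and targets T."""
    H_aug = np.hstack([H, np.ones((H.shape[0], 1))])  # append bias column
    W_out, *_ = np.linalg.lstsq(H_aug, T, rcond=None)
    return W_out                                      # shape: (n_hidden + 1, n_outputs)

# Hypothetical usage on toy data: the output layer is obtained in closed form,
# so only the hidden weights remain in the search space of the non-linear optimizer.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                         # inputs
T = np.sin(X @ rng.normal(size=(4, 1)))               # targets
W_h = rng.normal(scale=0.5, size=(4, 10))             # random hidden-layer weights
b_h = np.zeros(10)
H = np.tanh(X @ W_h + b_h)                            # hidden activations
W_out = fit_output_weights(H, T)
pred = np.hstack([H, np.ones((len(H), 1))]) @ W_out   # network output
```

The design point this sketch captures is the one stated in the abstract: by solving one layer exactly, the dimensionality of the problem handed to the iterative, non-linear training procedure shrinks, which tends to reduce the number of training epochs needed.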




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fontenla-Romero, O., Erdogmus, D., Principe, J.C., Alonso-Betanzos, A., Castillo, E. (2003). Linear Least-Squares Based Methods for Neural Networks Learning. In: Kaynak, O., Alpaydin, E., Oja, E., Xu, L. (eds) Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003. Lecture Notes in Computer Science, vol 2714. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44989-2_11

  • DOI: https://doi.org/10.1007/3-540-44989-2_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40408-8

  • Online ISBN: 978-3-540-44989-8

  • eBook Packages: Springer Book Archive
