
Stochastic approximation techniques and circuits and systems associated tools for neural network optimization

  • Plasticity Phenomena (Maturing, Learning and Memory)
  • Conference paper
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)

Abstract

This paper is devoted to the optimization of feedforward and feedback Artificial Neural Networks (ANN) working in supervised learning mode. We describe in a general way how first- and second-order stochastic approximation methods that provide learning capabilities can be derived. We show that certain variables, the sensitivities of the ANN outputs, play a key role in the ANN optimization process. We then describe how some elementary and useful tools from circuit theory can be used to compute these sensitivities at low computational cost. Finally, we show by example how to apply these two complementary sets of tools, i.e. stochastic approximation and sensitivity theory.
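
As a rough illustration of the first-order case, here is a minimal Python sketch, not taken from the paper: a stochastic approximation update driven by the sensitivities of the network output with respect to its parameters. It assumes a small one-hidden-layer tanh network fitting sin(x) from random samples and a Robbins-Monro style decreasing step size; the sensitivities are obtained by direct differentiation rather than by the circuit-theoretic (adjoint-network) tools the paper advocates, and all names and numerical settings are illustrative.

```python
import numpy as np

# Minimal sketch (not the authors' exact formulation): first-order
# stochastic approximation for a tiny 1-input, 3-hidden-unit, 1-output
# tanh network. The "sensitivities" are dy/dtheta for each parameter.

rng = np.random.default_rng(0)

W1 = rng.normal(scale=0.5, size=(3, 1))   # input -> hidden weights
b1 = np.zeros(3)                          # hidden biases
w2 = rng.normal(scale=0.5, size=3)        # hidden -> output weights
b2 = 0.0                                  # output bias

def forward(x):
    """Return the scalar output y and the hidden activations h."""
    h = np.tanh(W1 @ x + b1)
    y = w2 @ h + b2
    return y, h

def sensitivities(x, h):
    """Partial derivatives of the output y w.r.t. each parameter."""
    dh = 1.0 - h**2                  # derivative of tanh
    dW1 = np.outer(w2 * dh, x)       # dy/dW1
    db1 = w2 * dh                    # dy/db1
    dw2 = h                          # dy/dw2
    db2 = 1.0                        # dy/db2
    return dW1, db1, dw2, db2

# Supervised learning of y = sin(x) with a decreasing step size
# mu_k = mu0 / (1 + k/tau) (Robbins-Monro style schedule, illustrative).
mu0, tau = 0.1, 500.0
for k in range(5000):
    x = rng.uniform(-np.pi, np.pi, size=1)
    d = np.sin(x[0])                 # desired output
    y, h = forward(x)
    e = d - y                        # output error
    dW1, db1, dw2, db2 = sensitivities(x, h)
    mu = mu0 / (1.0 + k / tau)
    # First-order stochastic approximation: move each parameter along
    # (error x sensitivity), the instantaneous gradient of e^2 / 2.
    W1 += mu * e * dW1
    b1 += mu * e * db1
    w2 += mu * e * dw2
    b2 += mu * e * db2
```

The update mu * e * (dy/dtheta) is the stochastic-approximation form of gradient descent on the instantaneous squared error; a second-order variant would, roughly speaking, additionally scale the step by an estimate of the inverse Hessian or of a Gauss-Newton matrix built from the same sensitivities.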

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dedieu, H., Flanagan, A., Robert, A. (1997). Stochastic approximation techniques and circuits and systems associated tools for neural network optimization. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032503

  • DOI: https://doi.org/10.1007/BFb0032503

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0

  • eBook Packages: Springer Book Archive
