Statistical Learning by Natural Gradient Descent

Chapter in: New Learning Paradigms in Soft Computing

Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 84)


Abstract

Based on stochastic perceptron models and statistical inference, we train single-layer and two-layer perceptrons by natural gradient descent. We have discovered an efficient scheme for representing the Fisher information matrix of a stochastic two-layer perceptron, and based on this scheme we have designed an algorithm to compute the natural gradient. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is O(n). Simulations confirm that the natural gradient descent learning rule is both efficient and robust.
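The natural-gradient update replaces the ordinary gradient of the loss with its preconditioned form F⁻¹∇L, where F is the Fisher information matrix of the stochastic model. As an illustrative sketch only (not the chapter's O(n) scheme for two-layer perceptrons), the following applies natural gradient descent to a single-layer stochastic perceptron with a logistic output, where the Fisher matrix has the closed form F = E[p(1−p) x xᵀ]; for this model the step coincides with Fisher scoring. The toy data, dimensions, and step size here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy single-layer stochastic perceptron: P(y=1 | x) = sigmoid(w . x).
# All sizes and parameters below are illustrative assumptions.
n = 5                                   # input dimension
w_true = rng.normal(size=n)             # ground-truth weights
X = rng.normal(size=(200, n))
y = (sigmoid(X @ w_true) > rng.uniform(size=200)).astype(float)

w = np.zeros(n)
eta = 0.5
for _ in range(100):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)       # ordinary gradient of the negative log-likelihood
    # Fisher information of the logistic model, estimated on the sample:
    # F = E[p(1-p) x x^T]; small damping keeps the solve well conditioned.
    F = (X * (p * (1 - p))[:, None]).T @ X / len(y) + 1e-6 * np.eye(n)
    w -= eta * np.linalg.solve(F, grad)  # natural-gradient step: w <- w - eta * F^{-1} grad
```

Because F rescales the parameter space by the model's own information geometry, the update is invariant to smooth reparameterizations, which is the source of the efficiency and robustness reported in the abstract; the chapter's contribution is making the F⁻¹∇L computation cheap for two-layer perceptrons.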





Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Yang, H.H., Amari, S. (2002). Statistical Learning by Natural Gradient Descent. In: Jain, L.C., Kacprzyk, J. (eds) New Learning Paradigms in Soft Computing. Studies in Fuzziness and Soft Computing, vol 84. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1803-1_1

  • DOI: https://doi.org/10.1007/978-3-7908-1803-1_1

  • Publisher Name: Physica, Heidelberg

  • Print ISBN: 978-3-7908-2499-5

  • Online ISBN: 978-3-7908-1803-1
