Statistical Learning by Natural Gradient Descent

  • H. H. Yang
  • S. Amari
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 84)

Abstract

Based on stochastic perceptron models and statistical inference, we train single-layer and two-layer perceptrons by natural gradient descent. We have discovered an efficient scheme to represent the Fisher information matrix of a stochastic two-layer perceptron. Based on this scheme, we have designed an algorithm to compute the natural gradient. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is of order O(n). It is confirmed by simulations that the natural gradient descent learning rule is not only efficient but also robust.
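
For concreteness, the sketch below (Python/NumPy, illustrative only) applies a natural-gradient update to a single-layer stochastic perceptron with a logistic output: it forms the empirical Fisher information matrix explicitly and preconditions the ordinary gradient with its inverse. The function name, damping term, learning rate, and toy data are assumptions made for this example; the chapter's contribution is an efficient representation of the Fisher matrix for two-layer perceptrons that avoids the explicit matrix cost shown here.

    import numpy as np

    rng = np.random.default_rng(0)

    def natural_gradient_step(w, X, y, lr=0.5, damping=1e-3):
        # Illustrative single natural-gradient update for a logistic
        # (stochastic perceptron) model. The Fisher matrix is built
        # explicitly here, which is what the chapter's O(n) scheme avoids
        # when the input dimension greatly exceeds the number of hidden
        # neurons.
        z = X @ w                               # pre-activations
        p = 1.0 / (1.0 + np.exp(-z))            # firing probabilities
        grad = X.T @ (p - y) / len(y)           # ordinary gradient of the negative log-likelihood

        # Empirical Fisher information: average of s s^T with score s = (y - p) x.
        S = X * (p - y)[:, None]
        F = S.T @ S / len(y) + damping * np.eye(len(w))

        return w - lr * np.linalg.solve(F, grad)   # w <- w - lr * F^{-1} grad

    # Toy usage: 2-D inputs, noisy labels from a fixed teacher weight vector.
    X = rng.normal(size=(200, 2))
    w_teacher = np.array([1.5, -0.5])
    y = (X @ w_teacher + rng.normal(size=200) > 0).astype(float)

    w = np.zeros(2)
    for _ in range(50):
        w = natural_gradient_step(w, X, y)
    print("estimated weight vector:", w)

Preconditioning the gradient by the inverse Fisher matrix makes the update invariant to smooth reparameterizations of the weights, which is the property behind the efficiency and robustness reported above.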

Keywords

Weight Vector, Hidden Neuron, Fisher Information Matrix, Conjugate Gradient Algorithm, Input Dimension

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • H. H. Yang (1, 2)
  • S. Amari (1, 2)
  1. Department of Electrical and Computer Engineering, Oregon Graduate Institute of Science and Technology, Beaverton, USA
  2. Laboratory for Information Synthesis, RIKEN Brain Science Institute, Wako-shi, Saitama, Japan
