Abstract
Based on stochastic perceptron models and statistical inference, we train single-layer and two-layer perceptrons by natural gradient descent. We have discovered an efficient scheme for representing the Fisher information matrix of a stochastic two-layer perceptron, and based on this scheme we have designed an algorithm to compute the natural gradient. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is O(n). Simulations confirm that the natural gradient descent learning rule is both efficient and robust.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this chapter
Yang, H.H., Amari, S. (2002). Statistical Learning by Natural Gradient Descent. In: Jain, L.C., Kacprzyk, J. (eds) New Learning Paradigms in Soft Computing. Studies in Fuzziness and Soft Computing, vol 84. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1803-1_1
DOI: https://doi.org/10.1007/978-3-7908-1803-1_1
Publisher Name: Physica, Heidelberg
Print ISBN: 978-3-7908-2499-5
Online ISBN: 978-3-7908-1803-1
eBook Packages: Springer Book Archive