Statistical Learning by Natural Gradient Descent
Based on stochastic perceptron models and statistical inference, we train single-layer and two-layer perceptrons by natural gradient descent. We propose an efficient scheme for representing the Fisher information matrix of a stochastic two-layer perceptron and, based on this scheme, design an algorithm to compute the natural gradient. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is of order O(n). Simulations confirm that the natural gradient descent learning rule is not only efficient but also robust.
Keywords: Weight Vector, Hidden Neuron, Fisher Information Matrix, Conjugate Gradient Algorithm, Input Dimension
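The natural gradient update preconditions the ordinary gradient with the inverse Fisher information matrix, w ← w − η F⁻¹∇L. The following is a minimal sketch of this idea for a single-layer stochastic perceptron, not the paper's O(n) two-layer algorithm; the model y = tanh(w·x) + Gaussian noise, the empirical Fisher estimate, and the damping term are all illustrative assumptions:

```python
import numpy as np

def natural_gradient_step(w, X, y, lr=0.5, damping=1e-3):
    """One natural-gradient update for the model y = tanh(w.x) + noise
    (hypothetical minimal setup; not the paper's two-layer algorithm)."""
    phi = np.tanh(X @ w)
    dphi = 1.0 - phi**2                      # tanh'(w.x)
    # gradient of the squared-error / Gaussian negative log-likelihood
    g = -((y - phi) * dphi) @ X / len(y)
    # empirical Fisher information matrix F = E[ tanh'(w.x)^2 x x^T ]
    F = (X * (dphi**2)[:, None]).T @ X / len(y)
    F += damping * np.eye(len(w))            # regularize for invertibility
    # natural gradient step: w <- w - lr * F^{-1} g
    return w - lr * np.linalg.solve(F, g)

# synthetic data from a known teacher weight vector
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(2000, 3))
y = np.tanh(X @ w_true) + 0.1 * rng.normal(size=2000)

w = np.zeros(3)
for _ in range(50):
    w = natural_gradient_step(w, X, y)
print(np.round(w, 2))
```

Because the Fisher matrix captures the local curvature of the likelihood, the preconditioned step is roughly invariant to reparameterization, which is why natural gradient descent avoids the plateaus that slow ordinary gradient learning in perceptrons.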