Projection Learning and Graceful Degradation
In (Weigl and Berthod 1992, 1993b) we presented a paradigm in which neural networks such as multi-layer perceptrons are viewed as bases in a function space: the basis functions are the functions computed by the hidden-layer neurons, and the function approximated by the network is the projection of the target function onto the manifold spanned by these basis functions. We also presented a learning algorithm based on that paradigm, which consists of shifting the manifold spanned by this basis in function space so that its distance to the target function is minimized.
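The projection step can be illustrated in isolation. The following is a minimal sketch, not the authors' algorithm: it treats a fixed set of sigmoidal hidden-unit outputs as (generally non-orthogonal) basis functions and computes the least-squares projection of a sampled target function onto their span; the choice of sigmoids, weights, and target is purely illustrative. The Gram matrix of the basis plays the role of the metric tensor discussed in the referenced papers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)

# Hidden-layer "basis functions": sigmoids with fixed, arbitrary
# weights and biases (illustrative values, not learned ones).
weights = rng.normal(size=5)
biases = rng.normal(size=5)
Phi = 1.0 / (1.0 + np.exp(-(np.outer(x, weights) + biases)))  # shape (200, 5)

# Gram matrix of the basis: the metric tensor for this non-orthogonal basis.
G = Phi.T @ Phi

# Target function sampled on the grid (illustrative choice).
f = np.sin(np.pi * x)

# Projection of f onto span(Phi): least-squares solution of Phi c = f.
coeffs, *_ = np.linalg.lstsq(Phi, f, rcond=None)
f_hat = Phi @ coeffs

# By construction the residual is orthogonal to every basis function.
residual = f - f_hat
```

Shifting the manifold, in this picture, corresponds to changing the hidden-layer weights and biases so that the residual norm shrinks further.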
Keywords: Hidden Layer · Function Space · Projection Operator · Output Neuron · Layer Neuron
- Rumelhart, D.E., McClelland, J.L., et al. (1986) Parallel Distributed Processing, Vol. 1, MIT Press.
- Weigl, K., and Berthod, M. (1992) Metric Tensors and Dynamical Non-Orthogonal Bases: An Application to Function Approximation. Proc. WOPPLOT 1992, Workshop on Parallel Processing: Logic, Organization and Technology, Springer Lecture Notes in Computer Science, to be published.
- Weigl, K., and Berthod, M. (1993a) Non-orthogonal Bases and Metric Tensors: An Application to Artificial Neural Networks. New Trends in Neural Computation, Proc. IWANN'93, International Workshop on Artificial Neural Networks, Springer Lecture Notes in Computer Science, vol. 686, 173–178.
- Weigl, K., and Berthod, M. (1993b) Neural Networks as Dynamical Bases in Function Space, Research Report INRIA no. 2124, 1–40.
- Weigl, K., and Berthod, M. (1993c) Some Remarks about Boundary Creation by Multi-layer Perceptrons, submitted to WCNN '94.