Recruitment vs. Backpropagation Learning: An empirical study on re-learning in connectionist networks
This paper presents a first comparison of two connectionist learning techniques: backpropagation and recruitment learning. The task is to re-learn a conceptual representation, i.e. to significantly change a representation in an additional training period using new data. Backpropagation is a widely known, supervised learning technique which requires the repeated presentation of a set of training instances. Recruitment learning is a technique which converts network units from a pool of free units into units that carry meaningful information, and it can be used for both instruction-based and similarity-based learning. It will be shown that a learning technique which makes use of structured knowledge (i.e. recruitment learning) re-learns and modifies a connectionist representation faster than backpropagation.
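The contrast described above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hypothetical simplification in which recruiting a unit binds it to the currently active input units in a single step, whereas backpropagation would adjust weights gradually over many presentations:

```python
class RecruitmentNetwork:
    """Toy illustration of recruitment learning: units start in a pool of
    'free' units and are committed one at a time (an assumed, simplified
    one-shot scheme, in contrast to backpropagation's iterative updates)."""

    def __init__(self, n_free_units, n_inputs):
        self.free = list(range(n_free_units))  # pool of uncommitted units
        self.committed = {}                    # unit id -> (concept, weights)
        self.n_inputs = n_inputs

    def recruit(self, concept, active_inputs):
        # One-shot learning step: take a free unit and wire it to the
        # inputs that are active while the concept is presented.
        unit = self.free.pop()
        weights = [1.0 if i in active_inputs else 0.0
                   for i in range(self.n_inputs)]
        self.committed[unit] = (concept, weights)
        return unit

    def activation(self, unit, inputs):
        # Linear response of a committed unit to an input pattern.
        _, weights = self.committed[unit]
        return sum(w * x for w, x in zip(weights, inputs))


net = RecruitmentNetwork(n_free_units=10, n_inputs=4)
u = net.recruit("concept-A", active_inputs={0, 2})
print(net.activation(u, [1, 0, 1, 0]))  # strong response to the learned pattern
print(net.activation(u, [0, 1, 0, 1]))  # no response to a disjoint pattern
```

Because the representation is committed in a single step, re-learning amounts to recruiting a new unit (or rewiring an existing one) rather than iterating over the whole training set, which is the intuition behind the speed advantage reported here.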