Abstract
We have undertaken the study of methods for avoiding catastrophic forgetting in feedforward neural networks, without sacrificing the benefits of distributed representations. We formalize the problem as the minimization of the error over the previously learned input-output (i–o) patterns, subject to the constraint of perfect encoding of the new pattern. We then transform this constrained optimization problem into an unconstrained one. This new formulation naturally leads to an algorithm for solving the problem, which we call Minimally Disturbing Learning (MDL). Some experimental comparisons of the performance of MDL with back-propagation are provided which, besides showing the advantages of using MDL, reveal the dependence of forgetting on the learning rate in back-propagation.
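The formulation described in the abstract can be sketched as follows; the notation (w, E_old, f, x_new, y_new, and the penalty weight λ) is illustrative and not taken from the paper itself. Let w denote the network weights, f(x; w) the network output, E_old(w) the error over the previously learned i–o patterns, and (x_new, y_new) the new pattern to be encoded. The constrained problem then reads

\min_{w} \; E_{\mathrm{old}}(w) \quad \text{subject to} \quad f(x_{\mathrm{new}}; w) = y_{\mathrm{new}}.

One standard way to obtain an unconstrained problem of the kind the abstract mentions, not necessarily the transformation used in the paper, is a quadratic-penalty (or Lagrangian) reformulation:

\min_{w} \; E_{\mathrm{old}}(w) + \lambda \, \bigl\| f(x_{\mathrm{new}}; w) - y_{\mathrm{new}} \bigr\|^{2}, \qquad \lambda > 0.

Minimizing such an objective trades off disturbance of the previously learned patterns against exact encoding of the new one, which is the trade-off MDL is designed to resolve.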
Copyright information
© 1991 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ruiz de Angulo, V., Torras, C. (1991). Minimally disturbing learning. In: Prieto, A. (eds) Artificial Neural Networks. IWANN 1991. Lecture Notes in Computer Science, vol 540. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0035891
DOI: https://doi.org/10.1007/BFb0035891
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-54537-8
Online ISBN: 978-3-540-38460-1
eBook Packages: Springer Book Archive