Elimination of a Catastrophic Destruction of a Memory in the Hopfield Model
In the standard Hopfield model, catastrophic destruction of memory occurs when the network is overloaded (so-called catastrophic forgetting). We eliminate catastrophic forgetting by assigning different weights to the input patterns. As the weights one can use, for example, the frequencies with which the patterns appear during the learning process. We show that only patterns whose weights exceed a certain critical weight are recognized. The case where the weights form a geometric series is studied in detail. The theoretical results are in good agreement with computer simulations.
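To make the construction concrete, here is a minimal sketch of a quasi-Hebbian connection matrix with geometrically decaying pattern weights, J_ij = Σ_μ q^μ ξ_i^μ ξ_j^μ, together with simple asynchronous recall dynamics. The function names, the decay parameter q, and the toy patterns are illustrative assumptions, not the authors' code:

```python
def quasi_hebbian_matrix(patterns, q):
    """Weighted Hebbian matrix J[i][j] = sum_mu q**mu * xi_i^mu * xi_j^mu.

    Patterns are lists of +1/-1 spins.  The weights q**0, q**1, ...
    form a geometric series (0 < q < 1), so earlier patterns carry
    larger weight.  Parameter name `q` is an illustrative assumption.
    """
    n = len(patterns[0])
    J = [[0.0] * n for _ in range(n)]
    for mu, xi in enumerate(patterns):
        w = q ** mu
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-coupling
                    J[i][j] += w * xi[i] * xi[j]
    return J

def recall(J, state, sweeps=10):
    """Asynchronous dynamics: align each spin with its local field
    h_i = sum_j J[i][j] * s_j until a fixed point is reached."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            h = sum(J[i][j] * s[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:  # fixed point reached
            break
    return s

# Toy example: two 4-spin patterns, the first weighted twice as heavily.
P = [[1, 1, -1, -1], [1, -1, 1, -1]]
J = quasi_hebbian_matrix(P, 0.5)
print(recall(J, [-1, 1, -1, -1]))  # one corrupted spin of P[0]
```

With a heavily weighted pattern, recall converges back to it even from a corrupted initial state; in the paper's setting, patterns whose weight falls below the critical value lose this property.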
Keywords: Hopfield model, catastrophic forgetting, quasi-Hebbian matrix