A neural model of schemas and memory encoding
The ability to rapidly assimilate new information is essential for survival in a dynamic environment. This requires experiences to be encoded alongside the contextual schemas in which they occur. Tse et al. (Science 316(5821):76–82, 2007) showed that new information matching a preexisting schema is learned rapidly. To better understand the neurobiological mechanisms for creating and maintaining schemas, we constructed a biologically plausible neural network to learn context in a spatial memory task. Our model suggests that this occurs through two processing streams of indexing and representation, in which the medial prefrontal cortex and hippocampus work together to index cortical activity. Additionally, our study shows how neuromodulation contributes to rapid encoding within consistent schemas. The level of abstraction of our model further provides a basis for creating context-dependent memories while preventing catastrophic forgetting in artificial neural networks.
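The claim that context-dependent memories can prevent catastrophic forgetting can be illustrated with a minimal sketch. The code below is not the paper's model; it is a toy demonstration, under assumed parameters, of the general principle that gating learning by context confines updates to context-specific units, so training in a new context does not overwrite what was learned in an old one. The disjoint masks, the delta-rule updates, and the scalar targets are all illustrative choices.

```python
import numpy as np

n_units = 64
n_contexts = 2

# Hypothetical context masks: each context gates on a disjoint half of the units.
masks = np.zeros((n_contexts, n_units))
masks[0, :32] = 1.0
masks[1, 32:] = 1.0

w = np.zeros(n_units)  # shared weight vector across contexts

def train(context, target, lr=0.5, steps=100):
    """Delta-rule updates restricted to the units gated on for this context."""
    global w
    m = masks[context]
    for _ in range(steps):
        y = (w * m).sum()                     # readout uses only gated units
        w += lr * (target - y) * m / m.sum()  # update touches only gated units

def recall(context):
    return (w * masks[context]).sum()

train(0, target=1.0)    # learn "task A" in context 0
train(1, target=-1.0)   # learn "task B" in context 1
# Because the masks are disjoint, learning B leaves A's memory intact:
# recall(0) is still ~1.0 and recall(1) is ~-1.0.
```

In this sketch the contexts never share units, so there is no interference at all; with overlapping masks, interference would grow with the overlap, which is why mechanisms for assigning (indexing) distinct contexts matter.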
Keywords: Memory consolidation · Schemas · Catastrophic forgetting · Spatial navigation
We thank the participants of the 2017 Telluride Neuromorphic Cognition Workshop, especially Xinyun Zou, Brent Komer, Georgios Detorakis, and Scott Koziol, who worked on a preliminary project leading to the creation of this model.