Combining fMRI Data and Neural Networks to Quantify Contextual Effects in the Brain
Does word meaning change with context? Although this hypothesis has existed for a long time, only recently has it become possible to test it with neuroimaging. Embodiment theories of knowledge representation suggest that word meaning consists of a collection of attributes defined in terms of various neural systems. This approach represents an unlimited number of objects through weighted attributes, and the weights may change with context. This paper aims to quantify such dynamic meanings through computational modeling. A neural network is trained with backpropagation to map attribute-based representations to fMRI images of subjects reading everyday sentences. Backpropagation is then extended to the input features, showing how they change when the same word appears in different sentence contexts. Indeed, statistically significant changes occurred across similar contexts and across different subjects, quantifying for the first time how the attribute weightings of a word are modified by context. Such dynamic representations of meaning could be used in future natural language processing systems, allowing them to mirror human performance more accurately.
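The two-step procedure the abstract describes (train a network to map attribute vectors to fMRI activation patterns, then backpropagate further into the input features) can be sketched as follows. This is a minimal illustration only: the data is synthetic, the network is linear, and the dimensions and learning rates are assumptions, not the paper's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: semantic attributes per word and fMRI voxels.
# All arrays are synthetic stand-ins for the real attribute ratings and
# fMRI activation patterns used in the study.
n_attr, n_vox, n_words = 66, 200, 50

X = rng.random((n_words, n_attr))                # attribute weightings per word
W_true = 0.1 * rng.normal(size=(n_attr, n_vox))  # synthetic attribute-to-voxel map
Y = X @ W_true + 0.01 * rng.normal(size=(n_words, n_vox))  # "fMRI" targets

# Step 1: train a linear map from attribute vectors to voxel activations
# by gradient descent on the mean squared error.
W = np.zeros((n_attr, n_vox))
for _ in range(2000):
    err = X @ W - Y                    # prediction error on all words
    W -= 0.01 * X.T @ err / n_words    # gradient step on the weights

# Step 2: freeze W and backpropagate the error to the *inputs* -- adjust
# the attribute vector of one word so the predicted activation matches the
# pattern observed when that word occurs in a particular sentence context.
x = X[0].copy()                                   # original attribute vector
y_context = Y[0] + 0.1 * rng.normal(size=n_vox)   # context-shifted target
for _ in range(500):
    x -= 0.01 * (x @ W - y_context) @ W.T         # gradient w.r.t. the inputs

delta = x - X[0]  # per-attribute change attributable to the sentence context
```

The resulting `delta` vector quantifies how each attribute's weighting shifts in that context, which is the kind of change the paper tests for statistical significance across contexts and subjects.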
Keywords: Context effect · Concept representations · fMRI data analysis · Neural networks · Embodied cognition
We would like to thank Jeffrey Binder (Medical College of Wisconsin), Rajeev Raizada and Andrew Anderson (University of Rochester), and Mario Aguilar and Patrick Connolly (Teledyne Scientific Company) for providing the data and insight for this research. This work was supported in part by IARPA-FA8650-14-C-7357 and by NIH 1U01DC014922 grants.