Hippocampal formation as unitary coherent particle filter
Keywords: Hidden Markov Model, Dentate Gyrus, Localization Model, Particle Filter, Entorhinal Cortex
A standard approach to localization, recall and prediction from sensors is the Hidden Markov Model (HMM), with location as hidden state and sense data as observations. Assuming that the hippocampus performs a similar task, we present a new top-down mapping of this function onto its anatomy. In localization models of CA3, firing rates of individual place cells encode probabilities over current location, and recurrent connections may represent transition probabilities between locations. In auto-associative models, recurrent connections bring the network into a vector-coded memorized state. The existence of cells encoding rewards and locations of external objects supports this view.
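The HMM view above can be sketched as standard forward filtering: a belief over locations is propagated through the transition model, then reweighted by the observation likelihood. The numbers below are purely illustrative (three locations, two sensor readings), not the paper's model.

```python
import numpy as np

# Illustrative HMM localization: 3 locations as hidden states,
# 2 discrete sensor readings as observations. All numbers are assumptions.
T = np.array([[0.8, 0.1, 0.1],   # transition probabilities between locations
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
E = np.array([[0.9, 0.1],        # emission: P(sense reading | location)
              [0.5, 0.5],
              [0.1, 0.9]])

def forward_step(belief, obs):
    """One step of HMM filtering: predict via transitions, weight by evidence."""
    predicted = T.T @ belief           # prior over the next location
    posterior = predicted * E[:, obs]  # multiply in the observation likelihood
    return posterior / posterior.sum()

belief = np.array([1/3, 1/3, 1/3])    # uniform initial belief over locations
for obs in [0, 0, 1, 1]:              # a short stream of sense data
    belief = forward_step(belief, obs)
```

After two readings of each type, the belief has shifted toward the location whose emissions best match the recent evidence.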
Consider localization and auto-associative models in the light of HMMs. HMMs represent probabilities over the complete set of discrete hidden states, as in localization models. However, inferences about high-level sensory data, including locations of external objects and rewards, require that the structure of these states be complex: for example, a state could include conjunctions of the agent's own location and heading with the configurations of many external objects, rewards, and even the agent's own actions. Such states are best represented by configurations of large numbers of Boolean variables, playing a role similar to auto-associative model nodes.
We present a resolution between localization and auto-association models. HMMs become intractable for complex states, and it is common to approximate them by particle filters (PFs), which approximate posteriors by samples. In particular, a "unitary" particle filter maintains just a single sample. CA3 could unitarily sample posteriors from a complex state space containing Boolean variables coding presence or absence of objects at locations.
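A unitary particle filter over such a Boolean state space might look like the sketch below: a single particle (one configuration of Booleans) is propagated through a noisy transition model, and one successor is resampled in proportion to how well it explains the sense data. The state components, flip noise, and likelihood are illustrative assumptions, not the paper's actual model.

```python
import random

# A state is a tuple of Booleans, e.g. (at_A, at_B, object_at_A, reward_at_B).
# All parameters here are hypothetical.

def transition(state, rng, flip_p=0.1):
    """Sample a successor state: each Boolean flips with small probability."""
    return tuple(not b if rng.random() < flip_p else b for b in state)

def likelihood(state, obs):
    """Fraction of state components that agree with the sense data."""
    return sum(s == o for s, o in zip(state, obs)) / len(state)

def unitary_pf_step(state, obs, rng, n_proposals=8):
    """Propose successors from the transition prior, then resample a single
    particle in proportion to how well each explains the observation."""
    proposals = [transition(state, rng) for _ in range(n_proposals)]
    weights = [likelihood(p, obs) + 1e-9 for p in proposals]
    return rng.choices(proposals, weights=weights, k=1)[0]

rng = random.Random(0)
state = (True, False, True, False)
obs = (True, False, True, True)   # sense data suggesting reward_at_B
for _ in range(5):
    state = unitary_pf_step(state, obs, rng)
```

Because only one sample survives each step, the filter behaves like a stochastic trajectory through the state space rather than a full posterior, which is what makes it a candidate for a vector-coded network state.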
A standard PF problem is getting "lost": when the samples no longer include the true state, it becomes difficult for inference to return there even if strongly suggested by the data. To recover, a useful heuristic is to monitor matches between predictions and input, and to flatten the prior over hidden states during prolonged poor matches. We model the subiculum as performing such monitoring, and the resulting septal ACh as flattening the transition prior in CA3. This contrasts with ACh models that disable CA3 recurrency for learning rather than inference.
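The recovery heuristic can be sketched as a running match monitor plus a prior-flattening step. The decay rate, threshold, and blending factor below are assumed values for illustration; the monitor stands in for the subicular comparison and the flattening for the modeled effect of septal ACh on CA3.

```python
# Sketch of the "lost" recovery heuristic. All thresholds are assumptions.

def flattened(trans_probs, alpha):
    """Blend the learned transition prior toward uniform by factor alpha."""
    n = len(trans_probs)
    return [(1 - alpha) * p + alpha / n for p in trans_probs]

class MatchMonitor:
    """Running average of prediction-observation match quality."""
    def __init__(self, decay=0.9, threshold=0.3):
        self.avg = 1.0
        self.decay = decay
        self.threshold = threshold

    def update(self, match):
        """match in [0, 1]; returns True when the filter seems lost."""
        self.avg = self.decay * self.avg + (1 - self.decay) * match
        return self.avg < self.threshold

monitor = MatchMonitor()
prior = [0.7, 0.2, 0.1]
for match in [0.1] * 20:          # prolonged run of poor matches
    if monitor.update(match):     # subiculum-like monitor signals "lost"
        prior = flattened(prior, alpha=0.5)  # ACh-like flattening of the prior
```

After a sustained run of poor matches the prior is close to uniform, so transitions to distant states become plausible and the filter can re-localize.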
This article is published under license to BioMed Central Ltd.