Information theory has been extensively applied to problems in neuroscience. The mutual information between input and output has been postulated as an objective that neuronal systems may optimize. However, only recently has energy efficiency been addressed within an information-theoretic framework [1]. Here, the key idea is to take capacity per unit cost, measured in bits per joule (bpj), as the objective. We are interested in how biologically plausible constraints affect the predictions this new theory makes for bpj-maximizing model neurons.
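The bits-per-joule objective can be illustrated with a toy discrete channel: instead of maximizing mutual information alone, one maximizes the ratio of mutual information to expected energy cost over input distributions. The channel matrix and costs below are purely illustrative and are not the neuron model of the abstract.

```python
import numpy as np

def mutual_information(p_x, channel):
    """I(X;Y) in bits for input PMF p_x and row-stochastic channel matrix."""
    p_xy = p_x[:, None] * channel           # joint PMF P(x, y)
    p_y = p_xy.sum(axis=0)                  # output marginal P(y)
    mask = p_xy > 0                         # skip zero-probability terms
    return float((p_xy[mask] *
                  np.log2(p_xy[mask] / (p_x[:, None] * p_y)[mask])).sum())

# Toy binary channel: input 0 is "cheap but quiet", input 1 "costly but informative".
channel = np.array([[0.95, 0.05],
                    [0.20, 0.80]])
cost = np.array([0.1, 1.0])                 # energy per channel use (illustrative)

# Grid search over P(X=1) for the bits-per-joule optimum.
ps = np.linspace(1e-3, 1 - 1e-3, 999)
bpj = [mutual_information(np.array([1 - p, p]), channel) /
       float(np.array([1 - p, p]) @ cost) for p in ps]
best = ps[int(np.argmax(bpj))]
print(f"bpj-optimal P(X=1) = {best:.3f}, bpj = {max(bpj):.3f}")
```

In general the bpj-optimal input distribution leans more on cheap inputs than the capacity-achieving one does, which is the intuition behind taking capacity per unit cost as the neuronal objective.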
More specifically, in our contribution, in line with [1] and [2], a neuron is modeled as a memoryless constant communication channel with a Gamma conditional probability distribution function (PDF) f_{T|Λ}(t|λ) [1]. In this setting, the channel input and output are the excitatory postsynaptic potential intensity λ and the interspike interval (ISI) T, with PDFs f_Λ(λ) and f_T(t), respectively. We then formulate two new constraints. First, we impose a lower bound on the duration of ISIs; the rationale for this is to account for a maximal firing rate. Second, we consider a peak energy expenditure constraint per ISI, as opposed to only bounding the expected energy expenditure; this translates into an upper bound on the ISI duration. We then derive the f_T(t) (corresponding to a valid f_Λ(λ)) of a bpj-maximizing neuron for the original unconstrained setting from [1] and in the presence of the above two constraints, for different expected ISIs. (Details omitted here for brevity.)

Figure 1 shows three f_T(t)s obtained in the unconstrained (dashed curves) and constrained settings (solid curves). While the constrained and unconstrained solutions have the same mean, the shapes of their f_T(t)s differ. For comparison with experimental data, we computed the coefficient of variation (CV) as a function of the mean ISI as an "observable" (Figure 2), which is easier to measure experimentally than the full distribution f_T(t). Interestingly, the CV is predicted i) to be lower in the constrained setting, and ii) to first increase and then decrease with the mean ISI, whereas it only decreases in the unconstrained setting. Thus, we have demonstrated that constraints can affect predictions based on bpj-maximization and should be taken into account explicitly. Ongoing work makes these predictions more quantitative by simulating biophysically realistic model neurons.
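The qualitative effect of the two ISI bounds on the CV can be sketched numerically. The snippet below compares the CV of an unconstrained Gamma ISI distribution with that of the same distribution truncated to [t_min, t_max] (a crude stand-in for the constrained optimal PDFs derived in the abstract; the shape, scale, and bound values are illustrative assumptions, not the derived solutions).

```python
import numpy as np

def truncated_gamma_cv(shape, scale, t_min, t_max, n=200_000, seed=0):
    """CV of a Gamma(shape, scale) ISI distribution truncated to [t_min, t_max],
    estimated by rejection sampling."""
    rng = np.random.default_rng(seed)
    samples = rng.gamma(shape, scale, size=n)
    kept = samples[(samples >= t_min) & (samples <= t_max)]
    return kept.std() / kept.mean()

shape, scale = 2.0, 10.0                     # illustrative Gamma ISI parameters (ms)
# For an untruncated Gamma, CV = sd/mean = 1/sqrt(shape).
cv_unconstrained = np.sqrt(shape) * scale / (shape * scale)
# Lower bound t_min ~ maximal firing rate; upper bound t_max ~ peak energy per ISI.
cv_constrained = truncated_gamma_cv(shape, scale, t_min=5.0, t_max=40.0)
print(f"CV unconstrained = {cv_unconstrained:.3f}, constrained = {cv_constrained:.3f}")
```

Cutting off both tails shrinks the spread relative to the mean, which is consistent with the abstract's prediction i) that the CV is lower in the constrained setting.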
This research has been supported in part by the DAAD (German-Arabic/Iranian Higher Education Dialogue).
1. Berger T, Levy WB: A Mathematical Theory of Energy Efficient Neural Computation and Communication. IEEE Trans on Information Theory. 2010, 56(2): 852-874.
2. Xing J, Berger T, Sejnowski TJ: A Berger-Levy energy efficient neuron model with unequal synaptic weights. Proc of IEEE Int Symp on Information Theory. 2012, 2964-2968.