MIIND: A Population-Level Neural Simulator Incorporating Stochastic Point Neuron Models
MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a neural simulator that allows the creation of large-scale neuronal networks at the population level. Populations of neurons are considered to be homogeneous and composed of point model neurons. MIIND does not simulate individual neurons but considers their distribution over the model neuron’s state space in terms of a density function and models the evolution of this density function in response to input from other neural populations or external input. From the density function, other quantities can be calculated, such as the population’s firing rate. This rate, in turn, can influence other populations. Because populations interact through firing rates rather than individual spikes, the simulation of networks of spiking neurons becomes easier, as no spike events need to be buffered. Using an XML format, it is easy to configure large-scale network simulations. MIIND is implemented as a C++ package but has a Python interface, so that simulations can be driven and results analyzed in Python. It has been used in the simulation of networks of language representation and spinal cord circuits.
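The idea of a population density can be illustrated by Monte Carlo: simulate many identical leaky integrate-and-fire (LIF) neurons receiving Poisson input and histogram their membrane potentials. A PDT simulator such as MIIND evolves this density directly instead of tracking individual neurons. The sketch below is not MIIND code; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative LIF population: N identical neurons, Poisson synaptic input.
N        = 10000    # neurons in the population
tau      = 0.02     # membrane time constant (s)
v_thresh = 1.0      # spike threshold
v_reset  = 0.0      # reset potential
dt       = 1e-4     # time step (s)
rate_in  = 1100.0   # Poisson input rate per neuron (Hz)
h        = 0.05     # synaptic efficacy (jump in v per input spike)

v = np.zeros(N)
spikes = 0
steps = 5000  # 0.5 s of simulated time
for _ in range(steps):
    v -= dt * v / tau                       # leak toward rest
    v += h * rng.poisson(rate_in * dt, N)   # synaptic input jumps
    fired = v >= v_thresh
    spikes += fired.sum()
    v[fired] = v_reset                      # reset after a spike

# The density over state space, and the population firing rate derived from it:
density, edges = np.histogram(v, bins=50, range=(v_reset, v_thresh), density=True)
pop_rate = spikes / (N * steps * dt)        # population firing rate (Hz)
```

The histogram `density` is the Monte Carlo estimate of the density function over the neuron's one-dimensional state space; `pop_rate` is the quantity that, in a network, would drive other populations.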
MIIND (de Kamps et al. 2008) is a simulation package that can be driven from Python through an XML file. It can simulate complex circuits of neural populations. It stands out from other simulation packages in that it (i) models directly at the population level; (ii) uses population density techniques (PDTs) to model the state and firing rate response of the population; and (iii) offers the capability to incorporate existing and novel two-dimensional (2D) point model neurons into its framework.
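As a rough sketch of what an XML-driven network configuration looks like, consider the fragment below. The element and attribute names are illustrative only and do not reproduce the actual MIIND schema; the real file format is documented with the package at miind.sf.net.

```xml
<Simulation>
  <!-- Illustrative only: not the actual MIIND XML schema -->
  <Algorithms>
    <Algorithm type="GridAlgorithm" name="lif_pop" modelfile="lif.model"/>
    <Algorithm type="RateFunctor" name="input" rate="800"/>
  </Algorithms>
  <Nodes>
    <Node algorithm="input" name="EXT" type="EXCITATORY"/>
    <Node algorithm="lif_pop" name="E" type="EXCITATORY"/>
  </Nodes>
  <Connections>
    <Connection In="EXT" Out="E" efficacy="0.03" num_connections="100"/>
  </Connections>
  <SimulationRunParameter t_end="1.0" t_step="1e-3"/>
</Simulation>
```

The pattern is typical of such configurations: populations are declared as nodes, each bound to an algorithm (a PDT or a fixed rate source), and connections specify how one population's firing rate drives another.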
Most population-level simulations described in the literature use rate-based modeling: a population is characterized by a single variable, such as firing rate or average membrane potential. A network simulation based on these methods usually results from the solution of a coupled set of first-order ordinary differential equations. A prominent example of such a system is the Wilson-Cowan equations (Wilson and Cowan 1972). There are efficient solution methods for large coupled networks, and their use is appropriate when considering quantities that are only indirectly related to the neural substrate, for example, in modeling imaging techniques, where they have found widespread application.
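To make the contrast with PDTs concrete, a rate-based model such as Wilson-Cowan reduces each population to a single variable. A minimal Euler-integration sketch of a coupled excitatory-inhibitory pair follows; the coupling weights and input values are illustrative, not taken from any particular study.

```python
import math

def sigmoid(x):
    """Sigmoidal activation function used in the Wilson-Cowan equations."""
    return 1.0 / (1.0 + math.exp(-x))

def wc_step(E, I, dt, tau_E=1.0, tau_I=1.0,
            w_EE=16.0, w_EI=12.0, w_IE=15.0, w_II=3.0,
            P=1.25, Q=0.0):
    """One Euler step of the Wilson-Cowan rate equations.

    dE/dt = (-E + S(w_EE*E - w_EI*I + P)) / tau_E
    dI/dt = (-I + S(w_IE*E - w_II*I + Q)) / tau_I
    """
    dE = (-E + sigmoid(w_EE * E - w_EI * I + P)) / tau_E
    dI = (-I + sigmoid(w_IE * E - w_II * I + Q)) / tau_I
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.1
for _ in range(10000):          # integrate for 100 time constants
    E, I = wc_step(E, I, dt=0.01)
```

Each population is a single number here (an activity in [0, 1]); nothing about the distribution of membrane potentials within the population survives, which is exactly the information a PDT retains.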
A major advantage of PDTs is that they represent the neuronal state of the population faithfully, something that cannot be said of rate-based models. In some cases, this leads to radically different predictions for the same network connectivity. A disadvantage of PDTs is that they are computationally more expensive than rate-based models. They tend to be more efficient than direct simulations, however, both for one- and two-dimensional models. For higher dimensional models, like Hodgkin-Huxley, PDTs are still computationally inefficient, as they suffer from the curse of dimensionality, although a more intelligent representation of state space may make them viable in the future. At the moment, there is a trend in computational neuroscience to reduce complex neuronal dynamics, described by high-dimensional systems, to two-dimensional ones, as these retain the essence of the higher dimensional dynamics but are easier to visualize and analyze. 2D PDTs therefore appear a good compromise between computational efficiency and expressivity of the neural model. Recently, we have made considerable progress in the solution of network equations for 2D PDTs (de Kamps et al. 2019).
MIIND and DIPDE (Allen Institute for Brain Science n.d.) are, to our knowledge, the only simulators based on PDTs. Where DIPDE is restricted to a single one-dimensional model (LIF), MIIND can use 2D point model neurons, as well as 1D models such as leaky integrate-and-fire, quadratic integrate-and-fire, and others, which in the MIIND framework emerge as specializations of 2D models. The introduction of a new model into MIIND is very easy: it requires a representation of the flow field of the model in terms of an ASCII file in a predefined format but does not require programming. As such, it is a great tool for studying noise in 2D dynamical systems, not just neurons.
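The kind of flow-field tabulation this involves can be sketched as follows for the FitzHugh-Nagumo model, a standard 2D reduction of spiking dynamics. The four-column layout shown here is illustrative only; the actual MIIND model-file format is documented with the package.

```python
import numpy as np

def fn_flow(v, w, I=0.5, eps=0.08, a=0.7, b=0.8):
    """FitzHugh-Nagumo flow field:
    dv/dt = v - v^3/3 - w + I
    dw/dt = eps * (v + a - b*w)
    """
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return dv, dw

# Tabulate the flow field on a 50 x 50 grid over state space.
vs = np.linspace(-2.5, 2.5, 50)
ws = np.linspace(-1.0, 2.0, 50)
rows = []
for v in vs:
    for w in ws:
        dv, dw = fn_flow(v, w)
        # One line per grid point: v, w, dv/dt, dw/dt (illustrative layout)
        rows.append(f"{v:.4f} {w:.4f} {dv:.4f} {dw:.4f}")

ascii_table = "\n".join(rows)   # this text would be written to a file
```

No simulator-specific programming is involved: defining a new 2D model amounts to evaluating its flow field on a grid and writing the numbers out.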
We have performed comparison studies and found that large networks of 1D neurons, such as LIF and QIF, can be simulated much faster at the population level than by simulating individual spiking neurons. For 2D models, the run times are similar to NEST, but memory use is at least an order of magnitude lower than when simulating individual neurons. We have provided a CUDA implementation, which means that a network comprising thousands of populations can be run on a single PC equipped with a GPU, rather than on an HPC cluster. MIIND is available at miind.sf.net, where the code and training materials can be found.