Encyclopedia of Computational Neuroscience

Living Edition
| Editors: Dieter Jaeger, Ranu Jung

MIIND: A Population-Level Neural Simulator Incorporating Stochastic Point Neuron Models

  • Marc de Kamps
  • Hugh Osborne
  • Lukas Deutz
  • Frank van der Velde
  • Mikkel Lepperød
  • Yi Ming Lai
  • David Sichau
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-7320-6_100680-1


MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a neural simulator that allows the creation of large-scale neuronal networks at the population level. Populations of neurons are assumed to be homogeneous and composed of point model neurons. MIIND does not simulate individual neurons but considers their distribution over the model neuron’s state space in terms of a density function and models the evolution of this density function in response to input from other neural populations or external input. From the density function, other quantities can be calculated, such as the population’s firing rate. This rate, in turn, can influence other populations. Because populations interact through firing rates rather than individual spikes, the simulation of networks of spiking neurons becomes easier, as no events need to be buffered. Using an XML format, it is easy to configure large-scale network simulations. MIIND is implemented as a C++ package but has a Python interface, so that simulations can be driven and results analyzed in Python. It has been used in simulations of networks for language representation and of spinal cord circuits.

Detailed Description

MIIND (de Kamps et al. 2008) is a simulation package that can be driven from Python through an XML file. It can simulate complex circuits of neural populations. It stands out from other simulation packages in that it (i) models directly at the population level; (ii) uses population density techniques (PDTs) to model the state and firing rate response of the population; and (iii) offers the capability to incorporate existing and novel two-dimensional (2D) point model neurons into its framework.

Most population-level simulations described in the literature use rate-based modeling: a population is characterized by a single variable, such as firing rate or average membrane potential. A network simulation based on these methods usually results from the solution of a coupled set of first-order ordinary differential equations. A prominent example of such a system is the set of Wilson-Cowan equations (Wilson and Cowan 1972). There are efficient solution methods for large coupled networks, and their use is appropriate when considering quantities that are only indirectly related to the neural substrate, for example, in modeling imaging techniques, where they have found widespread application.
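As a minimal illustration of such a rate-based system (not part of MIIND itself), the Wilson-Cowan equations for a coupled excitatory/inhibitory pair can be integrated with forward Euler in a few lines of Python. The coupling weights, external inputs, and sigmoid below are illustrative choices, not the values used in the original paper:

```python
import math

def sigmoid(x):
    # Sigmoidal population response function
    return 1.0 / (1.0 + math.exp(-x))

def wc_step(E, I, dt=0.01, tau_E=1.0, tau_I=1.0,
            w_EE=16.0, w_EI=12.0, w_IE=15.0, w_II=3.0,
            P=1.25, Q=0.0):
    """One forward-Euler step of the Wilson-Cowan equations.

    E and I are the activities of the excitatory and inhibitory
    populations; w_XY couples population Y into population X;
    P and Q are external inputs. All values are illustrative.
    """
    dE = (-E + sigmoid(w_EE * E - w_EI * I + P)) / tau_E
    dI = (-I + sigmoid(w_IE * E - w_II * I + Q)) / tau_I
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.05
for _ in range(2000):
    E, I = wc_step(E, I)
```

Each population is reduced to a single activity variable, which is exactly the restriction that population density techniques lift.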

Sometimes reducing the state of a population to a single variable is too restrictive and leads to incorrect simulation results (Montbrió et al. 2015). One option is to perform spike-based simulations, using neural simulators such as NEST or BRIAN (we will not discuss compartmental modeling here). When applied to large-scale networks, these simulations can become unwieldy in terms of the number of parameters required or the resources (CPU, memory) consumed. A good compromise is to use population density techniques (PDTs). PDTs are based on point model neurons, such as the leaky-integrate-and-fire (LIF) neuron, which is one dimensional as the only variable characterizing its state is the membrane potential. Richer models usually have more variables defining the neuronal state: the FitzHugh-Nagumo model has two and is therefore referred to as 2D, as does the adaptive-exponential integrate-and-fire (AdExp) model, while the Hodgkin-Huxley model has four. In a detailed NEST simulation, it would be possible to follow individual neurons as they move through state space, driven by their own dynamics as well as by the effects of incoming spikes from other neurons. A large population can then be imagined as a cloud of points in the state space of the model neuron, provided the population is homogeneous, so that all neurons share the same state space. An example is given in Fig. 1, which tracks the state of individual AdExp neurons over time. Although the individual neurons, represented by green points, are kicked around state space by incoming spikes, and their state is unpredictable because these incoming spikes are modeled as random events, the behavior of the entire cloud of neurons is far more predictable, so much so that one can define a density function that represents the probability of finding a given neuron in state space.
Using statistical techniques, one can model the evolution of this density function directly, without keeping track of individual neurons, based purely on the dynamics of the point model neuron and the statistics of the spike trains arriving at it. A good introduction to PDTs can be found in Omurtag et al. (2000).
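The idea can be sketched in plain NumPy: simulate many LIF neurons driven by Poisson spike trains and estimate the density over membrane potential with a histogram. All parameter values below are illustrative, and this Monte Carlo bookkeeping is exactly what a PDT replaces by evolving the density function directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative LIF population parameters (not MIIND defaults)
N = 10000            # neurons in the (homogeneous) population
tau = 0.02           # membrane time constant (s)
V_th, V_reset = 1.0, 0.0
h = 0.05             # membrane-potential jump per incoming spike
nu_in = 1000.0       # Poisson input rate per neuron (Hz)
dt = 1e-4            # time step (s)
T = 0.2              # simulated time (s)

V = np.zeros(N)
spikes = 0
for _ in range(int(T / dt)):
    V -= dt * V / tau                         # leak towards rest
    V += h * rng.poisson(nu_in * dt, size=N)  # random incoming spikes
    fired = V >= V_th
    spikes += int(fired.sum())
    V[fired] = V_reset                        # reset after a spike

# Histogram over membrane potential: a crude estimate of the
# density function that a PDT evolves directly.
density, edges = np.histogram(V, bins=50, range=(V_reset, V_th),
                              density=True)
pop_rate = spikes / (N * T)  # population firing rate (Hz)
```

The cloud of N sample points is noisy, but the histogram is stable across runs; a PDT computes this density, and the firing rate derived from it, without ever instantiating the N neurons.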
Fig. 1

(a) The state space of AdExp neurons. (b) A detail. (c–e) The evolution of a group of AdExp neurons (green dots), as well as the density function. The density function accurately tracks the individual neurons, which were modeled by a NEST simulation. (Figure adapted from de Kamps et al. 2019)

A major advantage of PDTs is that they represent the neuronal state of the population faithfully, something that cannot be said of rate-based models. In some cases, this leads to radically different predictions for the same network connectivity. A disadvantage of PDTs is that they are computationally more expensive than rate-based models. They tend to be more efficient than direct simulations, however, both for one- and two-dimensional models. For higher-dimensional models, such as Hodgkin-Huxley, PDTs are still computationally inefficient, as they suffer from the curse of dimensionality, although a more intelligent representation of state space may make them viable in the future. At the moment, there is a trend in computational neuroscience to reduce complex neuronal dynamics, described by high-dimensional systems, to two-dimensional ones, as these retain the essence of the higher-dimensional dynamics but are easier to visualize and analyze. 2D PDTs therefore appear to be a good compromise between computational efficiency and expressiveness of the neural model. Recently, we have made considerable progress in the solution of network equations for 2D PDTs (de Kamps et al. 2019).

MIIND and DIPDE (Allen Institute for Brain Science n.d.) are, to our knowledge, the only simulators based on PDTs. Whereas DIPDE is restricted to a one-dimensional model (LIF), MIIND can use 2D point model neurons as well as 1D models, such as leaky integrate-and-fire and quadratic integrate-and-fire, which in the MIIND framework emerge as specializations of 2D models. The introduction of a new model into MIIND is straightforward: it requires a representation of the model's flow field as an ASCII file in a predefined format but does not require programming. As such, it is a useful tool for studying noise in 2D dynamical systems in general, not just neurons.

We have performed comparison studies and found that large networks of 1D neurons, such as LIF and QIF, can be simulated much faster at the population level than by simulating spikes individually. For 2D models, the run times are similar to NEST, but memory use is at least an order of magnitude lower than when simulating individual neurons. We have provided a CUDA implementation, which means that a network comprising thousands of populations can be run on a single PC equipped with a GPU, rather than on an HPC cluster. MIIND is available at miind.sf.net, where the code and training materials can be found.



References

  1. Allen Institute for Brain Science (n.d.) DiPDE simulator [Internet]. Available from: https://github.com/AllenInstitute/dipd
  2. de Kamps M, Baier V, Drever J, Dietz M, Mösenlechner L, van der Velde F (2008) The state of MIIND. Neural Netw 21(8):1164–1181
  3. de Kamps M, Lepperød M, Lai YM (2019) Computational geometry for modeling neural populations: from visualization to simulation. PLoS Comput Biol 15(3):e1006729
  4. Montbrió E, Pazó D, Roxin A (2015) Macroscopic description for networks of spiking neurons. Phys Rev X 5(2):021028
  5. Omurtag A, Knight BW, Sirovich L (2000) On the simulation of large populations of neurons. J Comput Neurosci 8(1):51–63
  6. Wilson HR, Cowan JD (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 12(1):1–24

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Marc de Kamps (1)
  • Hugh Osborne (1)
  • Lukas Deutz (1)
  • Frank van der Velde (2)
  • Mikkel Lepperød (3)
  • Yi Ming Lai (4)
  • David Sichau (5)
  1. School of Computing, University of Leeds, Leeds, UK
  2. Technical University Twente, Enschede, The Netherlands
  3. Institute of Basic Medical Sciences, and Center for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
  4. School of Mathematics, University of Nottingham, Nottingham, UK
  5. Department of Computer Science, ETH Zürich, Zürich, Switzerland

Section editors and affiliations

  • Padraig Gleeson (1)
  1. Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK