Encyclopedia of Computational Neuroscience

Living Edition
| Editors: Dieter Jaeger, Ranu Jung

Embodied Cognition, Dynamic Field Theory of

  • Gregor Schöner
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-7320-6_55-1





The insight that cognition is grounded in sensorimotor processing and shares many properties with motor control, captured by the notion of “embodied cognition,” has been a starting point for neural process models of cognition. Neural Field models represent spaces relevant to cognition, including physical space, perceptual feature spaces, or movement parameters, in activation fields that may receive input from the sensory surfaces and may project onto motor systems. Peaks of activation are units of representation. Their positive levels of activation indicate the instantiation of a representation, while their location specifies metric values along the feature dimensions. By ensuring that peaks are stable states (attractors) of a neural activation dynamics, cognitive processes are endowed with the stability properties required when cognition is linked to sensory and motor processes. Instantiations of cognitive processes arise from instabilities that may induce and suppress peaks. Such events may represent detection, selection, or classification decisions. Neural Field models account for classical behavioral signatures of cognition, including response times, error rates, and metric estimation biases, but also link to neurophysiological correlates of behavior such as patterns of population activation and their temporal evolution. Robotic demonstrations of Neural Field models establish the capacity of these models to provide process accounts of cognition that may link to real sensory information and generate real movement in physical environments.

Detailed Description

Elementary forms of cognition are the detection and selection decisions that control attention and eye movements but are also the basis for object perception. Committing detected perceptual states into working memory and then long-term memory is a key element of cognition. Serially organized sequences of cognitive states are the basis for cognitive processes. Motor actions require that the initiation, termination, and potentially the online update of planned movements be autonomously generated. Neural Field models of cognition and control provide a neural account for such elementary cognition based on four elements: the spaces that such cognitive processes are about, the activation fields defined over these spaces within which neural representations can be created, the neural activation dynamics that drive neural representations forward in time, and the instabilities that give rise to the elementary forms of cognition. This use of Neural Field models to account for cognition and its sensorimotor grounding has been called Dynamic Field Theory (DFT). DFT is a mathematically formalized conceptual framework for understanding embodied cognition that is linked to neural process modeling but abstracts from some of the specific anatomical and biophysical details of neurophysiology to enable a close link to behavior (Schneegans and Schöner 2008).


That cognition is grounded in sensorimotor processes is a central insight that the embodiment perspective on cognition emphasizes (Riegler 2002). In this view, cognition is about states of the world that may become linked to cognition through perception or action, but are not dictated by perception and action alone. Even mental imagery, for instance, shares the sensorimotor grounding, though it is decoupled in the moment from actual sensorimotor processes. Neural accounts for cognition must, therefore, take into account how sensory information may potentially drive cognition, primarily through the sensory cortical and subcortical neural structures. Such accounts must also take into account how motor states are driven from cortical and subcortical structures. Common to both domains of sensation and movement is the observation that they are characterized by continuous dimensions: there are continua of possible percepts and continua of possible motor actions. For instance, the possible percepts of a single moving object form a continuum that may be spanned by the retinal location of the object, the direction of motion, perhaps its speed, or motion in depth. The object may have a color that may vary continuously in hue space, have characteristic texture that may vary along a spatial frequency dimension, and may have a surface curvature that varies continuously. Figure 1 illustrates three of these dimensions. Movements similarly form continua, spanned by movement parameters such as the movement direction, extent, and peak velocity of the end effector in a body-centered reference frame. Much of cognition is embedded in the physical space that surrounds our body, even when it takes such flexible forms as positioning a thought at a location in space through a hand gesture. Categories are embedded in feature spaces. While a dog is categorically different from a car, both may be morphed into their many different instantiations.
Even superordinate categories may be embedded in feature spaces by combining the feature dimensions of hierarchically organized attributes (McClelland and Rogers 2003).
Fig. 1

Left: The possible percepts of a single moving object (filled circle moving as indicated by the arrow) may be spanned by continuous dimensions such as the location of the moving object in the visual array and the direction of motion. Right: A neural activation field defined over these dimensions (only two shown here) represents a motion percept as a peak of activation positioned over the location that specifies the seen motion

Neural Fields

Neural Field models of embodied cognition represent the state outside the nervous system by neural activation patterns that are inside the nervous system. Neural activation, as used in Dynamic Field Theory, is a real number, u, that may both be positive or negative. The link to biophysically detailed accounts of neural activity can be established in multiple different ways (see entries “Neural Field Model, Continuum”; “Neural Population Models and Cortical Field Theory: Overview”). Critical for the link to behavior is the assumption that only sufficiently positive levels of activation impact on downstream structures and ultimately on motor systems. This is expressed mathematically through a sigmoidal function, g(u), often chosen as 1/(1 + exp(−βu)), where β is the steepness of this nonlinearity.
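For illustration, this output nonlinearity is easy to state in code. A minimal sketch (the steepness value β = 4 is an arbitrary illustrative choice):

```python
import math

def g(u, beta=4.0):
    """Sigmoidal output function: near 0 for negative activation,
    near 1 for sufficiently positive activation."""
    return 1.0 / (1.0 + math.exp(-beta * u))

# Only clearly positive activation produces appreciable output:
print(g(-2.0) < 0.01, g(0.0) == 0.5, g(2.0) > 0.99)
```

Activation well below zero therefore has essentially no effect on downstream structures, while activation well above zero drives them at a saturated level.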

A Neural Field is a continuum of such activation variables, one for each location of the represented space. For instance, in Fig. 1, a level of activation represents each possible horizontal position and motion direction of a moving object. The field notion is thus analogous to how fields are used in physics. Peaks of activation are units of representation. Their positive levels of activation imply that they impact on downstream processes. Their location specifies the represented state.

How does a location in a Neural Field acquire the meaning ascribed to it in this interpretation? It is ultimately the connectivity to the sensory or the motor surfaces that determines what a field location “stands for.” In perceptual representations such as the one illustrated in Fig. 1, this would be, for instance, the connectivity from the retina, through simple and complex cells, to motion detectors, anatomically probably located in area MT of the cortex. Field locations thus have a “tuning curve.” Similarly, the forward projection onto the motor system implies a specificity that could be interpreted as a tuning curve for movement parameters. We know that in the cortex as well as in such subcortical structures as the colliculus and thalamus, tuning curves tend to be broad and overlapping, an indication that broad populations of neurons are activated for any individual perceptual or motor state represented. This fact, together with detailed analyses of how strongly all activated neurons contribute to a percept or motor action, has given rise to the hypothesis that the activity of small populations of neurons is the best neural correlate of behavior (Cohen and Newsome 2009). DFT is based on the further hypothesis that Neural Fields represent the activity of such small populations of neurons in the higher nervous system that are tuned to particular sensory or motor states. In fact, it is possible to estimate Neural Fields from recorded population activity (Erlhagen et al. 1999). This link to population activity frees Neural Fields from the more literal interpretation prevalent in some neural modeling, in which the activation fields are directly defined over the cortical surface. Instead, the Neural Fields on which DFT is based are organized in terms of the topology of the outer space that is being represented. The two interpretations are aligned where cortical maps are topographic.
In other cases, however, topography is violated, such as for the tuning of neurons in motor cortex to the direction of a planned hand movement. The Neural Fields of DFT effectively rearrange neurons so that neighboring sites always represent neighboring states. In fact, strictly speaking, neurons are smeared out across the field dimensions by contributing their entire tuning curve to the representation (Erlhagen et al. 1999).

Neural Dynamics

Neural Field models construe time as continuous to approximate at the population level the discrete but asynchronous spiking events that individual neurons contribute. Mathematically, the evolution in time of the activation state of a Neural Field is described, therefore, by a differential equation (an integrodifferential equation in the specific formulation reviewed here). DFT postulates that peaks of activation, the meaningful macro-states of Neural Fields, are stable states and fixed-point attractors of the neural dynamics. This constrains the class of admissible dynamical models. Qualitatively, the neural interactions illustrated in Fig. 2 express these constraints. Excitatory neural coupling among neighboring field sites stabilizes peaks against decay; inhibitory coupling among field sites at longer ranges stabilizes peaks against unlimited growth. In the cortex, similarly tuned neurons are typically excitatorily coupled. Inhibitory coupling, mediated by interneurons, is also prevalent (Jancke et al. 1999).
Fig. 2

Left: A sigmoidal function, g(u), approaches zero for sufficiently negative values, and a positive constant for sufficiently positive values of activation, u. Right: Sufficiently activated sites interact excitatorily with nearby locations (green arrow), stabilizing peaks against decay, and inhibitorily with locations that are further removed (red arrow), stabilizing peaks against unlimited growth

Amari (1977) analyzed a generic Neural Field model in the limit case in which these neural interactions are dominant and showed that peaks of activation may be attractor solutions under appropriate conditions. This is why that particular model has been used in many Neural Field models of embodied cognition, becoming the workhorse of DFT. For a one-dimensional field, u(x,t), defined over a space, x, the dynamics reads
$$ \tau \dot{u}\left(x,t\right)=-u\left(x,t\right)+h+s\left(x,t\right)+\int c\left(x-{x}^{\prime}\right)\,g\left(u\left({x}^{\prime},t\right)\right)\,d{x}^{\prime }. $$

The parameter, τ, determines the overall timescale of the evolution of u(x,t). The “−u” term provides stability to the dynamics and reflects the intrinsic dynamics of neural populations. The parameter h < 0 is the resting level of the field, stable in the absence of input, s(x,t). The interaction integrates over all field sites, x′. Each site contributes to the extent that its activation exceeds the threshold of the sigmoidal function, g(u(x′,t)), with a coupling strength, c(x − x′), that is a function of the distance between interacting field sites. At short distances, coupling is excitatory (c > 0); at longer distances, it is inhibitory (c < 0). The Amari model is a simplification over biophysically more detailed models that, among other approximations, neglects the time delays involved in synaptic transmission and lumps together excitatory and inhibitory neural populations (see entry “Neural Field Model, Continuum”).
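A minimal numerical sketch of this dynamics can be obtained by discretizing the field and integrating with the Euler method. All parameter values below (kernel amplitudes and widths, resting level, timescale) are illustrative choices, not values taken from the literature:

```python
import numpy as np

def kernel(d):
    # c(x - x'): local excitation minus broader inhibition ("Mexican hat")
    return 2.0 * np.exp(-d**2 / 8.0) - 1.0 * np.exp(-d**2 / 72.0)

def simulate(s, h=-3.0, tau=10.0, dt=1.0, steps=500, beta=4.0):
    """Euler-integrate tau*du/dt = -u + h + s + conv(c, g(u)) for constant input s."""
    n = len(s)
    c = kernel(np.arange(n, dtype=float) - n // 2)  # kernel sampled on the grid
    u = np.full(n, float(h))                        # field starts at resting level
    for _ in range(steps):
        g = 1.0 / (1.0 + np.exp(-beta * u))         # sigmoided field output
        u += (dt / tau) * (-u + h + s + np.convolve(g, c, mode="same"))
    return u

x = np.arange(101, dtype=float)
bump = np.exp(-(x - 50.0)**2 / 50.0)
u_weak = simulate(1.0 * bump)    # weak localized input
u_strong = simulate(4.0 * bump)  # strong localized input
print(u_weak.max() < 0, u_strong.max() > 0)
```

With the weak input, the field settles into the subthreshold attractor, u ≈ h + s < 0; with the strong input, it passes through the detection instability, and interaction lifts a self-stabilized peak above the input-defined level while suppressing the surround below the resting level.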


There are two qualitatively different attractor solutions of this equation, which are separated by instabilities. In the subthreshold state, the field activation is below zero everywhere, so that interaction is not engaged. If we neglect small values of the sigmoid and assume that inputs vary only very slowly in time, this attractor solution is given by
$$ {u}_0\left(x,t\right)=h+s\left(x,t\right)<0, $$
which tracks slowly varying input, s(x,t), apart from a downward shift by h. When input is zero, s(x,t) = 0, this is the resting state of the field.
The other solution is a self-stabilized peak of activation, u_p(x), whose activation level is lifted above the level specified by input, h + s(x,t), through excitatory neural interaction, while outside the peak the field is suppressed below the resting level, h, through inhibitory interaction (Fig. 3). For small levels of input, s(x,t), the subthreshold solution is monostable (top panel of Fig. 3), and for sufficiently large levels of input, the self-excited peak solution is monostable (bottom panel of Fig. 3). When input levels are increased, the subthreshold solutions become unstable in the detection instability. When input levels are again lowered, the self-stabilized solutions become unstable in the reverse detection instability. Because the reverse detection instability occurs at lower levels of input than the detection instability, the subthreshold and the self-excitatory solutions coexist bistably in a regime of intermediate levels of input strength. This bistable regime stabilizes detection decisions hysteretically. This is critical when perceptual states represented by peaks are continuously linked to sensory inputs. In the presence of noise, detection decisions must be stabilized against varying input levels to create coherent perceptual experience. Analogously, movement plans must persist when the input from perceptual or cognitive processes, which induces motor intentions, fluctuates.
Fig. 3

A Neural Field, u(x), over dimension, x, illustrated for three levels of input, s(x) (dashed green line), that increase from the top to the bottom panel. The subthreshold solution, u_0(x) (dashed blue line), is stable for the two lowest levels of input, and the self-excited peak solution, u_p(x) (blue solid line), is stable for the two highest levels of input. The bistable regime is delimited at high levels of input by the detection instability and at low levels of input by the reverse detection instability

Both subthreshold and self-stabilized peak solutions are continuously linked to sensory input. The peak may track, for instance, continuously shifting local input patterns. Moreover, if input strength increases in a graded, time-continuous way, the Neural Field autonomously creates a discrete detection event when it goes through the detection instability. Similarly, if input that supports a self-stabilized peak is gradually reduced in strength, the peak collapses at a critical point through the reverse detection instability. Such discrete events emerge from continuous-time neural dynamics through the dynamic instabilities. This provides a mechanism that is critical for understanding how sequential processes may arise in neural dynamics (Sandamirskaya and Schöner 2010).
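The detection instability, the reverse detection instability, and the hysteresis between them can be illustrated with a zero-dimensional caricature in which a single self-excitatory activation variable stands in for a localized peak. This is only a sketch, not the field model itself, and all parameters are illustrative:

```python
import math

def settle(u, s, h=-2.0, c=1.5, beta=4.0, tau=10.0, dt=1.0, steps=2000):
    """Relax tau*du/dt = -u + h + s + c*g(u) to its attractor from state u."""
    for _ in range(steps):
        g = 1.0 / (1.0 + math.exp(-beta * u))
        u += (dt / tau) * (-u + h + s + c * g)
    return u

# Ramp the input up, then down again, always continuing from the current state:
u, detected_at, lost_at = -2.0, None, None
for i in range(0, 41):                      # s: 0.00 -> 2.00
    s = i * 0.05
    u = settle(u, s)
    if detected_at is None and u > 0:
        detected_at = s
for i in range(40, -1, -1):                 # s: 2.00 -> 0.00
    s = i * 0.05
    u = settle(u, s)
    if lost_at is None and u < 0:
        lost_at = s

print(detected_at > lost_at)  # detection occurs at a higher input level
```

Because the reverse detection instability lies at a lower input level than the detection instability, the up-ramp switches the node on at a higher input strength than the level at which the down-ramp switches it off: the detection decision is stabilized hysteretically.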

With sufficiently strong interaction, or when broad inputs or a high resting level push activation close enough to threshold, the reverse detection instability may not be reached when the strength of a local input is reduced. In this case, a self-stabilized peak induced by local input remains stable and is sustained, even after any trace of the local input has disappeared. Such self-sustained activation peaks are the standard model of working memory (Fuster 2005). Mathematically, self-sustained peaks are marginally stable. They resist change in their shape but are not stable against shifts of the peak along the field dimensions. This leads to drift under the influence of noise or broad inputs. Such drift is psychophysically real: memory for metric information develops metric bias and increases variance over the timescale of tens of seconds (Spencer et al. 2009). Moreover, sustained peaks may be destabilized by competing inputs at other field locations. Again, this limitation of the stability of sustained peaks matches properties of working memory, which is subject to interference from new items entered into working memory.
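The self-sustained regime can likewise be illustrated with a single self-excitatory activation variable standing in for a peak (a sketch with illustrative parameters): with sufficiently strong self-excitation, activation induced by a transient input persists after the input is removed, while weaker self-excitation lets it decay.

```python
import math

def relax(u, s, c, h=-2.0, beta=4.0, tau=10.0, dt=1.0, steps=1000):
    """Relax tau*du/dt = -u + h + s + c*g(u) from state u under constant input s."""
    for _ in range(steps):
        g = 1.0 / (1.0 + math.exp(-beta * u))
        u += (dt / tau) * (-u + h + s + c * g)
    return u

# Strong self-excitation: activation stays on after the input is removed.
u = relax(-2.0, s=3.0, c=4.0)   # transient input induces activation
u = relax(u, s=0.0, c=4.0)      # input switched off
print(u > 0)                    # sustained: the "memory" persists

# Weaker self-excitation: the same transient input leaves no sustained state.
v = relax(-2.0, s=3.0, c=1.0)
v = relax(v, s=0.0, c=1.0)
print(v > 0)                    # activation decays back below threshold
```

In the strong-interaction regime, the reverse detection instability is never reached as input is removed, which is exactly the condition for sustained activation described above.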

If the inhibitory component of neural interaction is sufficiently strong and broad, the Neural Field may enact selection decisions as illustrated in Fig. 4. A self-stabilized activation peak may effectively select one of a number of locations in the underlying space that receive localized input. Typically, the most strongly stimulated site will be selected because, after input is provided, activation at that site reaches threshold earliest and begins to suppress activation at other locations before those locations can reach threshold and exert suppression in turn. Inhibitory interaction thus translates a temporal advantage into a competitive advantage. This is the general feature of decision making in Neural Field models (Trappenberg 2008). In the limit case illustrated in Fig. 4, in which two local inputs have the exact same strength, stochastic fluctuations may bias the competition one way or the other by chance. Once a decision has been made, it is stabilized by the neural interaction within the field. As for detection decisions, this is critical when selection decisions are made while a system is continuously linked to the sensory surface. Which site receives maximal stimulation may fluctuate in time, and a simple “winner-takes-all” rule would lead to random switching among the selection choices.
Fig. 4

The attractor states, u_p(x), of an activation field are shown in blue; a bimodal input, s(x), is shown in red. There are two attractor states (solid vs. dashed line), which coexist bistably. Each has a single self-stabilized peak positioned over one of the maxima of input, while activation at the other stimulated site is strongly suppressed by inhibitory interaction. The field states were obtained from simulations of the neural dynamics in which independent Gaussian white noise was added at each field location to probe the stability of the stationary states
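The selection scenario of Fig. 4 can be sketched numerically. The variant below combines a local excitatory kernel with global inhibition, a common simplification of broad inhibitory coupling in DFT models; all parameters and input locations are illustrative:

```python
import numpy as np

def simulate(s, h=-3.0, w_inh=1.2, tau=10.0, dt=1.0, steps=1000, beta=4.0):
    """Amari-type field with local excitation and global inhibition (a sketch)."""
    n = len(s)
    d = np.arange(n, dtype=float) - n // 2
    k_exc = 3.0 * np.exp(-d**2 / 8.0)        # local excitatory kernel
    u = np.full(n, float(h))
    for _ in range(steps):
        g = 1.0 / (1.0 + np.exp(-beta * u))
        exc = np.convolve(g, k_exc, mode="same")
        u += (dt / tau) * (-u + h + s + exc - w_inh * g.sum())
    return u

x = np.arange(101, dtype=float)
# two localized inputs; the one at x = 40 is slightly stronger
s = 3.5 * np.exp(-(x - 40)**2 / 18.0) + 3.0 * np.exp(-(x - 60)**2 / 18.0)
u = simulate(s)
print(int(np.argmax(u)))   # a peak forms at the more strongly stimulated site
```

Because activation at the more strongly stimulated site rises faster, it engages inhibition first and suppresses the competing site: the temporal advantage becomes a competitive advantage, and the losing site ends up suppressed below threshold.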

The resistance to change is also the reason why selection decisions are typically made in response to a transient in the input stream. Once locked into a decision, a Neural Field is not open to change unless the differences in input strength become very large and the selection instability moves the system from a bistable or multi-stable regime to a monostable regime in which only one selection decision is stable. Such resistance to change can be observed as change blindness, in which observers fail to detect a change in an image (Simons and Levin 1997). Normally, when an image changes in some location, the visual transients in the sensory input attract visual attention and help detect the change. That transient signal may be masked by turning the entire image off and then turning the locally changed image back on. Observers are blind to change when transients are masked this way, unless they happen to attend to the changed location. Because sensory inputs in the nervous system are typically transient in nature, the mechanism for making selection decisions in DFT is normally engaged by change, so that change detection in the absence of a masking stimulus may also be understood within DFT (Johnson et al. 2009).

The interplay between detection and selection instabilities brings to light a further facet of Neural Fields that provides a bridge to the emergence of categories from the underlying continuous representations. The limit case of a completely homogeneous field, in which all sites have the same dynamic properties, is an idealization, of course. It is easy to break such homogeneity, for instance, by learning processes. In the simplest case, locations at which activation peaks have frequently been induced may acquire higher resting levels, a learning mechanism that appears as a bias term in some connectionist models and is termed the “memory trace” in DFT (Schneegans and Schöner 2008). Hebbian learning may similarly reshape the connections from a cortical surface and induce inhomogeneous levels of activation (Sandamirskaya 2014). Such inhomogeneities may be amplified in Neural Fields by the detection instability into macroscopic decisions. In the extreme case, an inhomogeneous subthreshold solution may be made unstable by a perfectly homogeneous input, which may be construed as an increase of the resting level, h, that pushes the field over the threshold. The location that first reaches threshold self-excites while at the same time suppressing activation everywhere else. As a result, a self-stabilized peak will arise at that location, in a sense, out of nowhere, because there was no localized input that specified the state that the Neural Field should instantiate. The input that induces a detection instability will not typically be completely homogeneous. It may favor particular locations in the field. Figure 5 illustrates a field that has a pre-shaped subthreshold state, u_0(x). The pre-shape may have arisen from a learning process in which two field locations were frequently activated. An input, s(x), that drives the field through the detection instability provides a broad boost but also contains a small localized component.
The self-stabilized peak that emerges is positioned over the pre-activated location that is closest to the localized input. If the location that receives local input were varied continuously, the self-stabilized peak would continue to be positioned close to one of the two pre-activated locations. In this sense, the field responds categorically to spatially continuous input.
Fig. 5

A Neural Field with a pre-shaped subthreshold solution, u_0(x) (blue dashed line), is driven through the detection instability by an input, s(x) (red solid line), that contains a homogeneous boost across the entire field and a small localized contribution. A self-stabilized peak solution, u_p(x) (blue solid line), is induced. Its peak is positioned at the closest location that has prior activation, not at the stimulated location
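The scenario of Fig. 5 can be sketched in the same style (local excitation plus global inhibition as a simplification; all parameters and the pre-shaped locations are illustrative): a field pre-activated at two locations receives a homogeneous boost together with a small localized input, and the peak forms at the nearest pre-activated location.

```python
import numpy as np

def simulate(s, h, w_inh=1.2, tau=10.0, dt=1.0, steps=1000, beta=4.0):
    """Amari-type field with local excitation and global inhibition (a sketch);
    here the resting profile h(x) varies over the field (pre-shape)."""
    n = len(s)
    d = np.arange(n, dtype=float) - n // 2
    k_exc = 3.0 * np.exp(-d**2 / 8.0)
    u = h.copy()                             # start at the pre-shaped resting profile
    for _ in range(steps):
        g = 1.0 / (1.0 + np.exp(-beta * u))
        u += (dt / tau) * (-u + h + s + np.convolve(g, k_exc, mode="same")
                           - w_inh * g.sum())
    return u

x = np.arange(101, dtype=float)
# pre-shape: two locations (x = 30 and x = 70) were frequently activated before
h = -3.0 + 1.5 * np.exp(-(x - 30)**2 / 18.0) + 1.5 * np.exp(-(x - 70)**2 / 18.0)
# input: homogeneous boost plus a small localized bump at x = 36
s = 1.6 + 0.8 * np.exp(-(x - 36)**2 / 18.0)
u = simulate(s, h)
print(int(np.argmax(u)))   # peak near x = 30, the closest pre-activated site
```

The small localized component breaks the tie between the two pre-activated locations; the peak arises near x = 30 rather than at the other pre-activated site, which is suppressed: the field responds categorically to the continuous input.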

Neural Field Models of Embodied Cognition

Neural Field models have been used within the framework of DFT to account for a large and broad set of experimental signatures of the neural mechanisms underlying embodied cognition. Sensorimotor selection decisions were modeled for saccadic eye movements (Kopecz and Schöner 1995) and infant perseverative reaching (Thelen et al. 2001). Influences of non-stimulus factors were accounted for both for saccades (Trappenberg et al. 2001) and in the infant model. The capacity of Neural Field models to account for the confluence of multiple factors is, in fact, a major strength of the approach. An exhaustive account of the influence of many different task and intrinsic factors on motor response times was foundational for DFT (Erlhagen and Schöner 2002). That model also accounted for the time course of movement preparation as observed in the timed movement initiation paradigm. The same model was linked to neural population data from motor and premotor cortex (Bastian et al. 1998), and related ideas were used to account at a much more neurally detailed level for spatial decision making (Cisek 2006). The temporal evolution of population activity in the visual cortex has been modeled using Neural Fields (Jancke et al. 1999), including recent data that assessed cortical activity through voltage-sensitive dye imaging (Markounikau et al. 2010).

DFT has been the basis of a neural processing approach to the development of cognition (Spencer and Schöner 2003) that emphasizes the sensorimotor basis of cognitive development but has extended to an understanding of how spatial and metric memory develops (Spencer et al. 2006) and how infants build memories through their looking behavior (Schöner and Thelen 2006; Perone and Spencer 2013). A Neural Field model of metric working memory has led to predictions that have been confirmed experimentally (Johnson et al. 2009) and included an account for change detection (Johnson et al. 2008). Neural Field models of motion pattern perception (Hock et al. 2003), attention (Fix et al. 2011), imitation (Erlhagen et al. 2006), and the perceptual grounding of spatial language (Lipinski et al. 2012) illustrate the breadth of phenomena accessible to Neural Field modeling.

Neural Fields can also be used to endow autonomous robots with simple forms of cognition (Bicho et al. 2000). This work has shown how peaks of activation may couple into motor control systems, an issue first addressed in Kopecz and Schöner (1995). Robotic implementations of the concepts of DFT have generally been useful demonstrations of the capacity of Neural Fields to work directly with realistic online sensory inputs and to control motor behavior in closed loop. Robotic settings have also been useful to develop new theoretical methods that solve conceptual problems. A notable example is the generation of serially ordered sequences of actions (Sandamirskaya and Schöner 2010). Because the inner states of Neural Fields are stable, they resist change. Advancing from one step in a sequence to the next requires, therefore, the controlled generation of instabilities that release a previous state from stability and enable the activation of the subsequent state. The innovative step in Sandamirskaya and Schöner (2010) was to use a representation of a “condition of satisfaction” that compares current sensory input to the sensory input predicted at the conclusion of an action. This concept has since been used in a variety of models that autonomously generate sequences of mental events. A robotic example is the autonomous acquisition of a scene representation from sequences of covert shifts of attention (Zibner et al. 2011).

Ongoing work in Neural Field modeling of embodied cognition advances on three fronts. First, the elementary forms of cognition must be integrated into the more complex dynamics of movement generation, coordination, and motor control (Martin et al. 2009). This entails dynamically more complex attractor states, including limit cycles, as well as an understanding of how motor control in closed loop may interface with the population representations modeled by Neural Fields. Second, a systematic push from embodied toward higher cognition (Sandamirskaya et al. 2013) ultimately aims at an understanding of all cognition in neural processing terms. Such an account faces the challenge of how to reach the power of symbol manipulation while maintaining the grounding in sensory and motor processes. Third, Neural Field modeling needs to interface more closely with neural mechanisms of learning (Sandamirskaya 2014).



  1. Amari S (1977) Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27:77–87
  2. Bastian A, Riehle A, Erlhagen W, Schöner G (1998) Prior information preshapes the population representation of movement direction in motor cortex. NeuroReport 9:315–319
  3. Bicho E, Mallet P, Schöner G (2000) Target representation on an autonomous vehicle with low-level sensors. Int J Robotics Res 19:424–447
  4. Cisek P (2006) Integrated neural processes for defining potential actions and deciding between them: a computational model. J Neurosci 26(38):9761–9770
  5. Cohen MR, Newsome WT (2009) Estimates of the contribution of single neurons to perception depend on timescale and noise correlation. J Neurosci 29(20):6635–6648
  6. Erlhagen W, Schöner G (2002) Dynamic field theory of movement preparation. Psychol Rev 109:545–572
  7. Erlhagen W, Bastian A, Jancke D, Riehle A, Schöner G (1999) The distribution of neuronal population activation (DPA) as a tool to study interaction and integration in cortical representations. J Neurosci Methods 94:53–66
  8. Erlhagen W, Mukovskiy A, Bicho E (2006) A dynamic model for action understanding and goal-directed imitation. Brain Res 1083(1):174–188
  9. Fix J, Rougier N, Alexandre F (2011) A dynamic neural field approach to the covert and overt deployment of spatial attention. Cognit Comput 3(1):279–293
  10. Fuster JM (2005) Cortex and mind – unifying cognition. Oxford University Press, Oxford
  11. Hock HS, Schöner G, Giese MA (2003) The dynamical foundations of motion pattern formation: stability, selective adaptation, and perceptual continuity. Percept Psychophys 65:429–457
  12. Jancke D, Erlhagen W, Dinse HR, Akhavan AC, Giese M, Steinhage A et al (1999) Parametric population representation of retinal location: neuronal interaction dynamics in cat primary visual cortex. J Neurosci 19:9016–9028
  13. Johnson JS, Spencer JP, Schöner G (2008) Moving to higher ground: the dynamic field theory and the dynamics of visual cognition. New Ideas Psychol 26:227–251
  14. Johnson JS, Spencer JP, Luck SJ, Schöner G (2009) A dynamic neural field model of visual working memory and change detection. Psychol Sci 20:568–577
  15. Kopecz K, Schöner G (1995) Saccadic motor planning by integrating visual information and pre-information on neural, dynamic fields. Biol Cybern 73:49–60
  16. Lipinski J, Schneegans S, Sandamirskaya Y, Spencer JP, Schöner G (2012) A neuro-behavioral model of flexible spatial language behaviors. J Exp Psychol Learn Mem Cogn 38(6):1490–1511
  17. Markounikau V, Igel C, Grinvald A, Jancke D (2010) A dynamic neural field model of mesoscopic cortical activity captured with voltage-sensitive dye imaging. PLoS Comput Biol 6(9):e1000919
  18. Martin V, Scholz JP, Schöner G (2009) Redundancy, self-motion and motor control. Neural Comput 21(5):1371–1414
  19. McClelland JL, Rogers TT (2003) The parallel distributed processing approach to semantic cognition. Nat Rev Neurosci 4(4):310–322
  20. Perone S, Spencer JP (2013) Autonomy in action: linking the act of looking to memory formation in infancy via dynamic neural fields. Cognit Sci 37(1):1–60
  21. Riegler A (2002) When is a cognitive system embodied? Cognit Syst Res 3:339–348
  22. Sandamirskaya Y (2014) Dynamic neural fields as a step toward cognitive neuromorphic architectures. Front Neurosci 7
  23. Sandamirskaya Y, Schöner G (2010) An embodied account of serial order: how instabilities drive sequence generation. Neural Netw 23(10):1164–1179
  24. Sandamirskaya Y, Zibner SK, Schneegans S, Schöner G (2013) Using dynamic field theory to extend the embodiment stance toward higher cognition. New Ideas Psychol 31(3):322–339
  25. Schneegans S, Schöner G (2008) Dynamic field theory as a framework for understanding embodied cognition. In: Calvo P, Gomila T (eds) Handbook of cognitive science: an embodied approach. Elsevier, Amsterdam/Boston/London, pp 241–271
  26. Schöner G, Thelen E (2006) Using dynamic field theory to rethink infant habituation. Psychol Rev 113(2):273–299
  27. Simons DJ, Levin DT (1997) Change blindness. Trends Cogn Sci 1(7):261–267
  28. Spencer JP, Schöner G (2003) Bridging the representational gap in the dynamical systems approach to development. Dev Sci 6:392–412
  29. Spencer JP, Simmering VR, Schutte AR (2006) Toward a formal theory of flexible spatial behavior: geometric category biases generalize across pointing and verbal response types. J Exp Psychol Hum Percept Perform 32(2):473–490
  30. Spencer JP, Perone S, Johnson JS (2009) Dynamic field theory and embodied cognitive dynamics. In: Spencer J, Thomas M, McClelland J (eds) Toward a unified theory of development: connectionism and dynamic systems theory re-considered. Oxford University Press, Oxford, pp 86–118
  31. Thelen E, Schöner G, Scheier C, Smith L (2001) The dynamics of embodiment: a field theory of infant perseverative reaching. Behav Brain Sci 24:1–33
  32. Trappenberg T (2008) Decision making and population decoding with strongly inhibitory neural field models. In: Heinke D, Mavritsaki E (eds) Computational modelling in behavioural neuroscience: closing the gap between neurophysiology and behaviour. Psychology Press, London, pp 1–19
  33. Trappenberg T, Dorris MC, Munoz DP, Klein RM (2001) A model of saccade initiation based on the competitive integration of exogenous and endogenous signals in the superior colliculus. J Cogn Neurosci 13(2):256–271
  34. Zibner SKU, Faubel C, Iossifidis I, Schöner G (2011) Dynamic neural fields as building blocks for a cortex-inspired architecture of robotic scene representation. IEEE Trans Auton Ment Dev 3(1):74–91

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany