The Search for Consciousness in the Brain

Part of the book series: Springer Series in Cognitive and Neural Systems (SSCNS, volume 9)

Abstract

If consciousness is created by brain activity, either solely or in part, then traces of the relevant brain activity should be observable, given suitably subtle experiments and sufficiently sensitive apparatus. This is the route that increasing numbers of neuroscientists have followed over the last few decades in the search for what are called the 'neural correlates of consciousness' (NCC), though with rather uncertain results. The main source of this uncertainty is, I suspect, a lack of clarity as to what precisely is to be discovered: in other words, the difficulty of determining what exactly the brain activity represents as part of the upcoming conscious experience of a given subject.


References

  • Amari S-I (1977) Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27:77–87

  • Awh E, Smith EE, Jonides J (1995) Human rehearsal processes and the frontal lobes: PET evidence. Ann N Y Acad Sci 769:97–117

  • Corbetta M, Shulman GL (2002) Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci 3:201–215

  • Crick F (1994) The astonishing hypothesis. Scribner, New York

  • Crick F, Koch C (1990) Towards a neurobiological theory of consciousness. Semin Neurosci 2:263–275

  • Damasio A (2000) The feeling of what happens: body and emotion in the making of consciousness. Mariner Books, New York

  • Dehaene S, Naccache L, Le Clec'H G et al (1998) Imaging unconscious semantic priming. Nature 395(6702):597–600

  • DeYoe EA, Van Essen DC (1988) Concurrent processing streams in monkey visual cortex. Trends Neurosci 11(5):219–226

  • Driver J, Mattingley JB (1998) Parietal neglect and visual awareness. Nat Neurosci 1(1):17–22

  • Fellenz W, Taylor JG (2000) Establishing retinotopy by lateral-inhibition-type homogeneous neural fields. Neurocomputing 48:313–322

  • Fragopanagos N, Kockelkoren S, Taylor JG (2005) A neurodynamic model of the attentional blink. Brain Res Cogn Brain Res 24(3):568–586

  • Giese MA (1998) Dynamic neural field theory for motion perception. Kluwer Academic Publishers, Boston

  • Miall RC, Wolpert DM (1996) Forward models for physiological motor control. Neural Netw 9:1265–1279

  • Ohyama T, Nores WL, Murphy M, Mauk MD (2003) What the cerebellum computes. Trends Neurosci 26:222–227

  • Panksepp J (2005) Affective consciousness: core emotional feelings in animals and humans. Conscious Cogn 14:30–80

  • Pessoa L, McKenna M, Gutierrez E, Ungerleider LG (2002) Neural processing of emotional faces requires attention. Proc Natl Acad Sci U S A 99(17):11458–11463

  • Petersen R (1997) Modelling learning in the brain. Ph.D. thesis, University of London (unpublished)

  • Petersen RS, Taylor JG (1996) Reorganization of somato-sensory cortex after tactile training. In: Touretzky DS, Mozer MC, Hasselmo ME (eds) Advances in neural information processing systems. MIT Press, Cambridge, MA, pp 82–88

  • Stringer SM, Rolls ET, Trappenberg TP, de Araujo IET (2003a) Self-organising continuous attractor networks and motor function. Neural Netw 16:161–182

  • Stringer SM, Rolls ET, Trappenberg TP (2003b) Self-organizing continuous attractor networks with multiple active packets, and the representation of space. Neural Netw 17(1):5–27

  • Takeuchi A, Amari S-I (1999) Formation of topographic maps and columnar microstructures in nerve fields. Biol Cybern 35(2):63–72

  • Taylor JG (1997) Perception by neural networks. Neural Netw World 7:363–395

  • Taylor JG (1999) Towards the networks of the brain: from brain imaging to consciousness. Neural Netw 12:943–959

  • Taylor JG (2000a) The central representation: the where, how and what of consciousness. In: White KE (ed) The emergence of mind. Fondazione Carlo Erba, Milan, pp 117–148

  • Taylor JG (2000b) A control model for attention and consciousness. Soc Neurosci Abstr 26:2231, #839.3

  • Taylor JG (2000c) Neural 'bubble' dynamics in two dimensions: foundations. Biol Cybern 80:393–409

  • Taylor JG (2001) The central role of the parietal lobes in consciousness. Conscious Cogn 10:379–417

  • Taylor JG (2003) Paying attention to consciousness. Prog Neurobiol 41:305–335

  • Taylor JG, Alavi FN (1995) A global competitive neural network. Biol Cybern 72:233–248

  • Taylor JG, Fragopanagos N (2005) The interaction of attention and emotion. Neural Netw 18(4):353–369

  • Taylor JG, Rogers M (2002) A control model of the movement of attention. Neural Netw 15:309–326

  • Taylor JG, Fragopanagos N, Cowie R, Douglas-Cowie E, Fotinea S-E, Kollias S (2003) An emotional recognition architecture based on human brain structure. In: Proceedings of ICANN/ICONIP 2003. Springer, Berlin, pp 1133–1140

  • Trappenberg TP, Dorris M, Klein RM, Munoz DP (2001) A model of saccade initiation based on the competitive integration of exogenous and endogenous signals from the superior colliculus. J Cogn Neurosci 13:256–271

  • van Essen DC, DeYoe EA (2008) Concurrent processing in the primate visual cortex. In: Gazzaniga M (ed) Cognitive neuroscience. MIT Press, Cambridge, MA

  • Wolpert DM, Miall RC, Kawato M (1998) Internal models in the cerebellum. Trends Cogn Sci 2:338–347

Appendix: A Continuum Neural Field Model of the Brain

A.1 Mathematics of the Simple Brain

A CNFT model is based on the approximation of a module of neurons as composed of a two-dimensional sheet of neurons, with the neuron at the two-dimensional position r on the sheet having membrane potential V(r). The simplification in this ‘sheet-like’ assumption allows us to consider many neurons at once, although we can relatively easily reduce the sheet back to a finite number of neurons by using the localised distributions of neurons at a finite set of points on the sheet.

The dynamics of the neurons is also greatly simplified by assuming a graded response pattern for each neuron output (although this can be extended to spiking neurons if needed). Thus the output of each neuron is taken to be some function f( ) of its potential; the simplest case being a step function, although results have been obtained for more general sigmoid functions. The simplest dynamics is taken as

$$ \tau\,\mathrm{d}V(\mathbf{r})/\mathrm{d}t = -V(\mathbf{r}) + (w * f(V))(\mathbf{r}) + I(\mathbf{r}) - h $$
(2.1)

where I is the external input to the module at that point, w(r, r′) denotes the lateral connection strength between the neuron at r′ and that at r, −h is a constant inhibitory bias to all neurons to assure stability and suitable competition between neurons, and * is the usual symbol for the convolution product taken over the positions of the module, defined as:

$$ (w * f(V))(\mathbf{r}) = \int w(\mathbf{r}, \mathbf{r}^{\prime})\, f(V(\mathbf{r}^{\prime}))\, \mathrm{d}\mathbf{r}^{\prime} $$
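
To make the dynamics concrete, here is a minimal numerical sketch of Eq. (2.1) in one dimension, assuming a Mexican-hat lateral kernel and a step-function output f; the kernel and all parameter values are illustrative choices, not taken from the chapter.

```python
import numpy as np

N, L = 256, 10.0                      # grid points, length of the 1-D 'sheet'
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
tau, h, dt = 1.0, 0.5, 0.01           # time constant, inhibitory bias, Euler step

def mexican_hat(d, a=2.0, sa=0.5, b=1.0, sb=1.5):
    """Short-range excitation minus longer-range inhibition: w(r - r')."""
    return a * np.exp(-d**2 / (2 * sa**2)) - b * np.exp(-d**2 / (2 * sb**2))

W = mexican_hat(x[:, None] - x[None, :])      # N x N lateral kernel matrix

f = lambda V: (V > 0).astype(float)           # step-function output f(V)

V = -h * np.ones(N)                           # start at the resting level
I = 1.5 * np.exp(-(x - 1.0)**2)               # localized external input

for _ in range(2000):                         # Euler integration of Eq. (2.1)
    lateral = W @ f(V) * dx                   # convolution term (w * f(V))(r)
    V += dt / tau * (-V + lateral + I - h)

print("bubble width:", f(V).sum() * dx)       # supra-threshold region near x = 1
```

Starting from the resting level −h, the localized input drives part of the sheet above threshold, and the lateral kernel then sustains and sharpens the resulting 'bubble'.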

We now extend Eq. (2.1) to a set of interacting modules as

$$ \mathbf{T}\,\mathrm{d}\mathbf{V}(\mathbf{r})/\mathrm{d}t = -\mathbf{V}(\mathbf{r}) + (\mathbf{W} * f(\mathbf{V}))(\mathbf{r}) + \mathbf{I}(\mathbf{r}) - \mathbf{H} $$
(2.2)

where the extension of (2.1) to (2.2) is achieved as follows. V is now taken to be a vector-valued field of membrane potentials, each component denoting the neural field of a given module. W(r′, r) denotes the matrix of field connections: its diagonal entries are the lateral connections entering originally in (2.1), while its off-diagonal entries connect different modules. −H is now a diagonal matrix, with a constant value in each entry for a given module, though with possible differences across modules to allow different levels of overall inhibition. I denotes a vector field of external inputs, each component again being associated with a given neural field module. Finally, the matrix T denotes a diagonal matrix of time constants (in (2.1), too, this can be extended to different time constants for different neuron positions if desired). We note that we are simplifying by taking the same co-ordinates for each module; that can be generalized. Further, we are taking only neurons of the same type in (2.1) and (2.2); this can also be extended to inhibitory and excitatory forms, or to different sub-populations, by suitably extending the notation in these equations.
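
To make the multi-module structure of (2.2) concrete, the following sketch (continuing the one above) couples two modules on the same grid: the diagonal blocks of W are lateral Mexican-hat kernels, the off-diagonal blocks are broad excitatory couplings, and the entries of T and H differ across the modules. All values are again illustrative assumptions.

```python
# Continues the single-module sketch above (reuses x, dx, dt, N, f, mexican_hat).
V1, V2 = -0.5 * np.ones(N), -0.8 * np.ones(N)    # one field per module
tau1, tau2 = 1.0, 1.5                            # diagonal entries of T
h1, h2 = 0.5, 0.8                                # diagonal entries of H

W11 = mexican_hat(x[:, None] - x[None, :])                        # lateral, module 1
W22 = mexican_hat(x[:, None] - x[None, :], a=1.5, sa=0.4, b=0.8, sb=1.2)
W12 = 0.3 * np.exp(-(x[:, None] - x[None, :])**2)                 # module 2 -> 1
W21 = 0.5 * np.exp(-(x[:, None] - x[None, :])**2)                 # module 1 -> 2

I1 = 1.5 * np.exp(-(x - 1.0)**2)       # module 1 receives the external input
I2 = np.zeros(N)                        # module 2 is driven only through W21

for _ in range(2000):                   # Euler integration of Eq. (2.2)
    dV1 = -V1 + (W11 @ f(V1) + W12 @ f(V2)) * dx + I1 - h1
    dV2 = -V2 + (W22 @ f(V2) + W21 @ f(V1)) * dx + I2 - h2
    V1 += dt / tau1 * dV1
    V2 += dt / tau2 * dV2

print("active width, module 2:", f(V2).sum() * dx)   # bubble induced by coupling
```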

We now indicate some of the features expected from (2.2), extending those possessed by (2.1) – bubble existence, dynamics of bubbles and learning structures.

  (a)

    Basic Features of the Dynamics

    A Liapunov function can be derived for the dynamics of (2.2), extending that for (2.1), in the case of a symmetric connection matrix W (symmetric across modules as well as across the lateral connections within each module), as in (Giese 1998), thus providing general stability arguments; a numerical illustration is sketched just after this list.

  (b)

    Existence of Bubbles

    The one- and two-dimensional bubble analyses of (Amari 1977; Taylor 2000a, b, c), and the many simulations in the references (Fellenz and Taylor 2000; Taylor 1997; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b), lead to the expectation that multiple bubbles will exist across a range of modules. It is possible to develop equations for coupled bubbles from (2.2).

  (c)

    Dynamics of Bubbles

    Bubbles have been found (Amari 1977; Taylor 1997, 2000a, b, c; Takeuchi and Amari 1999; Fellenz and Taylor 2000; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b) to be driven to flow to regions of highest input. A similar situation is expected to occur for the coupled bubbles in the expanded version (2.2) of CNFT.

  (d)

    Learning Structures

    The original one-dimensional work of (Takeuchi and Amari 1999) was extended to two dimensions in (Taylor 2000a, b, c), and to applications to specific brain modules in (Fellenz and Taylor 2000; Taylor 1997; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b). The most crucial feature of this work was the presence and exploitation of instability in the learning-law dynamics, producing discontinuous periodic structures that map higher-dimensional input spaces down to the two-dimensional sheet in a clearly defined, even analytic, manner. Similar structures are to be expected in the extended case of (2.2).
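
As promised under item (a), here is a numerical check (not a proof) of the Liapunov property, reusing the single-module sketch of Eq. (2.1) with its symmetric kernel: under the step-function output assumption, the energy E[f] = −(1/2)⟨f, w∗f⟩ − ⟨f, I − h⟩ should be non-increasing along trajectories, up to discretization error.

```python
# Continues the single-module sketch of Eq. (2.1) (reuses W, f, I, N, dx, dt, tau, h).
def energy(V):
    a = f(V)
    return -0.5 * (a @ W @ a) * dx * dx - (a @ (I - h)) * dx

V = -h * np.ones(N)
for step in range(2001):
    if step % 400 == 0:
        print(f"step {step:5d}  E = {energy(V):.4f}")   # should be non-increasing
    V += dt / tau * (-V + W @ f(V) * dx + I - h)
```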

It might be supposed that the system of Eq. (2.2) can be reduced to an equation identical in form to (2.1), with the position vectors of the neurons obtained, for example, by squashing the various modules together in some way. The single lateral interaction weight function W(r, r′) (with r and r′ running over the whole set of adjoined modules) would then be highly non-local, with none of the structure of a Mexican hat or other locally bounded function that is centrally positive and turns negative far enough away from the origin (as arises in each component of the lateral connection matrix W in (2.2)). Each component of the matrix W introduced in (2.2) would thus have lost its intuitive character in the new squashed representation of the form (2.1).

It is clear that the module-based structure of (2.2), with its inherent similarity to the cortex, is of great value in extending to a hierarchy of modules results such as those obtained in (Amari 1977; Taylor 1997, 2000a, b, c; Takeuchi and Amari 1999; Fellenz and Taylor 2000; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b), as even a summary reading of these papers shows. This is especially so for those results which depend crucially on the Mexican hat structures used in detailed simulation of psychophysical results (Fellenz and Taylor 2000; Taylor 1997; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b). In the 'squashed' framework, however, we lose the transparency needed to extend these various results from a single module to a hierarchy of such modules. Thus we would not expect it to be easy to prove, in the squashed representation, results such as the existence of locally bounded 'bubbles' (Amari 1977; Taylor 2000a, b, c; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b) coupled across various modules (in general of different sizes), or the development, through Hebbian learning, of suitably periodic structures of afferent weights from one module to the next in the original hierarchy (as known experimentally, with increasing wavelength in going from V1 to V2 to V4 and so on; see especially Fig. 24.6 of DeYoe and Van Essen 1988). Nor would the detailed properties of these solutions be easily derivable. The elegant properties of the solutions of (2.1) can be extended upwards in the hierarchy by means of the transparent structure of (2.2); that transparency is lost in the far less intuitive squashed multi-modular system considered as an alternative.

The brain has not used such a 'squashed' representation to perform its dynamics: the separate modules are kept geographically separate. Such a representation would also appear to be computationally inefficient, or nature would presumably have preferred it and squashed the modules all together.

A final general point about the structures of (2.1) and (2.2) concerns the inhibitory bias, which arises as the constant inhibitory field generated by the constant h in (2.1), or by its extension to the diagonal matrix H in (2.2). These have a strong effect, as any reading of the papers (Amari 1977; Taylor 1997, 2000a, b, c; Takeuchi and Amari 1999; Fellenz and Taylor 2000; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b) indicates, in association with the existence of bubble solutions and with the periodic, unstable learning equations for the afferent connection strengths. The constant(s) could be absorbed into the definition of the potential V(r), but again this hides the meaning ascribed to V as a local, laterally connected field vanishing at infinity. This is all part of the general structure of the CNFT approach, as more fully described in (Amari 1977; Taylor 1997, 2000a, b, c; Takeuchi and Amari 1999; Fellenz and Taylor 2000; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b).

Let us consider how the structure given by (2.2) can be used to extend some of the specific results of (2.1). We consider two modules only, as already studied in (Taylor 1997) for the case of color illusions: there, two color modules were involved, one for each color, and the bubble solutions were found to fit the patterns observed when a blue/red boundary was stabilized on subjects' retinas (one such pattern being the non-intuitive 'sea' of blue and red mixed together experienced by some subjects). In the asymptotic limit in time, (2.2) becomes:

$$ u(x) = \int W(x - x^{\prime})\, f[u(x^{\prime})]\, \mathrm{d}x^{\prime} + \int W^{\prime}(x - y^{\prime})\, f[v(y^{\prime})]\, \mathrm{d}y^{\prime} + h $$
(2.3a)
$$ v(y) = \int W^{\prime\prime}(y - y^{\prime})\, f[v(y^{\prime})]\, \mathrm{d}y^{\prime} + \int W^{\prime\prime\prime}(y - x^{\prime})\, f[u(x^{\prime})]\, \mathrm{d}x^{\prime} + h^{\prime} $$
(2.3b)

where we have assumed translation invariance of the interconnection matrices, and the variables x, etc. denote suitable co-ordinates on the cortical surface. In terms of the standard notation (Amari 1977; Taylor 2000a, b, c), and defining the function

$$ U(x) = \int_0^{x} W(y)\, \mathrm{d}y $$
(2.4)

we can rewrite (2.3a, b), for a local ‘double bubble’ solution, with x restricted to the interval [a, b] and y in the interval [c, d], as

$$ \begin{aligned} u(x) &= \left[U(x-a) - U(x-b)\right] + \left[U^{\prime}(x-c) - U^{\prime}(x-d)\right] + h \\ v(y) &= \left[U^{\prime\prime}(y-a) - U^{\prime\prime}(y-b)\right] + \left[U^{\prime\prime\prime}(y-c) - U^{\prime\prime\prime}(y-d)\right] + h^{\prime} \end{aligned} $$
(2.5)

where

$$ u(a) = u(b) = v(c) = v(d) = 0 $$
(2.6)

Applying the constraints (2.6) to (2.5), we obtain an extension of the corresponding equation set of (Amari 1977):

$$ \begin{aligned} 0 &= h + U(b-a) + \left[U^{\prime}(a-c) - U^{\prime}(a-d)\right] \\ 0 &= h + U(b-a) + \left[U^{\prime}(b-c) - U^{\prime}(b-d)\right] \\ 0 &= h^{\prime} + U^{\prime\prime\prime}(d-c) + \left[U^{\prime\prime}(c-a) - U^{\prime\prime}(c-b)\right] \\ 0 &= h^{\prime} + U^{\prime\prime\prime}(d-c) + \left[U^{\prime\prime}(d-a) - U^{\prime\prime}(d-b)\right] \end{aligned} $$
(2.7)

Equations (2.7) can be investigated for coupled 'bubble' solutions, and such solutions can be found, as can extensions of the theorems of (Amari 1977) on their nature (as also discussed in (Taylor 1997)).
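
A numerical route to such solutions is sketched below: the four boundary conditions (2.7) are solved for (a, b, c, d) by root finding, with each U-function obtained by quadrature from an assumed kernel via (2.4). The kernels, bias constants and initial guess are all illustrative, and a reasonable initial guess is needed for convergence.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Assumed kernels behind U, U', U'', U''': Mexican hats within modules,
# broad excitation between them (purely illustrative choices).
k    = lambda d: 2.0 * np.exp(-d**2 / 0.5) - 1.0 * np.exp(-d**2 / 4.5)   # U
kp   = lambda d: 0.3 * np.exp(-d**2)                                     # U'
kpp  = lambda d: 0.3 * np.exp(-d**2)                                     # U''
kppp = lambda d: 1.5 * np.exp(-d**2 / 0.4) - 0.8 * np.exp(-d**2 / 3.0)   # U'''

U = lambda kern, s: quad(kern, 0.0, s)[0]          # Eq. (2.4)
h, hp = -0.3, -0.3                                 # assumed bias constants

def conditions(p):
    a, b, c, d = p                                  # the four equations (2.7)
    return [h  + U(k, b - a)    + U(kp, a - c)  - U(kp, a - d),
            h  + U(k, b - a)    + U(kp, b - c)  - U(kp, b - d),
            hp + U(kppp, d - c) + U(kpp, c - a) - U(kpp, c - b),
            hp + U(kppp, d - c) + U(kpp, d - a) - U(kpp, d - b)]

a, b, c, d = fsolve(conditions, [-0.45, 0.45, -0.4, 0.4])
print("coupled bubble widths:", b - a, d - c)
```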

The system of Eq. (2.5), and the time-dependent equations (2.2) from which they arose, can also be analyzed for stability by a first-order perturbation approach. The basic result is the coupled equations (for a set of coupled modules indexed by i, j)

$$ \begin{aligned} \mathrm{d}\Delta_{i}/\mathrm{d}t = (1/\tau_{i})\,(1/c_{i1} + 1/c_{i2})\Big\{ &\Big[\textstyle\sum_{j} W_{ij}(x_{i2} - x_{j1}) - W_{ij}(x_{i2} - x_{j2})\Big] \\ &+ h_{i} - \Big[\textstyle\sum_{j} W_{ij}(x_{i1} - x_{j1}) - W_{ij}(x_{i1} - x_{j2})\Big]\Big\} \end{aligned} $$
(2.8)

where Δi = xi2 − xi1 is the width of the bubble in the i'th module, and ci1 = ∂ui(xi1, t)/∂x is the gradient of the field at the bubble boundary (and similarly for ci2). These equations extend the stability results of (Amari 1977), and so allow determination of the stability of bubbles, and of their movement in the presence of new inputs, in the coupled-module situation. Again these results could not easily have been obtained using the squashed version of the coupled-module Eq. (2.2).
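
For orientation, in the single-module case a standard reduction in the style of (Amari 1977) gives a one-dimensional width dynamics, dΔ/dt = (1/τ)(1/c1 + 1/c2)(U(Δ) + h), whose stationary widths solve U(Δ) + h = 0 and are stable where U is decreasing; the sketch below integrates this ODE under assumed constants.

```python
import numpy as np
from scipy.integrate import quad

k = lambda d: 2.0 * np.exp(-d**2 / 0.5) - 1.0 * np.exp(-d**2 / 4.5)  # assumed kernel
U = lambda s: quad(k, 0.0, s)[0]                    # Eq. (2.4)
tau, c1, c2, h = 1.0, 1.0, 1.0, -0.3                # illustrative constants

Delta, dt = 0.7, 0.01                               # initial bubble width
for _ in range(4000):                               # Euler step of the width ODE
    Delta += dt * (1.0 / tau) * (1.0 / c1 + 1.0 / c2) * (U(Delta) + h)

print("stationary width:", Delta)                   # a stable root of U + h
```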

A.2 Insertion of Control Structures

So far we have only extended the standard CNFT model of a single module to one of several such modules, without any account of how functional differentiation can be included in the system. We now turn to that important aspect. To justify our approach, we need to accept that we cannot expect our extended model to learn its feed-forward and feedback connections entirely on its own, without any use of genetic memory. That memory was built up over hundreds of millions of years by the pressure of an ever-changing environment. Such pressure has led to crucial functional variations between modules that allow them to be differentiated into input-processing modules, semantic-map modules, higher-level control modules and response modules, among others (with further differentiation at the sub-cortical level).

The first and fourth of these module types have already been discussed in the brain context in (Fellenz and Taylor 2000; Taylor 1997; Petersen and Taylor 1996; Petersen 1997; Trappenberg et al. 2001; Giese 1998; Stringer et al. 2003a, b). Here we turn specifically to the third class, the modules for attention control, characterized by suitable assumptions on the depth, strength and width of the lateral connections internal to a module, as well as by the temporal flow of activity. The connection matrix elements affect the size of bubbles and the overall winner-take-all (WTA) character of the module. The higher-level modules in the parietal lobe are therefore allocated large inhibitory connection values, so as to provide a strong bias towards competition and hence towards the generation of attention control signals.

The temporal flow of activity of the brain has been observed in many ERP studies. It is observed that early input flows through low-level sensory cortices rapidly to prefrontal cortex, and the incoming information is then used to control later processing by feedback through parietal and temporal lobes. Such a flow pattern impresses on the brain a clear functional differentiation: prefrontal cortices act as goal systems to control more detailed lower level processing.

A set of coupled CNFT equations was developed in (Taylor and Rogers 2002) and used as the basis of a simulation of the Posner attention benefit effect, based on the above features. The simulation differentiated function both by differences in lateral connectivity and inhibition across modules and by differences in the temporal flow of activity across them. In particular, we incorporated in (Taylor and Rogers 2002) the early flow of activity to the frontal lobes, acting as an exogenous goal bias to the attention movement controller in the parietal lobe, which in turn feeds activity back to the sensory (and motor) cortices. This feedback is taken to be achieved by attention contrast gain, and so is applied in a quadratic sigma-pi manner to the input on the sensory cortical neurons. In terms of three modules (the goal module with neural activity g(r), the attention movement controller with activity v(r), and the input sensory module with activity u(r)), the resulting control equations are (Taylor and Rogers 2002):

$$ \tau\,\mathrm{d}u/\mathrm{d}t = -u + w * I + {w^{\prime}}^{**} I \times f(v) - h $$
(2.9)
$$ \tau\,\mathrm{d}v/\mathrm{d}t = -v + w^{\prime\prime} * f(g) $$
(2.10)
$$ g = g_{\mathrm{des}} $$

where gdes is the desired goal; the dynamics of the goal system is neglected here. We can include both exogenous and endogenous attention goals by choosing gdes either as the external input (the exogenous case) or as a given, externally specified activation of the goal system in prefrontal cortex (the endogenous case). In (2.9) we use the notation w′**I × f(v) to denote the quadratic sigma-pi contrast-gain amplification of the input, with

$$ ({w^{\prime}}^{**} I \times f(v))(\mathbf{r}) = \int w^{\prime}(\mathbf{r}, \mathbf{r}^{\prime}, \mathbf{r}^{\prime\prime})\, I(\mathbf{r}^{\prime})\, f(v(\mathbf{r}^{\prime\prime}))\, \mathrm{d}\mathbf{r}^{\prime}\, \mathrm{d}\mathbf{r}^{\prime\prime} $$
(2.11)

as was used in (Taylor and Rogers 2002) and references therein.
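
To show how the sigma-pi term is computed in practice, the sketch below discretizes (2.11) on a grid: a rank-3 weight tensor w′[r, r′, r″] is contracted against the input I(r′) and the attention signal f(v(r″)). The particular tensor chosen (local in both primed coordinates around r) is purely an illustrative assumption.

```python
import numpy as np

N = 64
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]

g = np.exp(-(x[:, None] - x[None, :])**2)       # N x N Gaussian neighbourhood
w3 = g[:, :, None] * g[:, None, :]              # w'[r, r', r''] = g(r,r') g(r,r'')

I = np.exp(-(x - 1.0)**2)                       # external input I(r')
fv = (np.exp(-x**2) > 0.5).astype(float)        # thresholded attention field f(v)

# Eq. (2.11): integrate over both primed coordinates.
gain = np.einsum('abc,b,c->a', w3, I, fv) * dx * dx
print("gain peak at x =", x[np.argmax(gain)])
```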

We have already noted that the attention control system (2.9) and (2.10) can be used to simulate the response-time benefit for an attended stimulus and related competitive processes. It has been extended to sensory-motor attention control (Taylor and Fragopanagos 2005; Taylor 2003).

More generally, the above control model has been extended to contain a monitor and a predictor of the future state, as in the CODAM model (Taylor 2003), and was applied recently to simulate the attentional blink in (Fragopanagos et al. 2005). This was achieved by including a working memory or buffer module WM and a monitor MON. The new system of equations, which updates the set (2.9), (2.10), is now

$$ \tau\,\mathrm{d}u/\mathrm{d}t = -u + w * I + {w^{\prime}}^{**} I \times f(v) - h $$
(2.12)
$$ \tau\,\mathrm{d}v/\mathrm{d}t = -v + w^{\prime\prime} * f(g) + w^{\prime\prime\prime} * F(\mathrm{MON}) - w^{\prime\prime\prime\prime} * f(\mathrm{WM}) $$
(2.13)
$$ g = g_{\mathrm{des}} $$
(2.14)
$$ \mathrm{MON} = \left|\mathrm{WM} - g\right| $$
(2.15)
$$ \tau^{\prime}\,\mathrm{d}\mathrm{WM}/\mathrm{d}t = -\mathrm{WM} + f(u) $$
(2.16)

where we have assumed that the input to the WM from the input-processing layer is mainly excitatory. Moreover, we have included a different time constant for the working memory buffer WM in (2.16), in order to allow for longer retention times (although in the attentional-blink simulation of (Fragopanagos et al. 2005) we used a set of reciprocally coupled neurons, which offer greater flexibility, rather than manipulating the neuron time constants). We also use the extended Hebbian learning law

$$ \delta w^{\prime}(x, x^{\prime}, x^{\prime\prime}) = \varepsilon\, f(u(x))\, f(v(x^{\prime\prime}))\, I(x^{\prime}) $$

where ε is a learning rate.
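
Continuing the previous sketch, this learning law is a three-factor outer product over the sigma-pi weight tensor; the post-synaptic field used below is a placeholder, and ε is an illustrative rate.

```python
# Continues the sigma-pi sketch above (reuses x, I, fv, w3).
eps = 0.01
fu = (np.tanh(x) > 0).astype(float)             # placeholder post-synaptic f(u(x))
dw3 = eps * np.einsum('a,b,c->abc', fu, I, fv)  # delta w'[x, x', x''], three factors
w3 += dw3
```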

In Eq. (2.13) (and developed more fully in (Fragopanagos et al. 2005)) we included inhibitory feedback from the WM system to all nodes in the attention movement controller not coding for the input I. In this way attention becomes a scarce resource: if a large attention load must be processed, say with many distracters, then the error registered in the MON module can become large, and so will boost the creation of the attention control signal, as in (2.13).
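
Putting the pieces together, here is a discrete-time sketch of the full set (2.12), (2.13), (2.14), (2.15), and (2.16) on a single grid; all kernels, parameters and the goal template are assumptions, and the sigma-pi gain of (2.12) is collapsed to a separable product (smoothed input times attention output) purely for brevity.

```python
import numpy as np

N, dt, tau, tau_wm, h = 128, 0.01, 1.0, 2.0, 0.5
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]
f = lambda z: 1.0 / (1.0 + np.exp(-4.0 * z))          # sigmoid output

gauss = lambda s: np.exp(-(x[:, None] - x[None, :])**2 / (2 * s**2))
w_u, w_g, w_mon, w_wm = gauss(0.5), gauss(0.8), gauss(0.8), gauss(0.8)

I = np.exp(-(x - 1.0)**2)        # external input
g = np.exp(-(x - 1.0)**2)        # goal template, g = g_des (Eq. 2.14)

u, v, WM = np.zeros(N), np.zeros(N), np.zeros(N)
for _ in range(3000):
    MON = np.abs(WM - g)                               # monitor, Eq. (2.15)
    gain = (w_u @ I) * dx * f(v)                       # simplified contrast gain
    u  += dt / tau * (-u + (w_u @ I) * dx + gain - h)          # Eq. (2.12)
    v  += dt / tau * (-v + (w_g @ f(g)) * dx
                      + (w_mon @ MON) * dx - (w_wm @ f(WM)) * dx)  # Eq. (2.13)
    WM += dt / tau_wm * (-WM + f(u))                   # Eq. (2.16)

print("peak attention at x =", x[np.argmax(v)])
```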

We still have to face hard learning problems. The possibility of applying developmental knowledge to the learning process also needs to be considered. This can be achieved by including learning in an incremental fashion, so that lower level representations will be learnt first and stabilized before further learning, under top-down control, is allowed. Furthermore the manner in which goal representations in prefrontal cortex could arise needs to be considered.

Emotion is minimally included by the addition of ‘valence’ modules (amygdala and orbito-frontal cortex), following the emotion brain architecture already presented elsewhere (Taylor et al. 2003), but now represented in the CNFT framework. We refer the interested reader to these developments, and to the results of Chap. 13 for a more complete discussion.

A.3 Results of the Program

The basic results of the above program are as follows:

  1.

    Understanding of the level of coupled bubble formation and dynamics, under simple feed-forward and feedback coupling assumptions, with the sizes and mutual influences of bubbles determined by relative parameter choices and fan-in values in the various modules. Some progress on this has already been made (Taylor 1997), and further structural results were presented in Eqs. (2.3), (2.4), (2.5), (2.6), (2.7), (2.8), (2.9), (2.10), and (2.11);

  2.

    Learning of cortical representations, of both feedback and feed-forward form, supporting topographic spatial representations and localized object representations (using pre-specified fan-ins depending on the site of the CNFT module being considered);

  3.

    Provision of a basis for adding further complexity into the system, as well as for applying other criteria, such as information maximization, to constrain the approach. One of the unknowns in a general control problem is the quantity (if any) being optimized by the control system; for the brain this is expected to be a function of the total reward, although that cannot be evaluated solely from the net dopamine influx (since there are also internal sources).

  4.

    It could be argued that the bubbles of neural activity arising at certain sites in the coupled modules of the CNFT approach could be interpreted as generating consciousness. But that is an unsupported conjecture, needing much more careful discussion of the nature of these bubbles. Many other features are also left out of such an identification: the relation to attention, and the fact that prolonged but non-conscious brain activity can be detected at various sites in the brain. The main problem, however, is that there is rather little detailed analysis of the various modules in the coupled CNFT approach. Such analysis is needed to relate the model to the many results from brain imaging experiments on various aspects of consciousness.

A.4 Conclusions

The above discussion provides a general framework with which to attack the dynamics of the brain. It allows stable-state analysis as well as extension to the temporal dynamics of a set of interacting CNFT modules. Learning also presents dynamical features that allow analysis of the pattern structure of the synaptic weights. The nature of emotional modulation has yet to be properly inserted by use of reward learning, but this can be included in subsequent versions of the multi-modular CNFT brain model. Much work lies ahead, but general features have already been obtained that indicate the value of the approach.

Open questions still to be faced in the above framework are numerous; among them the following can be singled out:

  (a)

    What is the manner in which synchronization of neural activity is achieved (by extension to spiking neurons)?

  (b)

    What is the mode of operation of acetylcholine as a modulator both of the neural dynamics and of the synaptic learning process, so as to arrive at the contrast-gain attention amplification feedback of Eqs. (2.9) and (2.11)?

  (c)

    What level of interaction occurs between modalities, and between sensory processing and motor response (as observed in the psychological refractory period effect)?

  (d)

    How can the more complex mathematical structures, obtained by adding the increasing complexity of the real brain to the above CNFT model (as indicated in Sect. 2.2), be analyzed?

  (e)

    How important is the greater complexity granted to the living brain, and which particular components are most important?

Copyright information

© 2013 Springer Science+Business Media Dordrecht

Cite this chapter

Taylor, J.G. (2013). The Search for Consciousness in the Brain. In: Solving the Mind-Body Problem by the CODAM Neural Model of Consciousness?. Springer Series in Cognitive and Neural Systems, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7645-6_2
