Breaking the Synaptic Dogma: Evolving a Neuro-inspired Developmental Network

  • Gul Muhammad Khan
  • Julian F. Miller
  • David M. Halliday
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5361)


The majority of artificial neural networks are static and lifeless and do not change themselves within a learning environment. In these models learning is seen as the process of obtaining the strengths of connections between neurons (i.e. weights). We refer to this as the 'synaptic dogma'. This is in marked contrast with biological networks, which have time-dependent morphology and in which practically all neural aspects can change or be shaped by mutual interactions and interactions with an external environment. Inspired by this and many aspects of neuroscience, we have designed a new kind of neural network. In this model, neurons are represented by seven evolved programs that model particular components and aspects of biological neurons (dendrites, soma, axons, synapses, electrical and developmental behaviour). Each network begins as a small randomly generated network of neurons. When the seven programs are run, the neurons, dendrites, axons and synapses can increase or decrease in number and change in interaction with an external environment. Our aim is to show that it is possible to evolve programs that allow a network to learn through experience (i.e. encode the ability to learn). We report on our continuing investigations in the context of learning how to play checkers.
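The developmental idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' model: the real system runs seven evolved Cartesian Genetic Programming programs per neuron, whereas here a single hand-written rule plays the role of a "developmental program", and each dendrite branch is reduced to one scalar health value that the rule uses to grow, prune, or replicate branches in response to activity.

```python
import random

random.seed(0)

class Neuron:
    """Toy neuron whose dendrite branches grow and die during 'development'.

    Stand-in for the paper's neuron, whose seven evolved programs govern
    the behaviour of dendrites, soma, axons and synapses.
    """

    def __init__(self, n_dendrites):
        # each dendrite branch is tracked by a health value in [0, 1]
        self.dendrites = [random.random() for _ in range(n_dendrites)]

    def develop(self, activity):
        """One developmental step: activity nudges branch health up,
        decay nudges it down; strong branches replicate, weak ones die."""
        self.dendrites = [min(1.0, h + 0.1 * activity - 0.05)
                          for h in self.dendrites]
        strong = [h for h in self.dendrites if h > 0.8]        # candidates to replicate
        self.dendrites = [h for h in self.dendrites if h > 0.2]  # prune weak branches
        self.dendrites += [h * 0.5 for h in strong]            # spawn child branches

# a small random initial network, as in the paper's setup
net = [Neuron(3) for _ in range(5)]
for step in range(10):
    for neuron in net:
        neuron.develop(activity=random.random())

print(sum(len(n.dendrites) for n in net))  # total branch count after development
```

The point of the sketch is only the structural one made in the abstract: network morphology is an outcome of running per-neuron programs against environmental activity, not a fixed architecture with trained weights.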







Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Gul Muhammad Khan¹
  • Julian F. Miller¹
  • David M. Halliday¹

  1. Electronics Department, University of York, York, UK
