Increasing the Biological Inspiration of Neural Networks

  • Domenico Parisi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2486)

Abstract

We describe three extensions of current neural network models in the direction of increasing their biological inspiration. Unlike “classical” connectionism, Artificial Life does not study single disembodied neural networks living in a void but studies evolving populations of neural networks that have a physical body and a genotype and live in a physical environment. A second extension is in the direction of richer, recurrent network structures that allow networks to self-generate their own input, including linguistic input, in order to reproduce typically human “mental life” phenomena. A third extension is the attempt to reproduce the noncognitive aspects of behavior (emotion, motivation, global psychological states, behavioral style, psychological disorders, etc.) by incorporating other aspects of the nervous system in neural networks (e.g., sub-cortical structures, neuromodulators) and by reproducing the interactions of the nervous system with the rest of the body, and not only with the external environment.
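The first extension — evolving populations of neural networks whose connection weights are encoded in a genotype — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual model: a population of one-neuron networks is evolved by selective reproduction with mutation toward a simple sensorimotor mapping standing in for an "environment".

```python
import math
import random

random.seed(0)

def make_genotype(n=3):
    # Genotype: the weights [w1, w2, bias] of a one-neuron network.
    return [random.uniform(-1, 1) for _ in range(n)]

def act(genotype, x1, x2):
    # A tanh unit mapping sensory input to motor output.
    return math.tanh(genotype[0] * x1 + genotype[1] * x2 + genotype[2])

def fitness(genotype):
    # Toy "environment": reward networks whose output tracks x1 - x2.
    cases = [(0.0, 0.5), (1.0, 0.0), (0.5, 1.0), (0.2, 0.2)]
    err = sum((act(genotype, a, b) - (a - b)) ** 2 for a, b in cases)
    return -err

def mutate(genotype, sigma=0.1):
    # Offspring inherit the parent genotype with small Gaussian noise.
    return [w + random.gauss(0, sigma) for w in genotype]

pop = [make_genotype() for _ in range(20)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                       # selective reproduction
    pop = [mutate(g) for g in parents for _ in range(4)]

best = max(pop, key=fitness)
```

In the richer Artificial Life models the paper describes, the fitness function is not a fixed target mapping but emerges from the network's behavior in a physical environment; the sketch only shows the genotype/population machinery.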

Keywords

Artificial life · Mental life

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Domenico Parisi
  1. Institute for Cognitive Sciences and Technologies, National Research Council, Rome, Italy