A Library to Implement Neural Networks on MIMD Machines

  • Yann Boniface
  • Frédéric Alexandre
  • Stéphane Vialle
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1685)


A sequential implementation of a Neural Network (NN) generally incurs a high computation cost, a large part of which comes from the learning phase. This cost, together with the weak computational performance of a computer compared with a human brain [3], makes it difficult to test complex, large connectionist models, such as those inspired by biological reality or those with a high-dimensional input space or a large number of units. Yet NNs exhibit a large amount of natural parallelism. Unfortunately, this parallelism is very different from that of modern general-purpose parallel computers: NNs have fine-grained parallelism and a natural message-passing paradigm, whereas modern parallel computers have a MIMD (Multiple Instruction, Multiple Data) architecture. Recent hardware developments have made shared memory an efficient parallel programming model even with a high number of processors. The main goals of this project are to speed up NN executions and to reduce the development time of parallel NN implementations, so that various kinds of NN can be implemented quickly and more complex simulations can be run on parallel computers than sequential computers allow. We offer connectionists a tool to develop their models with fine-grained parallelism and to execute them on DSM (Distributed Shared Memory) MIMD general-purpose parallel computers.


  1. [1] Y. Boniface, F. Alexandre, and S. Vialle. A bridge between two paradigms for parallelism: neural networks and general purpose MIMD computers. IJCNN, 1999.
  2. [2] J. Hertz, A. Krogh, and R. G. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1992.
  3. [3] Y. Lallement. Intégration neuro-symbolique et intelligence artificielle, applications et implantation parallèle. Ph.D. thesis, 1996.
  4. [4] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 1943.
  5. [5] M. Misra. Parallel environments for implementing neural networks. Neural Computing Surveys, 1:48–60, 1996.
  6. [6] H. Paugam-Moisy. Multiprocessor simulation of neural networks. In The Handbook of Brain Theory and Neural Networks. The MIT Press, 1995.
  7. [7] Silicon Graphics, Inc. Performance Tuning Optimization for Origin 2000 and Onyx2.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Yann Boniface (1)
  • Frédéric Alexandre (1)
  • Stéphane Vialle (2)
  1. LORIA, Vandœuvre-lès-Nancy cedex, France
  2. Supélec, Metz, France
