On VLSI Implementation of Multiple Output Sequential Learning Networks

  • A. Bermak
  • H. Poulard
Conference paper


In this paper we propose a hardware implementation of a binary neural network architecture obtained from a new, efficient constructive algorithm. This algorithm is particularly interesting because it can handle Boolean as well as real-valued classification problems with an arbitrary number of outputs. The resulting networks consist of binary neurons organized in two hidden layers. The first layer is implemented on a systolic architecture, which offers a good trade-off between speed and area. Because of the particular computation performed by the second hidden layer, its implementation is straightforward and well suited to the systolic architecture; only a limited number of logic gates is needed. The output neurons are also easy to implement but require a small memory.
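The forward pass of such a network can be illustrated with a minimal sketch. This is not the authors' constructive algorithm or hardware design; it only shows what "binary neurons organized in two hidden layers" means functionally, with hypothetical integer weights standing in for trained ones:

```python
import numpy as np

def binary_layer(x, W, b):
    """One layer of binary (threshold) neurons: each unit outputs 0 or 1."""
    return (W @ x + b > 0).astype(np.int8)

# Hypothetical weights for a tiny network: 3 inputs, 4 then 2 hidden units.
rng = np.random.default_rng(0)
W1, b1 = rng.integers(-1, 2, (4, 3)), rng.integers(-1, 2, 4)
W2, b2 = rng.integers(-1, 2, (2, 4)), rng.integers(-1, 2, 2)

x = np.array([1, 0, 1], dtype=np.int8)
h1 = binary_layer(x, W1, b1)   # first hidden layer (systolic in hardware)
h2 = binary_layer(h1, W2, b2)  # second hidden layer (simple gates in hardware)
print(h2)
```

Because every signal is 0/1 and the weights are small integers, each unit reduces to comparisons and additions, which is what makes a gate-level or systolic hardware mapping attractive.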


Keywords: Hidden Layer, Clock Cycle, Logical Gate, Hardware Implementation, Output Neuron
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • A. Bermak (1)
  • H. Poulard (2)
  1. Laboratoire d’Analyse et d’Architecture des Systèmes — CNRS, Toulouse, France
  2. ACTIA, Toulouse, France
