
Annales des Télécommunications, Volume 46, Issue 1–2, pp. 142–155

Intégration d'architectures à base de réseaux de neurones formels : un défi pour les technologies submicroniques

  • Michel Weinfeld


Integration of architectures based on formal neural networks: a challenge for submicrometer technologies

Abstract

Connectionist architectures based on the formal neuron concept can exhibit properties reminiscent of certain simple cognitive functions. They may therefore lead to new signal processing machines implementing, for instance, classification with generalization or associative memory. Implementing these functions in silicon can be advantageous, provided the problems of connectivity and of realizing the synaptic coefficients can be overcome, whether in analog or in digital technology. The advent of submicrometer technologies will allow progress towards networks larger than those that can be integrated today, but it is probably only by combining independent networks, cooperating within hierarchical architectures, that higher-level, strongly parallel functions will be implemented.
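
To make the abstract's central notions concrete (formal neurons, synaptic coefficients, associative memory), here is a minimal numerical sketch in Python. It is illustrative only, not drawn from the article: the network size, pattern count, and function names are assumptions. It stores a few ±1 patterns as synaptic coefficients via Hebb's rule and recalls one of them from a corrupted probe using threshold (formal) neurons, in the spirit of a Hopfield network:

    import numpy as np

    # Illustrative sketch only, not the article's design: a Hopfield-style
    # associative memory. Each neuron is a formal (threshold) unit; the
    # synaptic coefficients come from Hebb's outer-product rule.

    def hebbian_weights(patterns):
        """Synaptic coefficient matrix learned from +/-1 patterns."""
        n = patterns.shape[1]
        w = patterns.T @ patterns / n   # Hebb's rule: sum of outer products
        np.fill_diagonal(w, 0.0)        # no neuron connects to itself
        return w

    def recall(w, state, max_steps=20):
        """Iterate synchronous threshold updates until a fixed point."""
        for _ in range(max_steps):
            nxt = np.where(w @ state >= 0.0, 1.0, -1.0)  # formal neuron rule
            if np.array_equal(nxt, state):
                break
            state = nxt
        return state

    rng = np.random.default_rng(0)
    stored = rng.choice([-1.0, 1.0], size=(3, 64))  # three 64-neuron patterns
    w = hebbian_weights(stored)
    probe = stored[0].copy()
    probe[:8] *= -1                                 # corrupt 8 of the 64 bits
    print(np.array_equal(recall(w, probe), stored[0]))  # typically True

With only a handful of patterns stored across 64 neurons, the corrupted probe typically relaxes back to the stored pattern within a few update steps; it is this error-correcting relaxation that motivates the associative-memory circuits the article discusses.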

Key words

Computer architecture, Neural network, Submicron technology, Integration, Circuit realization



Copyright information

© Institut Telecom / Springer-Verlag France 1991

Authors and Affiliations

  • Michel Weinfeld
  1. Laboratoire d'informatique, Ecole polytechnique, Palaiseau Cedex
