
Intégration ďarchitectures à base de réseaux de neurones formels : un défi pour les technologies submicroniques

Integration of architectures based on formal neural networks : a challenge for submicrometer technologies

  • Published in: Annales des Télécommunications

Abstract

Connectionist architectures based on the formal-neuron concept can exhibit properties reminiscent of simple cognitive functions. They may therefore lead to new signal-processing machines implementing, for instance, classification with generalization, or associative memory. Integrating these functions in silicon can be advantageous, provided the problems of connectivity and of realizing the synaptic coefficients can be overcome, whether in analog or in digital technology. The advent of submicrometer technologies will allow progress towards networks larger than those that can be integrated today, but it is probably only by associating independent networks, cooperating within hierarchical architectures, that higher-level functions with strong parallelism will be implemented.
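The formal neuron mentioned in the abstract computes a thresholded weighted sum of its inputs; wired into a fully connected feedback network with Hebbian synaptic coefficients, it exhibits the associative-memory behaviour the abstract describes. A minimal sketch of this idea (not the authors' silicon implementation; the network size and stored pattern are illustrative):

```python
import numpy as np

def hebbian_weights(patterns):
    """Build a symmetric synaptic matrix from +/-1 patterns (Hebb rule)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / n

def recall(w, state, steps=10):
    """Formal-neuron update: each unit takes the sign of its weighted input sum."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

# Store one 8-unit pattern, then recover it from a corrupted probe.
stored = np.array([[1, -1, 1, -1, 1, 1, -1, -1]])
w = hebbian_weights(stored)
probe = stored[0].copy()
probe[0] = -probe[0]      # flip one bit to corrupt the probe
print(recall(w, probe))   # converges back to the stored pattern
```

The synaptic matrix here is the crux of the integration problem the article discusses: it is dense (n² coefficients for n neurons), which is exactly the connectivity and coefficient-storage bottleneck that analog and digital silicon realizations must overcome.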



Cite this article

Weinfeld, M. Intégration ďarchitectures à base de réseaux de neurones formels : un défi pour les technologies submicroniques. Ann. Télécommun. 46, 142–155 (1991). https://doi.org/10.1007/BF02995443
