
Neuromorphic Learning VLSI Systems: A Survey

  • Gert Cauwenberghs
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 447)

Abstract

Carver Mead introduced “neuromorphic engineering” [1] as an interdisciplinary approach to the design of biologically inspired neural information processing systems, whereby neurophysiological models of perception and information processing in biological systems are mapped onto analog VLSI systems that not only emulate their functions but also resemble their structure [18]. The motivation for emulating neural function and structure in analog VLSI is the realization that challenging tasks of perception, classification, association and control successfully performed by living organisms can only be accomplished in artificial systems by using an implementation medium that matches their structure and organization.

Keywords

Neural Network · IEEE Transaction · Neural Information Processing System · Systolic Array · VLSI Architecture


References

[1] C. A. Mead. Neuromorphic electronic systems. Proceedings of the IEEE, 78(10):1629–1639, 1990.
[2] C. A. Mead. Analog VLSI and Neural Systems. Addison-Wesley, Reading, MA, 1989.

Neurobiological Inspiration

[3] G. M. Shepherd. The Synaptic Organization of the Brain. Oxford Univ. Press, New York, 3rd edition, 1992.
[4] P. Churchland and T. Sejnowski. The Computational Brain. MIT Press, 1992.
[5] S. R. Kelso and T. H. Brown. Differential conditioning of associative synaptic enhancement in hippocampal brain slices. Science, 232:85–87, 1986.
[6] R. D. Hawkins, T. W. Abrams, T. J. Carew, and E. R. Kandel. A cellular mechanism of classical conditioning in Aplysia: Activity-dependent amplification of presynaptic facilitation. Science, 219:400–405, 1983.
[7] P. R. Montague, P. Dayan, C. Person, and T. J. Sejnowski. Bee foraging in uncertain environments using predictive Hebbian learning. Nature, 377(6551):725–728, 1995.

Edited Book Volumes, Journal Issues and Reviews

[8] C. A. Mead and M. Ismail, editors. Analog VLSI Implementation of Neural Systems. Kluwer, Norwell, MA, 1989.
[9] N. Morgan, editor. Artificial Neural Networks: Electronic Implementations. IEEE Computer Society Press, Los Alamitos, CA, 1990.
[10] E. Sánchez-Sinencio and C. Lau, editors. Artificial Neural Networks: Paradigms, Applications, and Hardware Implementations. IEEE Press, 1992.
[11] M. A. Jabri, R. J. Coggins, and B. G. Flower. Adaptive Analog VLSI Neural Systems. Chapman & Hall, London, UK, 1996.
[12] E. Sánchez-Sinencio and R. Newcomb, editors. Special issue on neural network hardware. IEEE Transactions on Neural Networks, 3(3), 1992.
[13] E. Sánchez-Sinencio and R. Newcomb, editors. Special issue on neural network hardware. IEEE Transactions on Neural Networks, 4(3), 1993.
[14] T. S. Lande, editor. Special issue on neuromorphic engineering. Int. J. Analog Integ. Circ. Signal Proc., March 1997.
[15] G. Cauwenberghs, M. Bayoumi, and E. Sánchez-Sinencio, editors. Special issue on learning in silicon. Int. J. Analog Integ. Circ. Signal Proc., to appear.
[16] G. Cauwenberghs et al. Learning on silicon. Special session, Proc. Int. Symp. Circuits and Systems, Hong Kong, June 1997.
[17] H. P. Graf and L. D. Jackel. Analog electronic neural network circuits. IEEE Circuits and Devices Mag., 5:44–49, 1989.
[18] G. Cauwenberghs. Adaptation, learning and storage in analog VLSI. In Proceedings of the Ninth Annual IEEE International ASIC Conference, Rochester, NY, September 1996.

Learning Models

Supervised Learning

[19] B. Widrow and M. E. Hoff. Adaptive switching circuits. IRE WESCON Convention Record, 4:96–104, 1960.
[20] P. Werbos. Beyond regression: New tools for prediction and analysis in the behavioral sciences. In The Roots of Backpropagation. Wiley, New York, 1993.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1: Foundations. MIT Press/Bradford Books, Cambridge, MA, 1986.
[22] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1. MIT Press, Cambridge, MA, 1986.
[23] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270–280, 1989.
[24] B. A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation, 1(2):263–269, 1989.

Unsupervised Learning

[25] D. O. Hebb. The Organization of Behavior. Wiley, New York, NY, 1949.
[26] J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554–2558, 1982.
[27] T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, Berlin, 1984.
[28] A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer, Norwell, MA, 1992.
[29] R. Linsker. Self-organization in a perceptual network. IEEE Computer, 21:105–117, 1988.
[30] G. A. Carpenter. Neural network models for pattern recognition and associative memory. Neural Networks, 2(4):243–257, 1989.
[31] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.

Reinforcement Learning and Related Models

[32] K. S. Narendra and M. A. L. Thathachar. Learning automata — a survey. IEEE T. Syst. Man and Cybern., SMC-4:323–334, 1974.
[33] S. Grossberg. A neural model of attention, reinforcement, and discrimination learning. International Review of Neurobiology, 18:263–327, 1975.
[34] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13(5):834–846, 1983.
[35] S. Grossberg and D. S. Levine. Neural dynamics of attentionally modulated Pavlovian conditioning: Blocking, inter-stimulus interval, and secondary reinforcement. Applied Optics, 26:5015–5030, 1987.
[36] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[37] P. J. Werbos. A menu of designs for reinforcement learning over time. In W. T. Miller, R. S. Sutton, and P. J. Werbos, editors, Neural Networks for Control, pages 67–95. MIT Press, Cambridge, MA, 1990.
[38] W. T. Miller, R. Sutton, and P. Werbos, editors. Neural Networks for Control. MIT Press, Cambridge, MA, 1990.
[39] C. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
[40] W.-M. Shen. Autonomous Learning from the Environment. Freeman, Computer Science Press, New York, NY, 1994.

Hybrid Learning Approaches

[41] G. A. Carpenter et al. Fuzzy ARTMAP — a neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 3(5):698–713, 1992.
[42] D. White and D. Sofge, editors. Handbook of Intelligent Control: Neural, Adaptive and Fuzzy Approaches. Van Nostrand, New York, 1992.
[43] P. J. Werbos. Neurocontrol and elastic fuzzy logic: Capabilities, concepts, and applications. IEEE Transactions on Industrial Electronics, 40(2):170–180, 1993.
[44] M. Jordan and R. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181–214, 1994.
[45] R. M. Sanner and J. J. E. Slotine. Gaussian networks for direct adaptive control. IEEE Transactions on Neural Networks, 3(6):837–864, 1992.

Technology

Subthreshold MOS Operation

[46] A. L. Hodgkin and A. F. Huxley. Current carried by sodium and potassium ions through the membrane of the giant axon of Loligo. Journal of Physiology, 1952.
[47] E. Vittoz and J. Fellrath. CMOS analog integrated circuits based on weak inversion operation. IEEE Journal of Solid-State Circuits, 12(3):224–231, 1977.
[48] A. G. Andreou, K. A. Boahen, P. O. Pouliquen, A. Pavasović, R. E. Jenkins, and K. Strohbehn. Current-mode subthreshold MOS circuits for analog VLSI neural systems. IEEE Transactions on Neural Networks, 2(2):205–213, 1991.

Analog Storage

[49] Y. Horio and S. Nakamura. Analog memories for VLSI neurocomputing. In E. Sánchez-Sinencio and C. Lau, editors, Artificial Neural Networks: Paradigms, Applications, and Hardware Implementations, pages 344–363. IEEE Press, 1992.
[50] E. Vittoz, H. Oguey, M. A. Maher, O. Nys, E. Dijkstra, and M. Chevroulet. Analog storage of adjustable synaptic weights. In VLSI Design of Neural Networks, pages 47–63. Kluwer Academic, Norwell, MA, 1991.
[51] M. A. Holler. VLSI implementations of learning and memory systems. In Advances in Neural Information Processing Systems, volume 3, pages 993–1000. Morgan Kaufman, San Mateo, CA, 1991.

Non-Volatile Analog Storage

[52] A. Kramer, C. K. Sin, R. Chu, and P. K. Ko. Compact EEPROM-based weight functions. In Advances in Neural Information Processing Systems, volume 3, pages 1001–1007. Morgan Kaufman, San Mateo, CA, 1991.
[53] D. A. Kerns, J. E. Tanner, M. A. Sivilotti, and J. Luo. CMOS UV-writable non-volatile analog storage. In Proc. Advanced Research in VLSI Int. Conf., Santa Cruz, CA, 1991.
[54] A. Soennecken, U. Hilleringmann, and K. Goser. Floating gate structures as nonvolatile analog memory cells in 1.0µm-LOCOS-CMOS technology with PZT dielectrics. Microel. Eng., 15:633–636, 1991.
[55] B. W. Lee, B. J. Sheu, and H. Yang. Analog floating-gate synapses for general-purpose VLSI neural computation. IEEE Trans. on Circuits and Systems, 38:654–658, 1991.
[56] D. A. Durfee and F. S. Shoucair. Low programming voltage floating gate analog memory cells in standard VLSI CMOS technology. Electronics Letters, 28(10):925–927, May 1992.
[57] R. G. Benson. Analog VLSI Supervised Learning System. PhD thesis, California Institute of Technology, 1993.
[58] O. Fujita and Y. Amemiya. A floating-gate analog memory device for neural networks. IEEE T. Electron Devices, 40(11):2029–2055, November 1993.
[59] A. Thomsen and M. A. Brooke. Low control voltage programming of floating-gate MOSFETs and applications. IEEE T. Circ. Syst. I, 41(6):443–452, June 1994.
[60] P. Hasler, C. Diorio, B. A. Minch, and C. Mead. Single transistor learning synapses. In Advances in Neural Information Processing Systems 7, pages 817–824. MIT Press, Cambridge, MA, 1995.
[61] H. Won, Y. Hayakawa, K. Nakajima, and Y. Sawada. Switched diffusion analog memory for neural networks with Hebbian learning function and its linear operation. IEICE T. Fund. El. Comm. Comp. Sci., E79A(6):746–751, June 1996.

Volatile Analog Storage and Refresh

[62] D. B. Schwartz, R. E. Howard, and W. E. Hubbard. A programmable analog neural network chip. IEEE J. Solid-State Circuits, 24:313–319, 1989.
[63] B. Hochet, V. Peiris, S. Abdo, and M. J. Declercq. Implementation of a learning Kohonen neuron based on a new multilevel storage technique. IEEE J. Solid-State Circuits, 26(3):262–267, 1991.
[64] R. Castello, D. D. Caviglia, M. Franciotta, and F. Montecchi. Self-refreshing analog memory cell for variable synaptic weights. Electronics Letters, 27(20):1871–1873, 1991.
[65] G. Cauwenberghs and A. Yariv. Fault-tolerant dynamic multi-level storage in analog VLSI. IEEE Transactions on Circuits and Systems II, 41(12):827–829, 1994.
[66] G. Cauwenberghs. A micropower CMOS algorithmic A/D/A converter. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 42(11):913–919, 1995.
[67] J. G. Elias, D. P. M. Northmore, and W. Westerman. An analog memory device for spiking silicon neurons. Neural Computation, 9:419–440, 1997.

Emerging VLSI Technologies

[68] B. Gupta, R. Goodman, F. Jiang, Y. C. Tai, S. Tung, and C. M. Ho. Analog VLSI system for active drag reduction. IEEE Micro Mag., 16(5):53–59, October 1996.
[69] T. Distefano and J. Fjelstad. Chip-scale packaging meets future design needs. Solid State Tech., 39(4):82, April 1996.
[70] B. Elkareh, B. Chen, and T. Stanley. Silicon-on-insulator — an emerging high-leverage technology. IEEE T. Comp. Pack. Man. Techn. Part A, 18(1):187–194, March 1995.
[71] C. M. Hu. SOI (silicon-on-insulator) for high-speed ultra large-scale integration. Jpn. J. Appl. Phys. Part 1, 33(1B):365–369, January 1994.

Architecture

Outer-Product Supervised Learning

[72] J. Alspector, B. Gupta, and R. B. Allen. Performance of a stochastic learning microchip. In Advances in Neural Information Processing Systems, volume 1, pages 748–760. Morgan Kaufman, San Mateo, CA, 1989.
[73] F. M. A. Salam and Y. W. Wang. A real-time experiment using a 50-neuron CMOS analog silicon chip with on-chip digital learning. IEEE T. Neural Networks, 2(4):461–464, 1991.
[74] C. R. Schneider and H. C. Card. CMOS mean field learning. Electronics Letters, 27(19):1702–1704, 1991.
[75] G. Cauwenberghs, C. F. Neugebauer, and A. Yariv. Analysis and verification of an analog VLSI outer-product incremental learning system. IEEE Transactions on Neural Networks, 3(3):488–497, 1992.
[76] S. P. Eberhardt, R. Tawel, T. X. Brown, T. Daud, and A. P. Thakoor. Analog VLSI neural networks — implementation issues and examples in optimization and supervised learning. IEEE T. Ind. El., 39(6):552–564, December 1992.
[77] Y. Arima, M. Murasaki, T. Yamada, A. Maeda, and H. Shinohara. A refreshable analog VLSI neural network chip with 400 neurons and 40k synapses. IEEE J. of Solid-State Circuits, 27:1854–1861, 1992.
[78] R. G. Benson and D. A. Kerns. UV-activated conductances allow for multiple time scale learning. IEEE Transactions on Neural Networks, 4(3):434–440, 1993.
[79] K. Soelberg, R. L. Sigvartsen, T. S. Lande, and Y. Berg. An analog continuous-time neural network. Int. J. Analog Integ. Circ. Signal Proc., 5(3):235–246, May 1994.
[80] T. Morie and Y. Amemiya. An all-analog expandable neural network LSI with on-chip backpropagation learning. IEEE J. Solid-State Circuits, 29(9):1086–1093, September 1994.
[81] F. J. Kub and E. W. Justh. Analog CMOS implementation of high-frequency least-mean square error learning circuit. IEEE J. Solid-State Circuits, 30(12):1391–1398, December 1995.
[82] Y. Berg, R. L. Sigvartsen, T. S. Lande, and Å. Abusland. An analog feedforward neural network with on-chip learning. Int. J. Analog Integ. Circ. Signal Proc., 9(1):65–75, January 1996.
[83] J. W. Cho, Y. K. Choi, and S. Y. Lee. Modular neuro-chip with on-chip learning and adjustable learning parameters. Neural Proc. Letters, 4(1), 1996.
[84] M. Valle, D. D. Caviglia, and G. M. Bisio. An experimental analog VLSI neural network with on-chip backpropagation learning. Int. J. Analog Integ. Circ. Signal Proc., 9(3):231–245, April 1996.

Outer-Product Unsupervised Learning

[85] J. P. Sage and R. S. Withers. Analog nonvolatile memory for neural network implementations. In Artificial Neural Networks: Electronic Implementations, pages 22–32. IEEE Computer Society Press, Los Alamitos, CA, 1990.
[86] K. A. Boahen, P. O. Pouliquen, A. G. Andreou, and R. E. Jenkins. A heteroassociative memory using current-mode MOS analog VLSI circuits. IEEE T. Circ. Syst., 36(5):747–755, 1989.
[87] J. R. Mann and S. Gilbert. An analog self-organizing neural network chip. In Advances in Neural Information Processing Systems, volume 1, pages 739–747. Morgan Kaufman, San Mateo, CA, 1989.
[88] A. Hartstein and R. H. Koch. A self-learning neural network. In Advances in Neural Information Processing Systems, volume 1, pages 769–776. Morgan Kaufman, San Mateo, CA, 1989.
[89] M. R. Walker, S. Haghighi, A. Afghan, and L. A. Akers. Training a limited-interconnect, synthetic neural IC. In Advances in Neural Information Processing Systems, volume 1, pages 777–784. Morgan Kaufman, San Mateo, CA, 1989.
[90] A. Murray. Pulse arithmetic in VLSI neural networks. IEEE Micro Mag., pages 64–74, December 1989.
[91] Y. Arima, K. Mashiko, K. Okada, T. Yamada, A. Maeda, et al. A 336-neuron, 28k-synapse, self-learning neural network chip with branch-neuron-unit architecture. IEEE J. Solid-State Circuits, 26(11):1637–1644, 1991.
[92] B. J. Maundy and E. I. Elmasry. A self-organizing switched-capacitor neural network. IEEE T. Circ. Syst., 38(12):1556–1563, December 1991.
[93] D. A. Watola and J. L. Meador. Competitive learning in asynchronous-pulse-density integrated circuits. Int. J. Analog Integ. Circ. Signal Proc., 2(4):323–344, November 1992.
[94] J. Donald and L. Akers. An adaptive neural processor node. IEEE Transactions on Neural Networks, 4(3):413–426, 1993.
[95] Y. He and U. Cilingiroglu. A charge-based on-chip adaptation Kohonen neural network. IEEE Transactions on Neural Networks, 4(3):462–469, 1993.
[96] D. Macq, M. Verleysen, P. Jespers, and J. D. Legat. Analog implementation of a Kohonen map with on-chip learning. IEEE T. Neural Networks, 4(3):456–461, May 1993.
[97] B. Linares-Barranco, E. Sánchez-Sinencio, A. Rodriguez-Vazquez, and J. L. Huertas. A CMOS analog adaptive BAM with on-chip learning and weight refreshing. IEEE Trans. on Neural Networks, 4:445–457, 1993.
[98] P. Heim and E. A. Vittoz. Precise analog synapse for Kohonen feature maps. IEEE J. Solid-State Circuits, 29(8):982–985, August 1994.
[99] G. Cauwenberghs and V. Pedroni. A charge-based CMOS parallel analog vector quantizer. In Advances in Neural Information Processing Systems, volume 7, pages 779–786. MIT Press, Cambridge, MA, 1995.
[100] T. Shibata, H. Kosaka, H. Ishii, and T. Ohmi. A neuron-MOS neural network using self-learning-compatible synapse circuits. IEEE J. Solid-State Circuits, 30(8):913–922, August 1995.
[101] R. Y. Liu, C. Y. Wu, and I. C. Jou. A CMOS current-mode design of modified learning-vector-quantization neural networks. Int. J. Analog Integ. Circ. Signal Proc., 8(2):157–181, September 1995.
[102] C. Y. Wu and J. F. Lan. MOS current-mode neural associative memory design with on-chip learning. IEEE T. Neural Networks, 7(1):157–181, January 1996.
[103] K. Hosono, K. Tsuji, K. Shibao, E. Io, H. Yonezu, et al. Fundamental device and circuits for synaptic connections in self-organizing neural networks. IEICE T. Electronics, E79C(4):560–567, April 1996.
[104] T. Serrano-Gotarredona and B. Linares-Barranco. A real-time clustering microchip neural engine. IEEE T. VLSI Systems, 4(2):195–209, June 1996.

Adaptive Cellular Neural Networks

[105] P. Tzionas, P. Tsalides, and A. Thanailakis. Design and VLSI implementation of a pattern classifier using pseudo-2D cellular automata. IEE Proc. G, 139(6):661–668, December 1992.
[106] T. Roska and L. O. Chua. The CNN universal machine — an analogic array computer. IEEE T. Circ. Syst. II, 40(3):163–173, March 1993.
[107] Y. Miyanaga and K. Tochinai. Parallel VLSI architecture for multilayer self-organizing cellular network. IEICE T. Electronics, E76C(7):1174–1181, July 1993.
[108] S. Espejo, R. Carmona, R. Dominguez-Castro, and A. Rodriguez-Vazquez. A CNN universal chip in CMOS technology. Int. J. Circuit Theory Appl., 24(1):93–109, 1996.

Adaptive Fuzzy Classifiers

[109] J. W. Fattaruso, S. S. Mahant-Shetti, and J. B. Barton. A fuzzy logic inference processor. IEEE Journal of Solid-State Circuits, 29(4):397–401, 1994.
[110] Z. Tang, Y. Kobayashi, O. Ishizuka, and K. Tanno. A learning fuzzy network and its applications to inverted pendulum system. IEICE T. Fund. El. Comm. Comp. Sci., E78A(6):701–707, June 1995.
[111] F. Vidal-Verdu and A. Rodriguez-Vazquez. Using building blocks to design analog neuro-fuzzy controllers. IEEE Micro, 15(4):49–57, August 1995.
[112] W. Pedrycz, C. H. Poskar, and P. J. Czezowski. A reconfigurable fuzzy neural network with in-situ learning. IEEE Micro, 15(4):19–30, August 1995.
[113] T. Yamakawa. Silicon implementation of a fuzzy neuron. IEEE T. Fuzzy Systems, 4(4):488–501, November 1996.

Reinforcement Learning

[114] C. Schneider and H. Card. Analog CMOS synaptic learning circuits adapted from invertebrate biology. IEEE T. Circ. Syst., 38(12):1430–1438, December 1991.
[115] T. G. Clarkson, C. K. Ng, and Y. Guan. The pRAM: An adaptive VLSI chip. IEEE Trans. on Neural Networks, 4(3):408–412, 1993.
[116] A. F. Murray, S. Churcher, A. Hamilton, A. J. Holmes, G. B. Jackson, et al. Pulse stream VLSI neural networks. IEEE Micro, 14(3):29–39, June 1994.
[117] G. Cauwenberghs. Reinforcement learning in a nonlinear noise shaping oversampled A/D converter. In Proc. Int. Symp. Circuits and Systems, Hong Kong, June 1997.

Nonidealities and Error Models

[118] M. J. S. Smith. An analog integrated neural network capable of learning the Feigenbaum logistic map. IEEE Transactions on Circuits and Systems, 37(6):841–844, 1990.
[119] R. C. Frye, E. A. Rietman, and C. C. Wong. Back-propagation learning and nonidealities in analog neural network hardware. IEEE Transactions on Neural Networks, 2(1):110–117, 1991.
[120] L. M. Reyneri and E. Filippi. An analysis on the performance of silicon implementations of backpropagation algorithms for artificial neural networks. IEEE T. Computers, 40(12):1380–1389, 1991.
[121] A. Murray and P. J. Edwards. Synaptic noise during MLP training enhances fault-tolerance, generalization and learning trajectory. In Advances in Neural Information Processing Systems, volume 5, pages 491–498. Morgan Kaufman, San Mateo, CA, 1993.
[122] P. Thiran and M. Hasler. Self-organization of a one-dimensional Kohonen network with quantized weights and inputs. Neural Networks, 7(9):1427–1439, 1994.
[123] G. Cairns and L. Tarassenko. Precision issues for learning with analog VLSI multilayer perceptrons. IEEE Micro, 15(3):54–56, June 1995.
[124] B. K. Dolenko and H. C. Card. Tolerance to analog hardware of on-chip learning in backpropagation networks. IEEE T. Neural Networks, 6(5):1045–1052, September 1995.

Model-Free Learning

[125] A. Dembo and T. Kailath. Model-free distributed learning. IEEE Transactions on Neural Networks, 1(1):58–70, 1990.
[126] M. Jabri and B. Flower. Weight perturbation: An optimal architecture and learning technique for analog VLSI feedforward and recurrent multi-layered networks. IEEE Transactions on Neural Networks, 3(1):154–157, 1992.
[127] G. Cauwenberghs. A fast stochastic error-descent algorithm for supervised learning and optimization. In Advances in Neural Information Processing Systems, volume 5, pages 244–251. Morgan Kaufman, San Mateo, CA, 1993.
[128] J. Alspector, R. Meir, B. Yuhas, and A. Jayakumar. A parallel gradient descent method for learning in analog VLSI neural networks. In Advances in Neural Information Processing Systems, volume 5, pages 836–844. Morgan Kaufman, San Mateo, CA, 1993.
[129] B. Flower and M. Jabri. Summed weight neuron perturbation: An O(n) improvement over weight perturbation. In Advances in Neural Information Processing Systems, volume 5, pages 212–219. Morgan Kaufman, San Mateo, CA, 1993.
[130] D. Kirk, D. Kerns, K. Fleischer, and A. Barr. Analog VLSI implementation of gradient descent. In Advances in Neural Information Processing Systems, volume 5, pages 789–796. Morgan Kaufman, San Mateo, CA, 1993.
[131] G. Cauwenberghs. A learning analog neural network chip with continuous-recurrent dynamics. In Advances in Neural Information Processing Systems, volume 6, pages 858–865. Morgan Kaufman, San Mateo, CA, 1994.
[132] P. Hollis and J. Paulos. A neural network learning algorithm tailored for VLSI implementation. IEEE T. Neural Networks, 5(5):784–791, 1994.
[133] G. Cauwenberghs. An analog VLSI recurrent neural network learning a continuous-time trajectory. IEEE Transactions on Neural Networks, 7(2), March 1996.
[134] A. J. Montalvo, R. S. Gyurcsik, and J. J. Paulos. Toward a general-purpose analog VLSI neural network with on-chip learning. IEEE T. Neural Networks, 8(2):413–423, March 1997.

Chip-in-the-Loop Training

[135] M. Holler, S. Tam, H. Castro, and R. Benson. An electrically trainable artificial neural network (ETANN) with 10240 floating gate synapses. In Proc. Int. Joint Conf. Neural Networks, pages 191–196, Washington, DC, 1989.
[136] S. Satyanarayana, Y. Tsividis, and H. P. Graf. A reconfigurable analog VLSI neural network chip. In Advances in Neural Information Processing Systems, volume 2, pages 758–768. Morgan Kaufman, San Mateo, CA, 1990.
[137] E. Sackinger, B. E. Boser, and L. D. Jackel. A neurocomputer board based on the ANNA neural network chip. In Advances in Neural Information Processing Systems, volume 4, pages 773–780. Morgan Kaufman, San Mateo, CA, 1992.
[138] J. A. Lansner. An experimental hardware neural network using a cascadable, analog chipset. Int. J. Elect., 78(4):679–690, April 1995.
[139] J. O. Klein, H. Pujol, and P. Garda. Chip-in-the-loop learning algorithm for Boltzmann machine. Electronics Letters, 31(12):986–988, June 1995.

Digital Implementations

[140] A. Johannet, L. Personnaz, G. Dreyfus, J. D. Gascuel, and M. Weinfeld. Specification and implementation of a digital Hopfield-type associative memory with on-chip training. IEEE T. Neural Networks, 3(4):529–539, July 1992.
[141] T. Shima, T. Kimura, Y. Kamatani, T. Itakura, Y. Fujita, and T. Iida. Neuro chips with on-chip back-propagation and/or Hebbian learning. IEEE J. of Solid-State Circuits, 27(12):1868–1875, 1992.
[142] M. Yasunaga, N. Masuda, M. Yagyu, M. Asai, K. Shibata, et al. A self-learning digital neural network using wafer-scale LSI. IEEE J. Solid-State Circuits, 28(2):106–114, February 1993.
[143] C. Lehmann, M. Viredaz, and F. Blayo. A generic systolic array building-block for neural networks with on-chip learning. IEEE T. Neural Networks, 4(3):400–407, May 1993.
[144] M. Fujita, Y. Kobayashi, K. Shiozawa, T. Takahashi, F. Mizuno, et al. Development and fabrication of digital neural network WSIs. IEICE T. Electronics, E76C(7):1182–1190, July 1993.
[145] P. Murtagh, A. C. Tsoi, and N. Bergmann. Bit-serial systolic array implementation of a multilayer perceptron. IEE Proc. E, 140(5):277–288, September 1993.
[146] T. Morishita and I. Teramoto. Neural network multiprocessors applied with dynamically reconfigurable pipeline architecture. IEICE T. Electronics, E77C(12):1937–1943, December 1994.
[147] Z. Tang and O. Ishizuka. Design and implementations of a learning T-model neural network. IEICE T. Fund. El. Comm. Comp. Sci., E78A(2):259–263, February 1995.
[148] M. P. Perrone and L. N. Cooper. The NI1000: High speed parallel VLSI for implementing multilayer perceptrons. In Advances in Neural Information Processing Systems, volume 7, pages 747–754. Morgan Kaufman, San Mateo, CA, 1995.
[149] J. Wawrzynek et al. SPERT-II: A vector microprocessor system and its application to large problems in backpropagation training. In Advances in Neural Information Processing Systems, volume 8, pages 619–625. Morgan Kaufman, San Mateo, CA, 1996.
[150] S. Rehfuss and D. Hammerstrom. Model matching and SFMD computation. In Advances in Neural Information Processing Systems, volume 8, pages 713–719. Morgan Kaufman, San Mateo, CA, 1996.

Optical and Optoelectronic Implementations

  1. [151]
    J. Ohta, Y. Nitta, and K. Kyuma. Dynamic optical neurochip using variable-sensitivity photodiodes. Optics Lett, 16(10):744–746, 1991.Google Scholar
  2. [152]
    D.Z. Anderson, C. Benkert, V. Hebler, J.-S. Jang, D, Montgomery, and M. Saffman. Optical implementation of a self-organizing feature extractor. In Advances in Neural Information Processing Systems, volume 4, pages 821–828. Morgan Kaufman, San Mateo, CA, 1992.Google Scholar
  3. [153]
    Y. Nitta, J. Ohta, S. Tai, and K. Kyuma. Optical learning neurochip with internal analog memory. Appl Optics, 32(8):1264–1274, March 1993.Google Scholar
  4. [154]
    K. Wagner and T. M. Slagle. Optical competitive learning with VLSI liquid-crystal winner-take-all modulators. Appl Optics, 32(8):1408–1435, March 1993.Google Scholar
  5. [155]
    M. Oita, Y. Nitta, S. Tai, and K. Kyuma. Optical associative memory using optoelectronic neurochips for image-processing. IEICE T. Electronics, E77C(1):56–62, January 1994.Google Scholar
  6. [156]
    E. Lange, Y. Nitta, and K. Kyuma. Optical neural chips. IEEE Micro, 14(6):29–41, December 1994.CrossRefGoogle Scholar
  7. [157]
    A. J. Waddie and J. F. Snowdon. A smart-pixel optical neural-network design using customized error propagation. Inst. Phys. Conf. Series, 139:511–514, 1995.Google Scholar
  8. [158]
    K. Tsuji, H. Yonezu, K. Hosono, K. Shibao, N. Ohshima, et al. An optical adaptive device and its application to a competitive learning circuit. Jpn. J. Appl. Phys. 1, 34(2B):1056–1060, February 1995.Google Scholar
  9. [159]
    W. E. Foor and M. A. Neifeld. Adaptive, optical, radial basis function neural-network for handwritten digit recognition. Appl Optics, 34(32):7545–7555, November 1995.CrossRefGoogle Scholar

Architectural Novelties

  1. [160]
    J. Alspector, J. W. Gannett, S. Haber, M. B. Parker, and R. Chu. A VLSI-efficient technique for generating multiple uncorrelated noise sources and its application to stochastic neural networks. IEEE T. Circ. Syst., 38(1):109–123, 1991.CrossRefGoogle Scholar
  2. [161]
    P. A. Shoemaker, M. J. Carlin, and R. L. Shimabukuro. Back propagation learning with trinary quantization of weight updates. Neural Networks, 4(2):231–241, 1991.CrossRefGoogle Scholar
  3. [162]
    Y. H. Pao and W. Hafez. Analog computational models of concept-formation. Int. J. Analog Integ. Circ. Signal Proc., 4(2):265–272, November 1992.Google Scholar
  4. [163]
    T. Morie and Y. Amemiya. Deterministic Boltzmann machine learning improved for analog LSI implementation. IEICE T. Electronics, E76C(7):1167–1173, July 1993.Google Scholar
  5. [164]
    S. P. Deweerth and D. M. Wilson. Fixed-ratio adaptive thresholding using CMOS circuits. Electronics Letters, 31(10):788–789, May 1995.CrossRefGoogle Scholar
  6. [165]
    M. van Daalen, J. Zhao, and J. Shawe-Taylor. Real-time output derivatives for on-chip learning using digital stochastic bit-stream neurons. Electronics Letters, 30(21):1775–1777, October 1994.CrossRefGoogle Scholar
  7. [166]
    V. Petridis and K. Paraschidis. On the properties of the feedforward method — a simple training law for on-chip learning. IEEE T. Neural Networks, 6(6):1536–1541, November 1995.CrossRefGoogle Scholar
  8. [167]
    H. Singh, H. S. Bawa, and L. Anneberg. Boolean neural-network realization of an adder subtractor cell. Microel Rel, 36(3):367–369, March 1996.CrossRefGoogle Scholar
  9. [168]
    T. Lehmann, E. Bruun, and C. Dietrich. Mixed analog-digital matrix-vector multiplier for neural-network synapses. Int. J. Analog Integ. Circ. Signal Proc., 9(1):55–63, January 1996.CrossRefGoogle Scholar
  10. [169]
    T. Serrano-Gotarredona and B. Linares-Barranco. A modified ART-1 algorithm more suitable for VLSI implementations. Neural Networks, 9(6):1025–1043, August 1996.CrossRefGoogle Scholar
  11. [170]
    M. L. Marchesi, F. Piazza, and A. Uncini. Backpropagation without multiplier for multilayer neural networks. IEE Proc. Circuits Devices Syst., 143(4):229–232, August 1996.CrossRefGoogle Scholar

Systems Applications of Learning

General Purpose Neural Emulators

  1. [171]
    P. Mueller, J. Van der Spiegel, D. Blackman, T. Chiu, T. Clare, C. Donham, T. P. Hsieh, and M. Loinaz. Design and fabrication of VLSI components for a general purpose analog neural computer. In Analog VLSI Implementation of Neural Systems, pages 135–169. Kluwer, Norwell, MA, 1989.Google Scholar

Blind Signal Processing

  1. [172]
    E. Vittoz and X. Arreguit. CMOS integration of Herault-Jutten cells for separation of sources. In Analog VLSI Implementation of Neural Systems, pages 57–83. Kluwer, Norwell, MA, 1989.Google Scholar
  2. [173]
    M. H. Cohen and A. G. Andreou. Current-mode subthreshold MOS implementation of the Herault-Jutten autoadaptive network. IEEE J. of Solid State Circuits, 27:714–727, 1992.CrossRefGoogle Scholar
  3. [174]
    R. P. Mackey, J. J. Rodriguez, J. D. Carothers, and S. B. K. Vrudhula. Asynchronous VLSI architecture for adaptive echo cancellation. Electronics Letters, 32(8):710–711, April 1996.CrossRefGoogle Scholar

Biomedical Adaptive Signal Processing

  1. [175]
    R. Coggins, M. Jabri, M. Flower, and S. Pickard. ICEG morphology classification using an analogue VLSI neural network. In Advances in Neural Information Processing Systems, volume 7, pages 731–738. Morgan Kaufman, San Mateo, CA, 1995.Google Scholar

Speech Research

  1. [176]
    J. Wawrzynek et al. SPERT-II: A vector microprocessor system and its application to large problems in backpropagation training. In Advances in Neural Information Processing Systems, volume 8, pages 619–625. Morgan Kaufman, San Mateo, CA, 1996.Google Scholar
  2. [177]
    John Lazzaro. Temporal adaptation in a silicon auditory nerve. In John E. Moody, Steve J. Hanson, and Richard P. Lippmann, editors, Advances in Neural Information Processing Systems, volume 4, pages 813–820. Morgan Kaufmann Publishers, Inc., 1992.Google Scholar

Olfactory Sensory Processing

  1. [178]
    P. A. Shoemaker, C. G. Hutchens, and S. B. Patil. A hierarchical-clustering network based on a model of olfactory processing. Int. J. Analog Integ. Circ. Signal Proc., 2(4):297–311, November 1992.Google Scholar

Focal-Plane Sensors and Adaptive Vision Systems

  1. [179]
    J. Tanner and C. A. Mead. An integrated analog optical motion sensor. In S. Y. Kung, editor, VLSI Signal Processing II, pages 59–76. IEEE Press, New York, 1986.Google Scholar
  2. [180]
    C. A. Mead. Adaptive retina. In C. Mead and M. Ismail, editors, Analog VLSI Implementation of Neural Systems, pages 239–246. Kluwer Academic Pub., Norwell, MA, 1989.Google Scholar
  3. [181]
    M. Mahowald. An Analog VLSI Stereoscopic Vision System. Kluwer Academic, Boston, MA, 1994.Google Scholar
  4. [182]
    T. Delbrück. Silicon retina with correlation-based velocity-tuned pixels. IEEE Transactions on Neural Networks, 4(3):529–541, May 1993.CrossRefGoogle Scholar
  5. [183]
    J. C. Lee, B. J. Sheu, and W. C. Fang. VLSI neuroprocessors for video motion detection. IEEE Transactions on Neural Networks, 4(2):178–191, 1993.CrossRefGoogle Scholar
  6. [184]
    R. Etienne-Cummings, J. Van der Spiegel, and P. Mueller. VLSI model of primate visual smooth pursuit. In Advances in Neural Information Processing Systems, volume 8, pages 707–712. Morgan Kaufman, San Mateo, CA, 1996.Google Scholar
  7. [185]
    R. Sarpeshkar, J. Kramer, G. Indiveri, and C. Koch. Analog VLSI architectures for motion processing — from fundamental limits to system applications. Proceedings of the IEEE, 84(7):969–987, July 1996.CrossRefGoogle Scholar
  8. [186]
    K. A. Boahen. A retinomorphic vision system. IEEE Micro, 16(5):30–39, October 1996.CrossRefGoogle Scholar
  9. [187]
    S. C. Liu and C. Mead. Continuous-time adaptive delay system. IEEE T. Circ. Syst. II, 43(11):744–751, November 1996.CrossRefGoogle Scholar

Optical Character Recognition

  1. [188]
    B. Y. Chen, M. W. Mao, and J. B. Kuo. Coded block neural network VLSI system using an adaptive learning-rate technique to train Chinese character patterns. Electronics Letters, 28(21):1941–1942, October 1992.CrossRefGoogle Scholar
  2. [189]
    C. S. Miou, T. M. Shieh, G. H. Chang, B. S. Chien, M. W. Chang, et al. Optical Chinese character-recognition system using a new pipelined matching and sorting VLSI. Opt. Eng., 32(7):1623–1632, July 1993.CrossRefGoogle Scholar
  3. [190]
    S. Maruno, T. Kohda, H. Nakahira, S. Sakiyama, and M. Maruyama. Quantizer neuron model and neuroprocessor named Quantizer Neuron Chip. IEEE J. Sel. Areas Comm., 12(9):1503–1509, December 1994.CrossRefGoogle Scholar

Image Compression

  1. [191]
    W. C. Fang, B. J. Sheu, O. T. C. Chen, and J. Choi. A VLSI neural processor for image data-compression using self-organization networks. IEEE Transactions on Neural Networks, 3(3):506–518, 1992.CrossRefGoogle Scholar

Communications and Decoding

  1. [192]
    J. G. Choi, S. H. Bang, and B. J. Sheu. A programmable analog VLSI neural-network processor for communication receivers. IEEE T. Neural Networks, 4(3):484–495, May 1993.CrossRefGoogle Scholar
  2. [193]
    M. I. Chan, W. T. Lee, M. C. Lin, and L. G. Chen. IC design of an adaptive Viterbi decoder. IEEE T. Cons. El., 42(1):52–62, February 1996.CrossRefGoogle Scholar
  3. [194]
    R. Mittal, K. C. Bracken, L. R. Carley, and D. J. Allstot. A low-power backward equalizer for DFE read-channel applications. IEEE J. Solid-State Circuits, 32(2):270–273, February 1997.CrossRefGoogle Scholar
  4. [195]
    B. C. Rothenberg, J. E. C. Brown, P. J. Hurst, and S. H. Lewis. A mixed-signal RAM decision-feedback equalizer for disk drives. IEEE J. Solid-State Circuits, 32(5):713–721, 1997.CrossRefGoogle Scholar

Clock Skew Timing Control

  1. [196]
    W. D. Grover, J. Brown, T. Friesen, and S. Marsh. All-digital multipoint adaptive delay compensation circuit for low skew clock distribution. Electronics Letters, 31(23):1996–1998, November 1995.CrossRefGoogle Scholar
  2. [197]
    M. Mizuno, M. Yamashina, K. Furuta, H. Igura, and H. Abiko et al. A GHz MOS adaptive pipeline technique using MOS current-mode logic. IEEE J. Solid-State Circuits, 31(6):784–791, June 1996.CrossRefGoogle Scholar
  3. [198]
    E. W. Justh and F. J. Kub. Analog CMOS continuous-time tapped delay-line circuit. Electronics Letters, 31(21):1793–1794, October 1995.CrossRefGoogle Scholar

Control and Autonomous Systems

  1. [199]
    Y. Harata, N. Ohta, K. Hayakawa, T. Shigematsu, and Y. Kita. A fuzzy inference LSI for an automotive control. IEICE T. Electronics, E76C(12):1780–1781, December 1993.Google Scholar
  2. [200]
    G. Jackson and A. F. Murray. Competence acquisition in an autonomous mobile robot using hardware neural techniques. In Adv. Neural Information Processing Systems, volume 8, pages 1031–1037. MIT Press, Cambridge, MA, 1996.Google Scholar

High-Energy Physics

  1. [201]
    T. Lindblad, C. S. Lindsey, F. Block, and A. Jayakumar. Using software and hardware neural networks in a Higgs search. Nucl Inst A, 356(2–3):498–506, March 1995.CrossRefGoogle Scholar
  2. [202]
    C. S. Lindsey, T. Lindblad, G. Sekhniaidze, G. Székely, and M. Minerskjold. Experience with the IBM ZISC036 neural-network chip. Int J. Modern Phys. C, 6(4):579–584, August 1995.CrossRefGoogle Scholar
  3. [203]
    G. Anzellotti, R. Battiti, I. Lazzizzera, G. Soncini, A. Zorat, et al. TOTEM — a highly parallel chip for triggering applications with inductive learning based on the reactive tabu search. Int J. Modern Phys. C, 6(4):555–560, August 1995.CrossRefGoogle Scholar

Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

  • Gert Cauwenberghs

