Analog VLSI Stochastic Perturbative Learning Architectures

  • Gert Cauwenberghs
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 447)


Learning and adaptation are central to the design of neuromorphic VLSI systems that perform robustly in variable and unpredictable environments.


Keywords: Reinforcement Learning · Gradient Descent · Supervised Learning · Stochastic Approximation · Charge Pump





Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

  • Gert Cauwenberghs

