Analog VLSI Stochastic Perturbative Learning Architectures

Chapter in Neuromorphic Systems Engineering

Part of the book series: The Springer International Series in Engineering and Computer Science (SECS, volume 447)

Abstract

Learning and adaptation are central to the design of neuromorphic VLSI systems that perform robustly in variable and unpredictable environments.
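The technique named in the title admits a compact illustration. Below is a minimal sketch of stochastic parallel perturbative (error-descent) learning in Python; the function name `stochastic_error_descent`, its parameters, and the quadratic example are illustrative assumptions, not code from the chapter. All parameters receive simultaneous random ±σ perturbations, the induced change in a scalar error measure is observed, and each parameter is updated against that change, which descends the gradient in expectation without ever computing it explicitly.

```python
import numpy as np

def stochastic_error_descent(error_fn, theta, lr=0.01, sigma=1e-3, steps=1000):
    """Model-free gradient descent by parallel random perturbation.

    Hypothetical sketch: `error_fn` maps a parameter vector to a scalar
    error. Only two error evaluations are needed per step, regardless of
    dimension, so no explicit gradient circuitry is required.
    """
    for _ in range(steps):
        # Perturb all parameters at once with random Bernoulli +/- sigma steps.
        pi = sigma * np.random.choice([-1.0, 1.0], size=theta.shape)
        # Two-sided finite difference of the error along the perturbation.
        delta_e = (error_fn(theta + pi) - error_fn(theta - pi)) / 2.0
        # Move each parameter against the observed error change; for
        # Bernoulli perturbations, delta_e * pi / sigma**2 estimates the
        # gradient in expectation.
        theta -= lr * delta_e * pi / sigma**2
    return theta

# Example: minimize a simple quadratic error surface.
target = np.array([0.5, -1.2, 2.0])
theta = stochastic_error_descent(lambda w: np.sum((w - target) ** 2),
                                 theta=np.zeros(3))
```

The appeal of this class of algorithms for analog VLSI is that the cost per update is two global error measurements rather than per-parameter gradient computation, so the circuitry scales gracefully with network size.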





Copyright information

© 1998 Kluwer Academic Publishers

About this chapter

Cite this chapter

Cauwenberghs, G. (1998). Analog VLSI Stochastic Perturbative Learning Architectures. In: Lande, T.S. (eds) Neuromorphic Systems Engineering. The Springer International Series in Engineering and Computer Science, vol 447. Springer, Boston, MA. https://doi.org/10.1007/978-0-585-28001-1_18


  • DOI: https://doi.org/10.1007/978-0-585-28001-1_18

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-0-7923-8158-7

  • Online ISBN: 978-0-585-28001-1

