
Minimising Contrastive Divergence with Dynamic Current Mirrors

  • Chih-Cheng Lu
  • H. Chen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5768)

Abstract

Implementing probabilistic models in Very-Large-Scale Integration (VLSI) is attractive for implantable biomedical devices, where on-chip sensor fusion could improve performance. However, hardware non-idealities introduce training errors that hinder optimal modelling through on-chip adaptation. This paper investigates the feasibility of using dynamic current mirrors to implement a simple and precise training circuit. The precision required for training the Continuous Restricted Boltzmann Machine (CRBM) is first identified. A training circuit based on accumulators formed by dynamic current mirrors is then proposed. Measurements of the accumulators fabricated in VLSI confirm the feasibility of training the CRBM on chip according to its minimising-contrastive-divergence rule.
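The minimising-contrastive-divergence rule mentioned above updates each weight using the difference between correlations measured on the data and on a one-step reconstruction. As an illustrative sketch only (not the paper's analogue circuit), the update can be written for a standard binary RBM; the CRBM uses continuous stochastic neurons, but its update has the same ⟨v·h⟩_data − ⟨v·h⟩_recon form. All names below (`cd1_update`, `lr`) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1):
    """One CD-1 weight update for a binary RBM (illustrative sketch).

    W  : (num_visible, num_hidden) weight matrix
    v0 : data vector clamped on the visible units
    """
    # Positive phase: hidden activation probabilities given the data.
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one-step reconstruction of the visible units,
    # then the hidden probabilities for that reconstruction.
    v1_prob = sigmoid(h0 @ W.T)
    h1_prob = sigmoid(v1_prob @ W)
    # Contrastive-divergence gradient estimate:
    # <v h> under the data minus <v h> under the reconstruction.
    dW = np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob)
    return W + lr * dW
```

In the paper's setting, the accumulation implied by `lr * dW` is what the dynamic-current-mirror accumulators realise in analogue hardware; the sketch only shows the mathematical form of the rule.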

Keywords

Minimising Contrastive Divergence · Dynamic Current Mirrors · Probabilistic Model · Boltzmann Machine · On-chip Training



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Chih-Cheng Lu (1)
  • H. Chen (1)
  1. Department of Electrical Engineering, National Tsing-Hua University, Hsin-Chu, Taiwan
