
Using on-line arithmetic and reconfiguration for neuroprocessor implementation

  • Artificial Neural Nets Simulation and Implementation
  • Conference paper
Engineering Applications of Bio-Inspired Artificial Neural Networks (IWANN 1999)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1607)


Abstract

Artificial neural networks can solve complex problems such as time series prediction, handwritten pattern recognition, or speech processing. Although software simulations are essential when studying a new algorithm, they cannot always meet the real-time constraints required by practical applications. Consequently, hardware implementations are of crucial importance.

The appearance of fast reconfigurable FPGA circuits opens new paths for the design of neuroprocessors. A learning algorithm is divided into distinct steps, each associated with a specific FPGA configuration. The training process then alternates between computing and reconfiguration stages. Such a method leads to optimal use of hardware resources.
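
To make the alternation concrete, the sketch below models one possible control loop for this paradigm: each algorithmic step is tied to its own FPGA configuration, loaded just before the step executes, with pruning given a configuration of its own. The phase names and the load_bitstream/run_phase helpers are hypothetical illustrations; the paper only states that each step of the learning algorithm is associated with a specific configuration.

```python
# Hypothetical sketch of the compute/reconfigure alternation. The phase
# names and helper functions are illustrative, not the paper's interface.

TRAINING_PHASES = ["forward_pass", "backpropagation", "weight_update"]

def load_bitstream(phase):
    """Reconfigure the FPGA with the logic for one algorithmic step (stub)."""
    print(f"reconfiguring FPGA: {phase}")

def run_phase(phase, patterns):
    """Run the currently loaded configuration over the training set (stub)."""
    print(f"  computing {phase} on {len(patterns)} patterns")

def train(patterns, epochs, prune_every=5):
    for epoch in range(epochs):
        for phase in TRAINING_PHASES:
            load_bitstream(phase)        # reconfiguration stage
            run_phase(phase, patterns)   # computing stage
        if (epoch + 1) % prune_every == 0:
            load_bitstream("pruning")    # pruning as its own configuration
            run_phase("pruning", patterns)

train(patterns=list(range(64)), epochs=10)
```

Because only one phase's logic resides on the chip at a time, each configuration can use the full device, which is the source of the resource savings claimed above.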

This paradigm is applied to the design of a neuroprocessor implementing multilayer perceptrons with on-chip training and pruning. All arithmetic operations are carried out with on-line operators. We also describe the principles of the hardware architecture, focusing in particular on the pruning mechanisms.
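
On-line operators consume and produce operands digit-serially, most significant digit first, using a redundant signed-digit representation so that each result digit can be emitted after a small constant on-line delay. The paper does not reproduce its operators here; as a minimal illustration of the underlying arithmetic, the sketch below implements textbook carry-free radix-2 signed-digit addition (digits in {-1, 0, 1}), in which each sum digit depends only on input digits up to two positions less significant, so digits could stream MSD-first through hardware. The encoding and function names are assumptions for illustration, not the paper's operator design.

```python
# Illustrative radix-2 signed-digit addition, not the paper's operator design.

def sd_value(digits):
    """Value of a radix-2 signed-digit fraction; digits[k] weighs 2**-(k+1)."""
    return sum(d * 2.0 ** -(k + 1) for k, d in enumerate(digits))

def online_sd_add(a, b):
    """Carry-free addition of two equal-length signed-digit fractions.

    Each sum digit depends only on input digits up to two positions less
    significant (the on-line delay), so the result could be produced
    serially, most significant digit first. Returns one integer digit
    (weight 2**0) followed by len(a) fractional digits, all in {-1, 0, 1}.
    """
    n = len(a)
    p = [x + y for x, y in zip(a, b)] + [0]  # position sums in [-2, 2], padded
    t = [0] * (n + 1)  # transfer out of position i, weight 2**-i
    w = [0] * n        # interim digit at position i, weight 2**-(i+1)
    for i in range(n):
        if p[i] >= 2 or (p[i] == 1 and p[i + 1] >= 0):
            t[i], w[i] = 1, p[i] - 2
        elif p[i] <= -2 or (p[i] == -1 and p[i + 1] < 0):
            t[i], w[i] = -1, p[i] + 2
        else:
            t[i], w[i] = 0, p[i]
    # Each sum digit combines an interim digit with the transfer from the
    # position below; the choice of t guarantees the sum stays in {-1, 0, 1}.
    return [t[0]] + [w[i] + t[i + 1] for i in range(n)]

a = [1, 0, -1, 1, 0, 1]   # 0.453125
b = [0, 1, 1, -1, 1, 0]   # 0.34375
s = online_sd_add(a, b)
assert s[0] + sd_value(s[1:]) == sd_value(a) + sd_value(b)
```

The two-digit lookahead is what the on-line arithmetic literature calls the on-line delay; multiplication and division admit similar MSD-first digit recurrences with somewhat larger delays.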




Editor information

José Mira, Juan V. Sánchez-Andrés


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Beuchat, J.-L., Sanchez, E. (1999). Using on-line arithmetic and reconfiguration for neuroprocessor implementation. In: Mira, J., Sánchez-Andrés, J.V. (eds) Engineering Applications of Bio-Inspired Artificial Neural Networks. IWANN 1999. Lecture Notes in Computer Science, vol 1607. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0100479


  • DOI: https://doi.org/10.1007/BFb0100479

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66068-2

  • Online ISBN: 978-3-540-48772-2
