Digital VLSI Neuroprocessors
In this chapter, various design examples of digital VLSI neuroprocessors are described. The discussion of the Connected Network of Adaptive Processors (CNAPS) from Adaptive Solutions Inc. covers the entire design process of a general-purpose neuroprocessor, from the processing-element design up to the high-level architecture. The MA-16 chip and the SYNAPSE-X system from Siemens Corporation are built around a systolic computation architecture. The Ni1000 chip uses an architecture dedicated to learning a specific network model, the radial basis function (RBF) network. In addition, other types of digital VLSI neural network implementations are presented, with special emphasis on their unique characteristics.
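To make the RBF model concrete before the chip-level discussion, the following is a minimal sketch of an RBF network forward pass, the computation a dedicated chip of this class accelerates. The function name, the Gaussian basis choice, and all numeric values (prototypes, widths, output weights) are illustrative assumptions, not parameters of any particular chip.

```python
import numpy as np

def rbf_forward(x, prototypes, widths, out_weights):
    """Evaluate y = W @ phi(x), with Gaussian basis functions
    phi_j(x) = exp(-||x - c_j||^2 / (2 * s_j^2))."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared distance to each center
    phi = np.exp(-d2 / (2.0 * widths ** 2))      # hidden-layer activations
    return out_weights @ phi                     # linear output layer

# Toy example: 3 hidden units over a 2-D input, 1 output.
prototypes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
widths = np.array([0.5, 0.5, 0.5])
out_weights = np.array([[1.0, -1.0, 2.0]])

y = rbf_forward(np.array([0.1, 0.9]), prototypes, widths, out_weights)
print(y)
```

Note that the hidden layer reduces to a distance computation followed by a table-friendly exponential, which is what makes the model attractive for dedicated hardware: the distance accumulations over all prototypes are independent and map naturally onto parallel processing elements.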
Keywords: Radial Basis Function · Systolic Array · Register File · Elementary Chain · Systolic Architecture
- M. Griffin, G. Tahara, K. Knorpp, R. Pinkham, and B. Riley, “An 11-million transistor neural network execution engine,” Tech. Digest IEEE Int. Solid-State Circuits Conf., pp. 180–181, San Francisco, CA, Feb. 1991.
- U. Ramacher, J. Beichter, and N. Brüls, “Architecture of a general-purpose neural signal processor,” Proc. IEEE/INNS Int. Joint Conf. Neural Networks, vol. 1, pp. 443–446, Seattle, WA, July 1991.
- R. P. Lippmann, “A critical overview of neural network pattern classifiers,” Proc. IEEE Neural Networks for Signal Processing Workshop, pp. 266–275, Princeton, NJ, 1991.
- D. A. Orrey, D. J. Myers, and J. M. Vincent, “A high performance digital processor for implementing large artificial neural networks,” Proc. IEEE Custom Integrated Circuits Conf., pp. 16.3.1–16.3.4, San Diego, CA, May 1991.
- K. Uchimura, O. Saito, and Y. Amemiya, “An 8G connections-per-second 54 mW digital neural network chip with low-power chain-reaction architecture,” Tech. Digest IEEE Int. Solid-State Circuits Conf., pp. 134–135, San Francisco, CA, Feb. 1992.
- A. Masaki, Y. Hirai, and M. Yamada, “Neural networks in CMOS: A case study,” IEEE Circuits and Devices Magazine, pp. 12–17, July 1990.
- H. Kato, H. Yoshizawa, H. Ichiki, and K. Asakawa, “A parallel neurocomputer architecture towards billion connection updates per second,” Proc. IEEE/INNS Int. Joint Conf. Neural Networks, vol. II, pp. 47–50, San Diego, CA, June 1990.
- E. Sánchez-Sinencio and C. Lau (Eds.), Artificial Neural Networks: Paradigms, Applications, and Hardware Implementations, IEEE Press: Piscataway, NJ, 1992.
- H. J. Siegel, Interconnection Networks for Large-Scale Parallel Processing, McGraw-Hill: New York, NY, 1990.
- K. W. Przytula, W.-M. Lin, and V. K. P. Kumar, “Partitioned implementation of neural networks on mesh connected array processors,” VLSI Signal Processing IV, IEEE Press: New York, NY, pp. 106–115, Aug. 1991.
- Josephine C.-F. Chang, “An Efficient Digital VLSI Neural Processor Design for Image Processing,” Ph.D. dissertation, Signal and Image Processing Institute Report, University of Southern California, 1994.