
Using FPGAs to implement a reconfigurable highly parallel computer

  • Conference paper

Field-Programmable Gate Arrays: Architecture and Tools for Rapid Prototyping (FPL 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 705)

Abstract

With the arrival of large Field Programmable Gate Arrays (FPGAs) it has become possible to build an entire computer using only FPGAs and memory. In this paper we share some experience from building a highly parallel computer on this principle. Even though today's FPGAs are of considerable size, each processor must be kept relatively simple if a highly parallel computer is to be constructed from them. Based on our experience with other parallel computers and thorough studies of the intended applications, we believe it is possible to build very powerful and efficient computers from bit-serial processing elements under SIMD (Single Instruction stream, Multiple Data streams) control.
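To make the bit-serial SIMD idea concrete, the following is a minimal software sketch (our illustration, not code from the paper): each processing element is little more than a one-bit ALU with a carry flip-flop, and a single control unit broadcasts the same operation to all PEs while operands are streamed LSB-first from each PE's private memory. The names BitSerialPE and simd_add are assumptions made for this sketch.

```python
# Hedged sketch of a bit-serial SIMD processor array (illustrative only).

class BitSerialPE:
    """One-bit processing element: a full adder with a carry flip-flop."""

    def __init__(self):
        self.carry = 0

    def add_bit(self, a, b):
        """Consume one bit of each operand (LSB first), emit one sum bit."""
        s = a ^ b ^ self.carry
        self.carry = (a & b) | (a & self.carry) | (b & self.carry)
        return s


def simd_add(a_words, b_words, width=8):
    """All PEs obey the same instruction stream: add two 'width'-bit
    operands, one bit per clock cycle, each PE on its own local data."""
    pes = [BitSerialPE() for _ in a_words]
    results = [0] * len(a_words)
    for cycle in range(width):               # common control: same cycle for every PE
        for i, pe in enumerate(pes):         # each PE processes its own data stream
            a_bit = (a_words[i] >> cycle) & 1
            b_bit = (b_words[i] >> cycle) & 1
            results[i] |= pe.add_bit(a_bit, b_bit) << cycle
    return results


if __name__ == "__main__":
    print(simd_add([3, 10, 250], [5, 7, 5]))   # -> [8, 17, 255]
```

The point of the model is that each PE needs only a few gates and one register, which is what makes it plausible to fit many of them, together with their common control, into FPGA circuits.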

A major benefit of using FPGAs is that different architectural variations can easily be tested and evaluated on real applications. In the primary application area, artificial neural networks, the gains from extensions such as bit-serial multipliers or counters can be determined quickly. A concrete implementation of a processor array, using Xilinx FPGAs, is described in this paper.
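As an illustration of the kind of extension being evaluated, here is a hedged shift-and-add model of a bit-serial multiply-accumulate step for a neuron's weighted sum (the function name and parameters are our assumptions, not the paper's): the weight is held in parallel inside the PE while the input value arrives one bit per cycle, LSB first.

```python
# Illustrative model of a bit-serial multiplier extension (not the paper's circuit):
# each arriving input bit conditionally adds the weight, shifted to that bit's
# position, into an accumulator.

def bit_serial_mac(inputs_and_weights, width=8):
    """Weighted sum for one neuron, sum(w * x), with each x fed bit-serially."""
    acc = 0
    for x, w in inputs_and_weights:
        for cycle in range(width):        # one input bit per clock cycle
            if (x >> cycle) & 1:
                acc += w << cycle         # add the weight aligned to this bit position
    return acc


if __name__ == "__main__":
    # two synapses: 3*4 + 5*2 = 22
    print(bit_serial_mac([(3, 4), (5, 2)]))
```

Because such a multiplier costs extra logic in every PE, prototyping it in the FPGA array is a cheap way to measure whether the speed-up on real network workloads justifies the added area.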

To obtain efficient utilization and high performance from the FPGA circuits, signal flow plays an important role. As the current version of the Xilinx EDA software does not support this design aspect, the signal-flow design has to be done by hand. The processing elements are simple and regular, which makes them easy to implement with the XACT editor. This gives high performance, up to 40–50 MHz.



Author information

Authors

A. Linde, T. Nordström and M. Taveniku

Editor information

Herbert Grünbacher, Reiner W. Hartenstein


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Linde, A., Nordström, T., Taveniku, M. (1993). Using FPGAs to implement a reconfigurable highly parallel computer. In: Grünbacher, H., Hartenstein, R.W. (eds) Field-Programmable Gate Arrays: Architecture and Tools for Rapid Prototyping. FPL 1992. Lecture Notes in Computer Science, vol 705. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57091-8_45


  • DOI: https://doi.org/10.1007/3-540-57091-8_45


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57091-2

  • Online ISBN: 978-3-540-47902-4

  • eBook Packages: Springer Book Archive
