An Emulator for Exploring RaPiD Configurable Computing Architectures

  • Chris Fisher
  • Kevin Rennie
  • Guanbin Xing
  • Stefan G. Berg
  • Kevin Bolding
  • John Naegle
  • Daniel Parshall
  • Dmitriy Portnov
  • Adnan Sulejmanpasic
  • Carl Ebeling
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2147)

Abstract

The RaPiD project at the University of Washington has been studying configurable computing architectures optimized for coarse-grained data and computation units and deep computation pipelines. This research targets applications in the signal- and image-processing domain, since these applications place the greatest demands on computation and power in embedded and mobile computing, and these demands are increasing faster than Moore’s law. This paper describes the RaPiD Emulator, a system that enables the exploration of alternative configurable architectures in the context of benchmark applications running in real time. The RaPiD Emulator provides enough FPGA gates to implement large RaPiD arrays, along with a high-performance streaming memory architecture and high-bandwidth data interfaces to a host processor and external devices. Running at 50 MHz, the emulator achieves over 1 GMACs/second.
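The abstract's two performance figures fix a lower bound on the array's parallelism: sustaining over 1 GMAC/s at a 50 MHz clock requires at least 20 multiply-accumulates completed per cycle. A minimal back-of-envelope sketch (the per-cycle MAC count is derived arithmetic, not a stated parameter of the hardware):

```python
# Sanity-check the throughput claim from the abstract:
# >1 GMAC/s at 50 MHz implies >= ceil(1e9 / 50e6) parallel MACs per cycle.

CLOCK_HZ = 50_000_000                 # emulator clock rate (from the abstract)
TARGET_MACS_PER_SEC = 1_000_000_000   # "over 1 GMACs/second"

# Ceiling division via negated floor division.
min_parallel_macs = -(-TARGET_MACS_PER_SEC // CLOCK_HZ)
print(min_parallel_macs)  # 20
```

This is consistent with RaPiD arrays being built from tens of coarse-grained functional units operating in a deep pipeline.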

Keywords

Data Stream, Functional Unit, Memory Module, Address Stream, Stream Interface

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

All authors: Department of Computer Science and Engineering, University of Washington, Seattle
