Bridging the Gap between Compilation and Synthesis in the DEFACTO System

  • Pedro Diniz
  • Mary Hall
  • Joonseok Park
  • Byoungro So
  • Heidi Ziegler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2624)

Abstract

The DEFACTO project – a Design Environment For Adaptive Computing TechnOlogy – is a system that maps computations, expressed in high-level languages such as C, directly onto FPGA-based computing platforms. The major challenges are the inherent flexibility of FPGA hardware, the capacity and timing constraints of the target FPGA devices, and the accompanying speed-area trade-offs. To address these, DEFACTO combines parallelizing compiler technology with behavioral VHDL synthesis tools, obtaining the complementary advantages of the compiler’s high-level analyses and transformations and of synthesis’ binding, allocation and scheduling of low-level hardware resources. To guide the compiler in the search for a good solution, we introduce the notion of balance between the rate at which data is fetched from memory and the rate at which it is consumed by the computation, combined with estimates obtained from behavioral synthesis. Since FPGA-based designs offer the potential for optimizing memory-related operations, we have also incorporated into the compiler analysis the ability to exploit parallel memory accesses and to customize memory access protocols.
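As an illustration of the balance notion described above, the sketch below is our own and is not taken from the paper; DEFACTO’s actual metric and transformations may differ. It contrasts a naive FIR-style loop, which fetches two words from external memory per multiply-accumulate, with a version in which scalar replacement keeps the coefficients and a sliding input window in registers, so only one word is fetched per output and the memory fetch rate is brought closer to the rate at which the datapath consumes data.

    /* Illustrative sketch only, not DEFACTO code. "Balance" is taken here
     * informally as the ratio of memory words fetched per iteration to
     * arithmetic operations per iteration. */

    #define N    1024
    #define TAPS 4

    /* Naive loop: each inner iteration reloads in[i+j] and coeff[j] from
     * external memory -> 2 loads per multiply-accumulate, so the fetch
     * rate far exceeds the rate at which the datapath consumes data. */
    void fir_naive(const int *in, const int *coeff, int *out) {
        for (int i = 0; i < N - TAPS; i++) {
            int sum = 0;
            for (int j = 0; j < TAPS; j++)
                sum += in[i + j] * coeff[j];   /* 2 loads, 1 MAC */
            out[i] = sum;
        }
    }

    /* Transformed loop: coefficients are held in registers and the input
     * window is reused across iterations (scalar replacement), so only one
     * new word is fetched per output -> roughly 1 load per 4 MACs, a much
     * better match between memory bandwidth and datapath throughput. */
    void fir_balanced(const int *in, const int *coeff, int *out) {
        int c0 = coeff[0], c1 = coeff[1], c2 = coeff[2], c3 = coeff[3];
        int w0 = in[0], w1 = in[1], w2 = in[2], w3 = in[3];
        for (int i = 0; i < N - TAPS; i++) {
            out[i] = w0 * c0 + w1 * c1 + w2 * c2 + w3 * c3;  /* 4 MACs */
            /* slide the reuse window: one new fetch per iteration */
            w0 = w1; w1 = w2; w2 = w3; w3 = in[i + TAPS];
        }
    }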

Keywords

Memory Access, Loop Nest, External Memory, Loop Body, Synthesis Tool



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Pedro Diniz
  • Mary Hall
  • Joonseok Park
  • Byoungro So
  • Heidi Ziegler

  University of Southern California / Information Sciences Institute, Marina del Rey
