
Task-Parallel Programming of Reconfigurable Systems

  • Markus Weinhardt
  • Wayne Luk
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2147)

Abstract

This paper presents task-parallel programming, a style of application development for reconfigurable systems. Task-parallel programming enables efficient interaction between concurrent hardware and software tasks. In particular, it supports the description of communication and computation tasks running in parallel, allowing effective implementation of designs in which the data transfer time between hardware and software components is comparable to the computation time. The approach permits precise specification of parallelism without requiring hardware design knowledge. We present language extensions for task-parallel programming, inspired by the occam and Handel languages, and describe a compilation scheme for this method whose four main stages are memory mapping, channel implementation, software generation and hardware synthesis. Our techniques have been evaluated using video applications on the RC1000-PP hardware platform.
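
The following sketch is not taken from the paper: it is a minimal, hypothetical Go program illustrating the CSP-style, channel-based task parallelism the abstract describes, with a software task streaming video frames to a concurrently running task that stands in for the FPGA computation so that data transfer and computation overlap. All identifiers (frame, swProducer, hwTask) are invented for this illustration; the paper itself proposes C-based language extensions rather than Go.

    // Hypothetical sketch: occam/CSP-style task parallelism expressed with
    // Go channels. This is an illustration of the programming model, not the
    // paper's language extensions or compilation scheme.
    package main

    import "fmt"

    // frame stands in for one video frame's pixel data.
    type frame []byte

    // swProducer models the host-software task: it fetches frames and sends
    // them over a channel, then closes the channel to signal completion.
    func swProducer(out chan<- frame, numFrames int) {
        for i := 0; i < numFrames; i++ {
            f := make(frame, 4)
            for j := range f {
                f[j] = byte(i + j) // dummy pixel values
            }
            out <- f // blocks until the receiver is ready (rendezvous-style)
        }
        close(out)
    }

    // hwTask models the FPGA-side task: it receives frames, applies a simple
    // per-pixel operation (a stand-in for the hardware computation), and
    // forwards results while the producer already prepares the next frame.
    func hwTask(in <-chan frame, out chan<- frame) {
        for f := range in {
            g := make(frame, len(f))
            for j, p := range f {
                g[j] = 255 - p // e.g. image inversion
            }
            out <- g
        }
        close(out)
    }

    func main() {
        toHW := make(chan frame)
        fromHW := make(chan frame)

        // Run the software and "hardware" tasks in parallel, connected by
        // channels, analogous to parallel tasks with channel declarations
        // in an occam/Handel-style notation.
        go swProducer(toHW, 3)
        go hwTask(toHW, fromHW)

        // The main task consumes results as they become available.
        for g := range fromHW {
            fmt.Println("processed frame:", g)
        }
    }

In the paper's setting, such channels would presumably be realised over the shared memory banks connecting the host processor and the FPGA on the RC1000-PP board rather than as in-process channels; the sketch only conveys the blocking, channel-based communication between concurrent tasks.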

Keywords

Shared Memory, Parallel Task, Memory Bank, Communicating Sequential Processes, Host Processor

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Markus Weinhardt (1)
  • Wayne Luk (2)
  1. PACT GmbH, Munich, Germany
  2. Department of Computing, Imperial College, London, UK
