Data I/O Minimization for Loops on Limited Onchip Memory Processors

  • Lei Wang
  • Santosh Pande
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1863)


Due to significant advances in VLSI technology, ‘mega-processors’ built with a large number of transistors have become a reality. These processors typically provide multiple functional units, which allow exploitation of parallelism. To satisfy the data demands associated with this parallelism, the processors provide a limited amount of on-chip memory; the amount provided is quite limited due to the higher area and power requirements associated with it. Even though limited, such on-chip memory is a very valuable resource in the memory hierarchy. An important use of on-chip memory is to hold the instructions of short loops along with the associated data for very fast computation. Such schemes are very attractive on embedded processors where, due to the presence of dedicated on-chip hardware (such as very fast multipliers and shifters) and extremely fast access to on-chip data, the computation time of such loops is extremely small, meeting almost all real-time demands. The biggest bottleneck to performance in these cases is off-chip accesses; thus, compilers must carefully analyze references to identify good candidates for promotion to on-chip memory. In our earlier work [6], we formulated this problem as a 0/1 knapsack and proposed a heuristic solution that yields good promotion candidates. That analysis was limited to a single loop nest. When we attempted to extend this framework to multiple loop nests (intra-procedurally), we realized that it is not only important to identify good candidates for promotion, but a careful restructuring of loops must also be undertaken before performing promotion, since the data I/O of loading and storing values to on-chip memory poses a significant bottleneck.
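The 0/1 knapsack view of promotion mentioned above can be sketched roughly as follows. This is an illustrative sketch only, not the authors' actual algorithm: the candidate tuples, the `benefit` metric, and the greedy benefit-density selection are all assumptions for exposition (the paper's own heuristic involves metrics such as a closeness factor, whose details are not given here).

```python
def select_promotions(candidates, capacity):
    """Greedy benefit-density heuristic for a 0/1 knapsack.

    candidates: list of (name, size_bytes, benefit) tuples, where
                benefit stands in for the estimated savings from
                serving that array reference out of on-chip memory.
    capacity:   on-chip memory budget in bytes.
    Returns the names of the references chosen for promotion.
    """
    # Rank candidates by benefit per byte of on-chip memory consumed.
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    chosen, used = [], 0
    for name, size, benefit in ranked:
        if used + size <= capacity:   # take a candidate only if it fits
            chosen.append(name)
            used += size
    return chosen

# Hypothetical candidates with an 8 KB on-chip budget.
arrays = [("A", 4096, 900), ("B", 2048, 700), ("C", 8192, 1000)]
print(select_promotions(arrays, 8192))  # prints ['B', 'A']
```

A greedy density heuristic like this is fast but not optimal for 0/1 knapsack in general; it merely illustrates the kind of candidate selection the problem formulation calls for.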


Keywords: Memory Hierarchy, Loop Fusion, Program Dependence Graph, Multiple Functional Unit, Closeness Factor




  1. G. Gao, R. Olsen, V. Sarkar, and R. Thekkath. Collective loop fusion for array contraction. In Languages and Compilers for Parallel Computing (LCPC), 1992.
  2. K. Kennedy and K. McKinley. Maximizing loop parallelism and improving data locality via loop fusion and distribution. In Languages and Compilers for Parallel Computing (LCPC), 1993.
  3. I. Kodukula, N. Ahmed, and K. Pingali. Data centric multi-level blocking. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 346–357, 1997.
  4. N. Mitchell, K. Hogstedt, L. Carter, and J. Ferrante. Quantifying the multi-level nature of tiling interactions. International Journal of Parallel Programming, 26(6):641–670, 1998.
  5. R. Schreiber and J. Dongarra. Automatic blocking of nested loops. Technical report, RIACS, NASA Ames Research Center, and Oak Ridge National Laboratory, May 1990.
  6. A. Sundaram and S. Pande. An efficient data partitioning method for limited memory embedded systems. In ACM SIGPLAN Workshop on Languages, Compilers and Tools for Embedded Systems (LCTES), in conjunction with PLDI ’98, Montreal, Canada, Springer-Verlag, pages 205–218, 1998.
  7. M. Wolfe. Iteration space tiling for memory hierarchies. In Third SIAM Conference on Parallel Processing for Scientific Computing, December 1987.

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Lei Wang (1, 2)
  • Santosh Pande (1, 2)
  1. Compiler Research Lab, USA
  2. Department of Electrical & Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, USA
