
Scalable and Modular Scheduling

  • Paul Feautrier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3133)

Abstract

Scheduling a program (i.e., constructing a timetable for the execution of its operations) is one of the most powerful methods for automatic parallelization. A schedule gives a blueprint for constructing a synchronous program, suitable for an ASIC or a VLIW processor. However, constructing a schedule entails solving a large linear program. Even if one accepts the (experimental) fact that the simplex algorithm is almost always polynomial, the scheduling time grows as a large power of the program size and of the maximum nesting depth of its loops; hence the method is not scalable. This paper presents two ways of improving the situation. First, a big program can be divided into smaller units (processes) that can be scheduled separately: this is modular scheduling. Second, one can use projection methods to solve the linear programming problems incrementally, which is especially efficient when the dependence graph is sparse.
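The projection methods the abstract alludes to are typically based on Fourier–Motzkin elimination: a variable is removed from a system of affine inequalities by pairing every lower bound on it with every upper bound, yielding an equivalent system over the remaining variables. This is a minimal illustrative sketch of that elimination step (not the paper's actual scheduler; the data layout and function name are assumptions for the example):

```python
def fourier_motzkin(rows, j):
    """Project variable x_j out of a system of inequalities.

    Each row (a, b) encodes the constraint  sum_i a[i] * x[i] <= b
    with integer coefficients. Returns an equivalent system over the
    remaining variables (column j is kept in place but becomes zero).
    """
    pos, neg, zero = [], [], []
    for a, b in rows:
        if a[j] > 0:
            pos.append((a, b))      # upper bounds on x_j
        elif a[j] < 0:
            neg.append((a, b))      # lower bounds on x_j
        else:
            zero.append((a, b))     # constraints not involving x_j
    out = list(zero)
    # Each (upper bound, lower bound) pair yields one new constraint
    # in which the coefficient of x_j cancels exactly.
    for ap, bp in pos:
        for an, bn in neg:
            cp, cn = ap[j], -an[j]
            a_new = [cn * p + cp * n for p, n in zip(ap, an)]
            b_new = cn * bp + cp * bn
            out.append((a_new, b_new))
    return out
```

For example, projecting y (index 1) out of {x + y <= 4, -y <= 0, x <= 3} combines the first two constraints into x <= 4 and keeps x <= 3, so the projection onto x is x <= 3. The pairwise combination is also why sparsity matters, as the abstract notes: the number of new constraints is the product of the lower- and upper-bound counts, which stays small when each variable appears in few constraints.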

Keywords

Schedule Problem · Parallel Programming · Dependence Graph · Systolic Array · Constraint Matrix



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Paul Feautrier
    LIP, École Normale Supérieure de Lyon, Lyon Cedex 07, France
