A Runtime System for Dynamic DAG Programming

  • Min-You Wu
  • Wei Shu
  • Yong Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1800)


This paper describes a runtime system for dynamic DAG execution. A large DAG representing an application program can be executed on a parallel system without consuming a large amount of memory. The DAG scheduling algorithm has been parallelized so that it scales to large systems, and inaccurate estimates of task execution times and communication times can be tolerated. An implementation of this parallel incremental system demonstrates the feasibility of the approach, and preliminary results show that it outperforms other approaches.
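To illustrate the incremental idea behind such a system, the following is a minimal sketch (not the authors' implementation): rather than materializing and scheduling the entire DAG at once, only a bounded window of ready tasks is scheduled at a time, so memory use stays proportional to the window rather than to the whole graph. The function name, data layout, and `window` parameter are illustrative assumptions.

```python
from collections import deque

def run_dag_incrementally(tasks, deps, window=4):
    """Execute a task DAG while scheduling at most `window` ready
    tasks per step, so the full graph never needs to be expanded
    in memory at once (illustrative sketch, not the paper's system).

    tasks: dict mapping task id -> zero-argument callable
    deps:  dict mapping task id -> set of prerequisite task ids
    """
    # Count unfinished prerequisites for every task.
    pending = {t: len(deps.get(t, ())) for t in tasks}
    # Reverse edges: which tasks are waiting on each task.
    waiters = {}
    for t, ps in deps.items():
        for p in ps:
            waiters.setdefault(p, []).append(t)

    ready = deque(t for t, n in pending.items() if n == 0)
    order = []
    while ready:
        # Schedule only a bounded batch of the currently ready tasks.
        batch = [ready.popleft() for _ in range(min(window, len(ready)))]
        for t in batch:
            tasks[t]()  # run the task
            order.append(t)
            for w in waiters.get(t, ()):  # release successors
                pending[w] -= 1
                if pending[w] == 0:
                    ready.append(w)
    return order
```

Any valid topological order may be produced; the point is that the scheduler's working set is bounded by the window size plus the ready queue, not by the DAG size.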







Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Min-You Wu¹
  • Wei Shu¹
  • Yong Chen²
  1. Department of ECE, University of New Mexico, USA
  2. Department of ECE, University of Central Florida, USA
