A Graph-Oriented Task Manager for Small Multiprocessor Systems

  • Xavier Verians
  • Jean-Didier Legat
  • Jean-Jacques Quisquater
  • Benoit Macq
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1685)


A task manager that dynamically decodes the data-dependent task graph is a key component of general multiprocessor systems. The emergence of small-scale parallel systems for multimedia and general-purpose applications requires the extraction of complex parallelism patterns, and the small system size allows task generation and synchronization to be centralized. This paper proposes such a task manager. It uses a structured representation of the task dependence graph to issue and synchronize tasks. We describe several optimizations that extract more parallelism, discuss software/hardware implementation issues, and show that the manager exploits parallelism efficiently in applications with complex parallelism patterns.
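As a rough illustration of the kind of mechanism the abstract describes, dynamically issuing a task once all of its predecessors in the dependence graph have completed, here is a minimal sequential sketch. All names and structure are hypothetical assumptions for illustration, not the paper's actual design:

```python
from collections import deque

class TaskManager:
    """Centralized task manager sketch (hypothetical, not the paper's design):
    a task becomes ready when its count of unfinished predecessors reaches zero."""

    def __init__(self):
        self.actions = {}      # task name -> callable to execute
        self.pending = {}      # task name -> number of unfinished predecessors
        self.successors = {}   # task name -> names of tasks that depend on it

    def add_task(self, name, action, deps=()):
        self.actions[name] = action
        self.pending[name] = len(deps)
        self.successors.setdefault(name, [])
        for d in deps:
            self.successors.setdefault(d, []).append(name)

    def run(self):
        # Issue all tasks with no outstanding dependencies; on completion,
        # decrement each successor's counter and issue it when it hits zero.
        ready = deque(t for t, n in self.pending.items() if n == 0)
        order = []
        while ready:
            t = ready.popleft()
            self.actions[t]()
            order.append(t)
            for s in self.successors[t]:
                self.pending[s] -= 1
                if self.pending[s] == 0:
                    ready.append(s)
        return order

# Usage: a diamond-shaped dependence graph a -> {b, c} -> d.
mgr = TaskManager()
log = []
for name, deps in [("a", ()), ("b", ("a",)), ("c", ("a",)), ("d", ("b", "c"))]:
    mgr.add_task(name, lambda n=name: log.append(n), deps)
order = mgr.run()  # "a" is issued first, "d" only after both "b" and "c"
```

A real implementation would dispatch ready tasks to processors concurrently and update the counters under synchronization; the sketch only shows the dependence-counting idea.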


Keywords: parallelism, dependence graph, synchronization, multiprocessors



Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Xavier Verians (1)
  • Jean-Didier Legat (1)
  • Jean-Jacques Quisquater (1)
  • Benoit Macq (2)

  1. Microelectronics Laboratory, Université Catholique de Louvain, Louvain, Belgium
  2. Telecommunications Laboratory, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
