Atredia: A Mapping Environment for Dynamic Tree-Structured Problems

  • Angela Sodan


Problems with dynamic tree-structured behavior are usually highly irregular with respect to the shape of the potential process tree. Such problems require a special mapping onto a parallel machine, with appropriate partitioning of the tree as well as dynamic load balancing. Atredia is an environment providing several tools for mapping dynamic tree-structured problems: a granularity controller (for partitioning), a load balancer, a scheduler, and a profiler. Atredia's innovative features are its very use of explicit granularity control, its support for selecting from a set of given granularity-control and load-balancing strategies and parameterizing them according to the characteristics of the application at hand, and its systematic, formalized approach. The latter is realized by performing classifications and calculations based on a model of the application and the machine, and by obtaining the dynamic behavior characteristics of the specific application via profiling. One of Atredia's main aims is applicability to large real-life problems.
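To make the granularity-control idea concrete, the sketch below shows one common technique for tree-structured parallelism: spawn tasks only near the root of the tree and fall back to sequential execution below a depth cutoff, so that fine-grained subtrees do not pay task-creation overhead. This is an illustrative sketch only, not Atredia's actual implementation; the `CUTOFF_DEPTH` parameter and the `tree_sum` example are hypothetical stand-ins for a strategy that Atredia would select and parameterize from its application model and profiling data.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical granularity parameter: below this depth, subtrees
# run sequentially in the calling worker (no new tasks are created).
CUTOFF_DEPTH = 2

def tree_sum(node, depth=0, pool=None):
    """Sum the values of a binary tree (value, left, right),
    spawning parallel tasks only for coarse-grained subtrees near the root."""
    if node is None:
        return 0
    value, left, right = node
    if pool is None or depth >= CUTOFF_DEPTH:
        # Fine-grained region: recurse sequentially to avoid task overhead.
        return value + tree_sum(left, depth + 1) + tree_sum(right, depth + 1)
    # Coarse-grained region: offload one subtree as a task,
    # process the other in the current thread of control.
    future = pool.submit(tree_sum, left, depth + 1, pool)
    right_sum = tree_sum(right, depth + 1, pool)
    return value + future.result() + right_sum

if __name__ == "__main__":
    # A small tree:   1
    #                / \
    #               2   3
    tree = (1, (2, None, None), (3, None, None))
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(tree_sum(tree, 0, pool))  # prints 6
```

In a system like Atredia, the cutoff would not be a fixed constant: the profiler's measurements of subtree sizes and the machine model would determine where spawning stops being worthwhile.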







Copyright information

© Springer Science+Business Media Dordrecht 1995

Authors and Affiliations

  • Angela Sodan
    1. GMD Institute for Computer Architecture and Software Technology, Berlin, Germany
