Compiler Optimizations for Scalable Parallel Systems

Languages, Compilation Techniques, and Run Time Systems

  • Santosh Pande
  • Dharma P. Agrawal

Part of the Lecture Notes in Computer Science book series (LNCS, volume 1808)

Table of contents

  1. Front Matter
    Pages I-XIX
  2. Languages

    1. Ken Kennedy, Charles Koelbel
      Pages 3-43
    2. Jean-Luc Gaudiot, Tom DeBoni, John Feo, Wim Böhm, Walid Najjar, Patrick Miller
      Pages 45-72
    3. Dennis Gannon, Peter Beckman, Elizabeth Johnson, Todd Green, Mike Levine
      Pages 73-107
  3. Analysis

    1. Alain Darte, Yves Robert, Frédéric Vivien
      Pages 141-171
    2. Paul Feautrier
      Pages 173-219
    3. Zhiyuan Li, Junjie Gu, Gyungho Lee
      Pages 221-246
    4. Peng Tu, David Padua
      Pages 247-281
  4. Communication Optimizations

    1. Anant Agarwal, David Kranz, Rajeev Barua, Venkat Natarajan
      Pages 285-338
    2. Kuei-Ping Shih, Chua-Huang Huang, Jang-Ping Sheu
      Pages 339-383
    3. Vladimir Kotlyar, David Bau, Induprakas Kodukula, Keshav Pingali, Paul Stodghill
      Pages 385-411
    4. Daniel J. Palermo, Eugene W. Hodges IV, Prithviraj Banerjee
      Pages 445-484
    5. Andrew Sohn, Yuetsu Kodama, Jui-Yuan Ku, Mitsuhisa Sato, Yoshinori Yamaguchi
      Pages 525-549
  5. Code Generation

  6. Task Parallelism, Dynamic Data Structures and Run Time Systems

    1. Sekhar Darbha, Dharma P. Agrawal
      Pages 649-682
    2. Martin C. Carlisle, Anne Rogers
      Pages 709-749
    3. Raja Das, Yuan-Shin Hwang, Joel Saltz, Alan Sussman
      Pages 751-778
  7. Back Matter
    Pages 779-779

About this book


Scalable parallel systems or, more generally, distributed memory systems offer a challenging model of computing and pose fascinating problems regarding compiler optimization, ranging from language design to run time systems. Research in this area is foundational to many challenges, from memory hierarchy optimizations to communication optimization.
This unique, handbook-like monograph assesses the state of the art in the area in a systematic and comprehensive way. The 21 coherent chapters by leading researchers provide complete and competent coverage of all relevant aspects of compiler optimization for scalable parallel systems. The book is divided into five parts on languages, analysis, communication optimizations, code generation, and run time systems. It will serve as a landmark source of education, information, and reference for students, practitioners, professionals, and researchers interested in updating their knowledge about, or active in, parallel computing.


Compiler Optimization, Distributed Memory Systems, High-Performance Computing, Parallel Algorithms, Parallel Computing, Parallelization, Program Optimization, Run Time Systems, Scalable Parallel Systems

Editors and affiliations

  • Santosh Pande
    • 1
  • Dharma P. Agrawal
    • 2
  1. College of Computing, Georgia Institute of Technology, Atlanta, USA
  2. Department of ECECS, University of Cincinnati, Cincinnati, USA

Bibliographic information

  • Copyright Information Springer-Verlag Berlin Heidelberg 2001
  • Publisher Name Springer, Berlin, Heidelberg
  • eBook Packages Springer Book Archive
  • Print ISBN 978-3-540-41945-7
  • Online ISBN 978-3-540-45403-8
  • Series Print ISSN 0302-9743