
Abstract

The PVM development team continues to pursue distributed virtual machine research. Today that research revolves around Harness, the follow-on to the PVM project. Every three years the team chooses a new direction to explore. This year marks the beginning of a new cycle, and this talk will describe the new directions and software that the PVM/Harness research team will be developing over the next few years.

The first direction involves the use of Harness technology in a DOE project to develop a scalable Linux OS suited to petascale computers; Harness distributed control will be leveraged to increase the fault tolerance of such an OS. The second direction involves the use of the Harness runtime environment in the new Open MPI software project. Open MPI is an integration of several previous MPI implementations, including LAM/MPI, the LA-MPI package, and the Fault Tolerant MPI (FT-MPI) from the Harness project. The third research direction, called the "Harness Workbench", will investigate the creation of a unified and adaptive application development environment across diverse computing platforms. This research effort will leverage the dynamic plug-in technology developed in our Harness research, sketched briefly below. Each of these three research efforts will be described in detail.
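
To make the notion of dynamic plug-in technology concrete, the following is a minimal sketch of a plug-in host of the kind the Harness kernel embodies: components are loaded into, and unloaded from, a running process by name. This is illustrative Java only, not the actual Harness API; the Plugin interface and PluginHost class are hypothetical names.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical plug-in contract; the real Harness kernel defines its own.
    interface Plugin {
        void start();
        void stop();
    }

    // Minimal host that loads plug-ins by class name via reflection,
    // mirroring the idea of extending a running virtual machine kernel
    // without restarting it.
    public class PluginHost {
        private final Map<String, Plugin> loaded = new HashMap<>();

        public void load(String className) throws Exception {
            Plugin p = (Plugin) Class.forName(className)
                                     .getDeclaredConstructor()
                                     .newInstance();
            p.start();
            loaded.put(className, p);
        }

        public void unload(String className) {
            Plugin p = loaded.remove(className);
            if (p != null) {
                p.stop();
            }
        }
    }

In Harness the same idea operates across a distributed virtual machine, with plug-ins installed into remote kernels at run time; that is what makes an adaptive, per-platform development environment such as the Harness Workbench plausible.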

Finally, the talk will describe the latest news on DOE's National Leadership Computing Facility, which will house a 100 TF Cray system at ORNL and an IBM Blue Gene system at Argonne National Laboratory. We will describe the scientific missions of this facility and the new concept of "computational end stations" being pioneered by the Facility.



References

  1. Geist, G., et al.: Harness (2003), www.csm.ornl.gov/harness
  2. Kurzyniec, D., et al.: Towards Self-Organizing Distributed Computing Frameworks: The H2O Approach. International Journal of High Performance Computing (2003)
  3. Scott, S., et al.: MOLAR: Modular Linux and Adaptive Runtime Support for High-end Computing Operating and Runtime Systems (2005), http://forge-fre.ornl.gov/molar
  4. Gabriel, E., et al.: Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation. In: Proceedings, 11th European PVM/MPI Users' Group Meeting, Budapest, Hungary (September 2004), http://www.open-mpi.org
  5. Nichols, J., et al.: National Leadership Computing Facility (2005), www.ccs.ornl.gov

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Al Geist, Oak Ridge National Laboratory, Oak Ridge, USA
