Abstract
Processor virtualization is a powerful technique that enables the runtime system to carry out intelligent adaptive optimizations such as dynamic resource management. Charm++ is an early language/system that supports processor virtualization. This paper describes Adaptive MPI (AMPI), an MPI implementation and extension that supports processor virtualization. AMPI implements virtual MPI processes (VPs), several of which may be mapped to a single physical processor. AMPI includes a powerful runtime support system that exploits the degrees of freedom afforded by its ability to assign VPs to processors. With this runtime system, AMPI supports features such as automatic adaptive overlap of communication and computation and automatic load balancing. It can also support checkpointing without additional user code, as well as the ability to shrink and expand the set of processors used by a job at runtime. The paper describes AMPI, its features, benchmarks that illustrate the performance advantages and tradeoffs it offers, and application experiences.
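The core idea summarized above is that many virtual MPI processes are mapped onto fewer physical processors, and the runtime can rebalance work by reassigning VPs. The toy sketch below illustrates that mapping with a greedy least-loaded heuristic; it is a conceptual illustration only, and none of the names here are part of AMPI's actual API.

```python
# Toy sketch of processor virtualization: many virtual processes (VPs)
# are mapped onto fewer physical processors. The function name and
# strategy are illustrative, not AMPI's runtime interface.

def map_vps_to_processors(vp_loads, num_procs):
    """Greedy least-loaded mapping: place each VP on the processor
    with the smallest accumulated load, heaviest VPs first."""
    proc_load = [0.0] * num_procs
    mapping = {}
    # Assign heavier VPs first so light VPs can fill in the gaps.
    for vp, load in sorted(enumerate(vp_loads), key=lambda x: -x[1]):
        p = min(range(num_procs), key=lambda i: proc_load[i])
        proc_load[p] += load
        mapping[vp] = p
    return mapping, proc_load

if __name__ == "__main__":
    # Eight VPs with uneven computation costs, three physical processors.
    loads = [3, 1, 4, 1, 5, 9, 2, 6]
    mapping, proc_load = map_vps_to_processors(loads, 3)
    print("VP -> processor:", mapping)
    print("per-processor load:", proc_load)  # balanced near 31/3 each
```

In AMPI itself such decisions are made by the runtime's measurement-based load balancer, which migrates user-level threads between processors rather than computing a static assignment up front.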
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Huang, C., Lawlor, O., Kalé, L.V. (2004). Adaptive MPI. In: Rauchwerger, L. (ed.) Languages and Compilers for Parallel Computing. LCPC 2003. Lecture Notes in Computer Science, vol. 2958. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24644-2_20
DOI: https://doi.org/10.1007/978-3-540-24644-2_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21199-0
Online ISBN: 978-3-540-24644-2