Abstract
Commercial HPC applications are often run on clusters that use the Microsoft Windows operating system and need an MPI implementation that runs efficiently in the Windows environment. The MPI developer community, however, is more familiar with the issues involved in implementing MPI in a Unix environment. In this paper, we discuss some of the differences in implementing MPI on Windows and Unix, particularly with respect to issues such as asynchronous progress, process management, shared-memory access, and threads. We describe how we implement MPICH2 on Windows and exploit these Windows-specific features while still sharing a large common code base with the Unix version. We also present performance results comparing MPICH2 on Unix and Windows on the same hardware. For zero-byte MPI messages, we measured excellent shared-memory latencies of 240 and 275 nanoseconds on Unix and Windows, respectively.
This work was supported in part by a grant from Microsoft Corp. and in part by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357.
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Krishna, J., Balaji, P., Lusk, E., Thakur, R., Tillier, F. (2010). Implementing MPI on Windows: Comparison with Common Approaches on Unix. In: Keller, R., Gabriel, E., Resch, M., Dongarra, J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2010. Lecture Notes in Computer Science, vol 6305. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15646-5_17
DOI: https://doi.org/10.1007/978-3-642-15646-5_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15645-8
Online ISBN: 978-3-642-15646-5
eBook Packages: Computer Science (R0)